Wednesday, September 8, 2010

Computing the “Theory of everything” (Strings)


Jack Dikian

String theory was originally developed to describe the fundamental particles and forces that make up our universe. New research, led by a team from Imperial College London, describes the unexpected discovery that string theory also seems to predict the behavior of entangled quantum particles. Because this prediction can be tested in the laboratory, researchers now have a way to put string theory itself to the test.

Before strings

String theory is the most recent attempt to reconcile quantum mechanics and general relativity. It is a leading candidate for a theory of everything: a way of describing the known fundamental forces and matter in a single, mathematically complete system.

We learned in school that matter is made of atoms, which are in turn made of just three basic components: electrons orbiting a nucleus composed of neutrons and protons. In chemistry, some of us also learned that there are generally about as many neutrons as protons in the nucleus of an atom, although there are isotopes whose nuclei contain an uneven mix of protons and neutrons.

It was quite comfortable for us to think of the atom in this way – somewhat reminiscent of planets revolving around the sun. The students with a special interest in physics went on to discover that the electron truly is a fundamental particle (it is one of a family of particles known as leptons), but that neutrons and protons are made of smaller particles, known as quarks. Quarks are, as far as we now know, also elementary particles.

The universe is made up of atoms and forces through which atoms interact.

Our current knowledge about the subatomic composition of the universe is summarized in what is known as the Standard Model of particle physics. It describes both the fundamental building blocks out of which the universe is made and the forces through which these blocks interact. There are twelve basic building blocks:

Six of these are quarks, which go by the interesting names of up, down, charm, strange, bottom and top. A proton, incidentally, is made of two up quarks and one down quark.

The other six are leptons and include the electron and its two heavier siblings, the muon and the tau, as well as three neutrinos.

There are four fundamental forces in the universe:

• Gravity,
• Electromagnetism,
• the Weak nuclear force, and
• the Strong nuclear force, also known as the colour force.


Each of these is produced by fundamental particles that act as carriers of the force. The most familiar of these is the photon, a particle of light, which is the mediator of electromagnetic forces. (This means that, for instance, a magnet attracts a nail because both objects exchange photons.) The graviton is the still-hypothetical particle associated with gravity. The strong force is carried by eight particles known as gluons. Finally, the weak force is transmitted by three particles, the W+, the W-, and the Z.
We all recognize two of these forces very well: gravity and electromagnetism. We feel the pull of gravity and use it in our day-to-day lives, and some of us have also played with magnets and appreciate their pull. The behaviour of the weak and strong nuclear forces is not something we observe in everyday life. These operate at the subatomic level and bind subatomic particles with varying degrees of strength. The strong nuclear force, for example, is many orders of magnitude stronger than gravity; if gravity's pull were that of the strong nuclear force, we would weigh many trillions of times more than we currently do.

We use what we call the Standard Model to describe the interaction of subatomic particles and forces with great success, with one notable exception - gravity. The gravitational force has proven very difficult to describe microscopically, and formulating a single model that describes both the micro and macro elements of the universe has for many years been one of the most important problems in theoretical physics. Einstein attempted to unify the general theory of relativity (macro) with electromagnetism using a single field, hoping to recover quantum theory as an approximation. A "theory of everything" is closely related to a unified field theory, and also attempts to explain all the physical constants of nature.

Strings

In the last 20 or so years string theory has emerged as a promising model in attempts to provide a complete, unified, and consistent description of the fundamental structure of our universe, another "Theory of Everything".

The basic idea is that all of the components of the Standard Model are just different manifestations of one basic element - a string. One way to think about this is to imagine an electron as a tiny loop of string rather than a single zero-dimensional point. As well as moving, the loop (string) can oscillate in different ways, and it is the way a string oscillates that determines the type of subatomic building block we recognize. So one kind of oscillation may be an electron, whilst another oscillation may be regarded as a photon. This means, if true, that:

The entire universe is made of strings

In recent years many developments have taken place, radically improving our understanding of what the theory is. String theories also require the existence of several extra, unobservable, dimensions to the universe, in addition to the usual four space-time dimensions.

Five major string theories have been formulated, the main differences between them being the number of dimensions in which the strings exist and their characteristics. In the mid-1990s a unification of all the previous superstring theories, called M-theory, was proposed; it asserts that strings are really 1-dimensional slices of a 2-dimensional membrane vibrating in 11-dimensional space.

Monday, August 9, 2010

The Rubik Cube and God's Algorithm



Jack Dikian
Sep 2006


Introduction


In 1974 Erno Rubik, an admirer of geometry and 3-D forms, created the world's most perfect puzzle. More than 30 years later the Rubik's cube is still a best-selling brainteaser. By the mid-eighties almost every child had the puzzle. Did I say "child"? I certainly was playing with one when I was about 12 or 13, with little success in solving it.

So you can imagine our astonishment when a whole first-year pure mathematics class was asked to go out and purchase the puzzle from the university co-op bookshop. I think, like others, I had always suspected that there would be a mathematical approach to solving the puzzle. After all, many real-world, albeit obscure, problems such as the four-colour map(1) and the Seven Bridges of Konigsberg(2) inspired rigorous mathematical solutions. I, of course, didn't have the mathematical repertoire – instead the brute-force method, not in the mathematical sense, was always adopted.

Background

The Rubik's Cube has a straightforward basis. Each face of the cube is covered by nine stickers, drawn from six colours, and the puzzle is solved when each face is a single solid colour. When you start to rotate the rows and columns and see the different mixes of colours, you begin to appreciate the difficulty involved. There are, in fact, some 43 quintillion possible pattern combinations - but just one right one.

It's been proven that you can solve a Rubik's cube in 26 moves. Computer science professor Gene Cooperman and graduate student Dan Kunkle of Northeastern University in Boston used algebra and fast parallel computing to show that no matter how scrambled the cube is, it is possible to solve it (generate a cube with solid colours on each face, the home state) in 26 steps. There are, however, claims that it can be solved in 22 or even 20 steps (see later).

The Rubik's cube has served as a guinea pig for testing techniques to solve large-scale combinatorial problems. It has been a testing ground for problems of search and enumeration, a large research area encompassing many researchers working in different disciplines, from artificial intelligence to operations research. The Rubik's cube allows researchers from different disciplines to compare their methods on a single, well-known problem.

Mathematically

There are many algorithms for solving scrambled Rubik's cubes. The minimum number of moves required to solve any instance of the Rubik's cube is not known, although the latest claims put this number at 22. This number is also known as the diameter of the Cayley graph of the Rubik's Cube group. An algorithm that solves a cube in the minimum number of moves is known as 'God's algorithm'. Most mathematicians think it may take only 20 moves to solve any Rubik's cube – it is a question of proving this to be true.

Any state of the cube corresponds to a permutation of its facelets — a rule that assigns to each facelet in the home state a new location on the cube in the scrambled state. Not every permutation of facelets can be achieved by a sequence of moves — for example no move can shift the centre facelet of a face — so when trying to solve Rubik's cube you restrict your attention to a collection of legal permutations.

These form a self-contained system. If you follow one legal permutation by another, the end result is also a legal permutation. If you've done one legal permutation, you can always find another one — namely the reverse of what you've just done — that will get you back to the position you started with. Doing absolutely nothing to the cube is also a legal move and corresponds to the permutation that leaves every facelet where it is, known to mathematicians as the identity permutation.

The problem of solving Rubik's cube can be visualised using what is called the group's Cayley graph — a network whose nodes are the legal states of the cube (corresponding to legal permutations) and which has two nodes linked up if you can get from one to the other by a legal move, which, incidentally, is itself a permutation. The home state of the cube corresponds to one of the nodes (and the identity permutation) and solving the cube corresponds to finding a path from one of the nodes to the home state along a sequence of linked-up nodes.

A brute-force approach to showing that you can always solve the cube in N moves would be to show that no such path involves more than N steps. This sounds good in theory, but there is a huge problem as the Cayley graph of the group of legal permutations has 43,252,003,274,489,856,000 nodes, a number that challenges even the fastest of supercomputers.
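To make the graph-search idea concrete, here is a small Python sketch of my own (illustrative only, not part of the work described above). It computes the diameter of the Cayley graph for a toy "puzzle": orderings of three items, where a legal move swaps two adjacent items. The function names are mine; the same breadth-first idea applies to the cube, only at a hopeless scale.

from collections import deque

# Toy "Cayley graph": states are orderings of three items, a legal move swaps
# two adjacent items. BFS from the home state gives the distance of every state,
# and the largest distance is the graph's diameter ("God's number" for this toy).
def moves(state):
    s = list(state)
    for i in range(len(s) - 1):
        t = s[:]
        t[i], t[i + 1] = t[i + 1], t[i]
        yield tuple(t)

def diameter(home=(0, 1, 2)):
    dist = {home: 0}
    queue = deque([home])
    while queue:
        node = queue.popleft()
        for nxt in moves(node):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return max(dist.values())

print(diameter())   # 3 -- every ordering of three items is at most 3 swaps from home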

The (3 x 3 x 3) Rubik's Cube has eight corners and twelve edges. There are 8! (40,320) ways to arrange the corner cubes. Seven can be oriented independently, and the orientation of the eighth depends on the preceding seven, giving 3^7 (2,187) possibilities.

There are 12!/2 (239,500,800) ways to arrange the edges, since an odd permutation of the corners implies an odd permutation of the edges as well. Eleven edges can be flipped independently, with the flip of the twelfth depending on the preceding ones, giving 2^11 (2,048) possibilities.

There are exactly 43,252,003,274,489,856,000 reachable permutations. The full number of ways to assemble the pieces that make up the Cube is 519,024,039,293,878,272,000, or 519 quintillion, but only one in twelve of these arrangements is actually solvable. This is because there is no sequence of moves that will swap a single pair of pieces or rotate a single corner or edge cubie on its own.
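As a quick sanity check on the figures above, the arithmetic can be reproduced in a few lines of Python (an illustrative sketch of my own, nothing more):

from math import factorial

# Reproducing the counts above (illustrative arithmetic only).
corners   = factorial(8) * 3**7           # positions x orientations of the corner cubies
edges     = (factorial(12) // 2) * 2**11  # positions x orientations of the edge cubies
reachable = corners * edges
print(reachable)                          # 43252003274489856000

unconstrained = factorial(8) * 3**8 * factorial(12) * 2**12
print(unconstrained)                      # 519024039293878272000
print(unconstrained // reachable)         # 12 -> only one in twelve arrangements is reachable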

(1) This problem is sometimes also called Guthrie's problem after F. Guthrie, who first conjectured the theorem in 1852. The conjecture was then communicated to De Morgan and thence into the general community. In 1878, Cayley wrote the first paper on the conjecture.

(2) One of these areas is the topology of networks, first developed by Leonhard Euler in 1735. His work in this field was inspired by the following problem: the Seven Bridges of Konigsberg.

Friday, August 6, 2010

Virtual Reality Technology and Alzheimer's Disease




The use of virtual reality technology in the assessment, study of, and possible assistance in the rehabilitation of memory deficits associated with patients with Alzheimer disease.


Background

Alzheimer's disease is the most common cause of dementia, and is particularly common in older people. Because it is the most common cause, Alzheimer's disease is often equated with the general term dementia. However, there are many other causes of dementia. Alzheimer's disease is a specific form of dementia with very specific microscopic brain abnormalities.

Alzheimer disease is not merely a problem of memory. Additional mental and behavioral problems often affect people who have the disease, and may influence quality of life, caregivers, and the need for institutionalization.

Depression, for example, affects 20–30% of people who have Alzheimer's, and about 20% have anxiety. Psychosis (often delusions of persecution) and agitation/aggression also often accompany this disease. Common symptoms and comorbidities include:

• Mental deterioration of organic or functional origin

• Loss of cognitive ability in a previously-unimpaired person, beyond what might be expected from normal aging. Areas particularly affected include memory, attention, judgement, language and problem solving.

• Loss (usually gradual) of mental abilities such as thinking, remembering, and reasoning. It is not a disease, but a group of symptoms that may accompany some diseases or conditions affecting the brain.

• Deteriorated mental state due to a disease process, resulting from any of many disorders of the nervous system.

• Cognitive deficit or memory impairment due to progressive brain injury.

Distinguishing Alzheimer's disease from other causes of dementia is not always as easy and straightforward as defining these terms. In practice, people and their disorders of behaviour, or behaviours of concern are far more complex than the simple definitions sometimes provided.

Establishing patient history, abilities, the natural course of disorder development such as that involving short-term memory, speech and language, personality, decision-making and judgment, and others is often needed in the diagnosis of the disease. Routine diagnostic steps therefore include a careful history, mental status screening, laboratory and imaging studies, and neuropsychologic testing.

Differential diagnosis of Alzheimer's disease

It is sometimes difficult to differentiate dementia caused by Alzheimer's from delirium. In addition, several features distinguish dementia from depression, but the two can coexist and the distinction may be uncertain.

Prominent motor signs such as gait disturbance are a characteristic feature of patients with vascular dementia. In contrast, the NINCDS-ADRDA criteria for Alzheimer's disease state that 'gait disturbance at the onset or very early in the course of the illness' makes the diagnosis of probable Alzheimer's disease uncertain or unlikely. However, clinical studies suggest that gait disturbance is not restricted to the later stages of Alzheimer's disease, and studies have identified abnormalities of gait and balance in patients with early Alzheimer's disease(1).

It had been thought that dementias without prominent motor signs included Alzheimer's disease, frontotemporal dementia, Creutzfeldt-Jakob disease and others, and the clinical pattern of gait disturbance in patients with early Alzheimer's disease has attracted less attention to date.

Diagnosis

Differential diagnosis between the types of dementia matters because the treatments available for Alzheimer's, while limited in their effectiveness, usually give the best patient outcomes when begun early in the course of the disease. Diagnosis and/or diagnostic tools include:

• Taking a medical history.

• Physical examination, including evaluations of hearing and sight, as well as blood pressure and pulse readings, etc.

• Standard laboratory tests, including blood and urine tests designed to help eliminate other possible conditions.

• Neuropsychological testing, including assessing memory, problem-solving, attention, vision-motor coordination and abstract thinking, such as performing simple calculations.

• Brain imaging or a structural brain scan, such as CT or MRI, to help rule out brain tumors or blood clots as the reason for the symptoms and, more recently, the use of virtual reality technology (discussed below).


The use of virtual reality


Virtual reality can be used to simulate real-world environments and to test a patient's ability to navigate them. Work has been carried out to compare previously described real-world navigation tests with a virtual reality version simulating the same navigational environment (2). The authors of this research conclude that virtual navigation testing reveals deficits in aging and Alzheimer disease that are associated with potentially grave risks to patients and the community.

In another study, in the United Kingdom (3), researchers aimed to examine the feasibility of virtual reality technology for use by people with dementia (PWD). Data were obtained directly from six PWD regarding their experiences with a virtual environment of a large outdoor park. A user-centred method was developed to assess:

(a) presence;
(b) user inputs;
(c) display quality;
(d) simulation fidelity; and
(e) overall system usability.

The extent to which PWD could perform four functional activities in the virtual environment was also investigated (e.g., mailing a letter). In addition, the physical and psychological well-being of PWD while interacting with the virtual environment was assessed objectively by recording heart rate during the virtual reality sessions and subjectively with discrete questionnaire items and real-time prompts.

(1) Gait disturbance in Alzheimer's disease: a clinical study. S.T. O'Keeffe, H. Kazeem, R.M. Philpott, J.R. Playfer, M. Gosney, M. Lye. July 1996.

(2) Detecting navigational deficits in cognitive aging and Alzheimer disease using virtual reality. Laura A. Cushman, PhD, Karen Stein and Charles J. Duffy, MD, PhD. Neurology 2008;71:888-895. From the Departments of Neurology, Brain and Cognitive Sciences, Neurobiology and Anatomy, Ophthalmology, and Psychiatry (L.A.C.) and Center for Visual Science, The University of Rochester Medical Center, Rochester, NY.

(3) Flynn D, van Schaik P, Blackman T, Femcott C, Hobbs B, Calderon C.
School of Social Sciences and Law, The University of Teesside, Middlesbrough, United Kingdom


Thursday, August 5, 2010

Quantum-connected computers overturn the uncertainty principle


Introduction


Back in the mid-eighties, while completing a pure mathematics degree and using the then state-of-the-art minicomputers, the PDP11 family of processors (we couldn't afford the services of the much, much brawnier nitrogen-cooled Crays to solve sparse Hadamard matrices), I contributed an article to a UK computer journal discussing the future of computers, artificial intelligence, human interfaces and visualization. I guess you can call it a naive attempt to predict how systems will, or may, evolve. Of course this was through my own lens of experience, watching processor speeds rapidly increasing and memory, disk and everything else growing almost exponentially.

Of course the implicit question was - If all that has happened in the first 50 years of computer history, what will happen in the next 50 or so years?

Moore's Law is an empirical formula describing the evolution of processors, often cited to predict future progress in the field because it has proved quite accurate in the past: it states that the transistor count in an up-to-date processor doubles over some period of between 18 and 24 months, which roughly means that computational power grows exponentially, doubling every two years. At the same time, the science of computability describes, amongst other things, a class of 'NP-hard' problems - sometimes informally labelled 'unacceptable', 'unsustainable' or 'combinatorially exploding' - whose computational cost grows exponentially with the size of the problem, however fast processors become.

An example of an NP-hard problem is finding the exit of a labyrinth: it doesn't require much effort if there is only one crossing, but it becomes much more demanding in terms of resources as the number of crossings grows, until the problem is either impossible to compute because of limited resources, or computable but only in an unacceptable amount of time.
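As a rough illustration of that blow-up (a tiny Python sketch of my own, assuming each extra crossing merely doubles the number of candidate routes):

# Illustrative only: if every crossing doubles the number of candidate routes,
# brute force is soon beyond any enumeration.
for crossings in (10, 30, 60, 100):
    print(crossings, 2 ** crossings)
# 10 -> 1,024 routes; 30 -> ~1.1e9; 60 -> ~1.2e18; 100 -> ~1.3e30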

Many, if not all, of the artificial-intelligence-related algorithms are extremely demanding in terms of computational resources because they are either NP-hard or involve combinatorial calculations of rapidly growing complexity.

Not all developments in processing architecture stem from a single genesis. For example, recently, IBM researchers have made huge strides in mapping the architecture of the Macaque monkey brain. They have traced long-distance connections in the brain - the "interstate highways" which transmit information between distant areas of the brain. Their maps may help researchers grasp how and where the brain sends information better than ever before, and possibly develop processors that can keep up with our brain's immense computational power and navigate its complex architecture.

Artificial intelligence and cognitive modeling try to simulate some properties of neural networks. While similar in their techniques, the former has the aim of solving particular tasks, while the latter aims to build mathematical models of biological neural systems.

Another trajectory – that of Quantum Computers

The uncertainty principle is a key underpinning of quantum mechanics: a particle's position or its velocity can be measured, but not both. Now, according to five physicists from Germany, Switzerland, and Canada, in a letter published in Nature Physics(1), quantum computer memory could let us sidestep this principle.
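In its textbook form the principle bounds the product of the uncertainties in a particle's position and momentum: Δx · Δp ≥ ħ/2, where ħ is the reduced Planck constant.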

Paul Dirac, who shared the 1933 Nobel Prize in physics with Erwin Schrödinger "for the discovery of new productive forms of atomic theory", provided a concrete illustration of what the uncertainty principle means. He explained that one of the very few ways to measure a particle's position is to hit it with a photon and then chart where the photon lands on a detector. That gives you the particle's position, yes, but it also fundamentally changes its velocity, and the only way to learn the new velocity would consequently alter its position.

That has more or less been the status quo of quantum mechanics since Werner Heisenberg first published his theories in 1927, and no attempt to overturn it - including several by Albert Einstein himself - has proved successful. But now the five physicists hope to succeed where Einstein failed. If they're successful, it will be because of something that wasn't even theorized until many years after Einstein's death: quantum computers.

Key to quantum computers are qubits, the individual units of quantum memory. A particle would need to be entangled with a quantum memory large enough to hold all its possible states and degrees of freedom. Then, the particle would be separated and one of its features measured. If, say, its position was measured, then the researcher would tell the keeper of the quantum memory to measure its velocity.

Because the uncertainty principle wouldn't extend from the particle to the memory, it wouldn't prevent the keeper from measuring this second figure, allowing exact - or possibly, for obscure mathematical reasons, almost exact - measurements of both figures.

It would take lots of qubits - far more than the dozen or so we've so far been able to generate at any one time - to entangle all that quantum information from a particle, and the task of entangling so many qubits together would be extremely fragile and tricky. Not impossibly tricky, but still way beyond what we can do now.


(1) The uncertainty principle in the presence of quantum memory. Mario Berta, Matthias Christandl, Roger Colbeck, Joseph M. Renes & Renato Renner. Nature Physics, published online 25 July 2010, doi:10.1038/nphys1734.

Sunday, May 23, 2010

A message into the future and even the past


Jack Dikian

ABSTRACT

Sending messages into the future - and what about sending messages into the past?

Bell's theorem shows that there are limits that apply to local hidden-variable models of quantum systems, and that quantum mechanics predicts these limits will be exceeded by measurements performed on entangled pairs of particles. This article discusses Bell's theorem in the context of experiments showing that the predictions of quantum mechanics are consistent with observed results and inconsistent with local hidden-variable models.
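In its best-known (CHSH) form, the limit reads as follows: for two measurement settings a, a′ on one particle and b, b′ on its entangled partner, any local hidden-variable model obeys |E(a,b) + E(a,b′) + E(a′,b) − E(a′,b′)| ≤ 2, whereas quantum mechanics predicts correlations as large as 2√2 for suitably entangled pairs.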

A series of experiments has demonstrated the quantum predictions that form the basis of Bell's Theorem and some would therefore claim that not only the predictions of quantum theory but also experimental results now prove, using Bell's theorem, that the universe must violate either locality or counterfactual definiteness.

So, basically, if entanglement means that a spin placed on one particle results in the other spinning in the opposite direction in exactly the same way and at exactly the same time no matter the distance between them – this may make for instantaneous communication across, in theory, any distance.

Now consider the lessons of the twin paradox (see special relativity), in which a twin makes a journey into space in a high-speed rocket and returns home to find he has aged less than his identical twin who stayed on Earth; that is, the twin who stayed, and everything else, have aged at a much faster rate than him. He will essentially have travelled forward in time.
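(Quantitatively, special relativity gives the travelling twin's elapsed time as t′ = t·√(1 − v²/c²); at v = 0.99c roughly one year passes aboard the rocket for every seven that pass on Earth.)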

What if a particle is accelerated at a sufficient rate that it travels forward in time, and at the same time the second particle in the entangled pair is spun? Are we then essentially sending a message into the future?

Wednesday, April 7, 2010

Unsolvable Problem















Jack Dikian
October 1999


The Halting Problem is one of the simplest problems known to be unsolvable. Given a program and an input to the program (input-program pair), determine if the program will eventually stop when it is given that input.

Turing pondered whether there was a way of telling, in general, once a computer has embarked on a calculation, whether that calculation will terminate in an answer. This problem is known as the "Halting Problem for Turing Machines", and its unsolvability was first proved in the 1936 paper in which he described his machines.

My interest in this was sparked while writing a Turing Machine simulator; I was fascinated by the seemingly simple challenge - and Turing's elegant solution.

Introduction

Briefly, a Turing machine can be thought of as a black box which performs a calculation of some kind on an input number. If the calculation reaches a conclusion, or halts, then an output number is returned. Otherwise, the machine theoretically just carries on forever. The halting problem is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.
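To give a flavour of what such a "black box" boils down to, here is a minimal Python sketch of my own (a one-tape machine with a made-up rule table that simply appends a 1 to a block of 1s); it is illustrative only and is not the simulator offered for download below.

# A minimal one-tape Turing-machine sketch (illustrative only).
def run(tape, rules, state="scan", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells))
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return None   # gave up: in general we cannot tell whether it would ever halt

rules = {
    ("scan", "1"): ("1", "R", "scan"),   # skip right over the existing 1s
    ("scan", "_"): ("1", "R", "halt"),   # first blank: write a 1 and halt
}
print(run("111", rules))   # -> 1111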

There are an infinite number of Turing machines, as there are an infinite number of calculations that can be done with a finite list of rules. Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. We say that the halting problem is undecidable over Turing machines.
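The proof idea can be sketched in the same spirit (Python used purely as pseudocode; "halts" below is the hypothetical general decider whose existence the argument rules out, and the names are mine):

# The classic contradiction, sketched as pseudocode.
def halts(program, data):
    ...   # suppose this returned True iff program(data) eventually stops

def troublemaker(program):
    if halts(program, program):   # if the decider says "it halts"...
        while True:               # ...loop forever,
            pass
    return "done"                 # ...otherwise stop immediately.

# Feeding troublemaker to itself: if halts(troublemaker, troublemaker) is True,
# troublemaker(troublemaker) loops forever; if False, it halts. Either answer
# contradicts the decider, so no such general "halts" can exist.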

Download author's Turing Machine simulator and article "Halting Problem Made Simple, October 1999"




Tuesday, April 6, 2010

Apparatus For Removing Hidden Lines from Bezier Surfaces




Jack Dikian

November 2000

This paper describes the work carried out by the author in implementing an apparatus for removing hidden lines from Bezier surfaces on the Tektronix storage tube technology of the early 80’s.

Bézier surface

A Bézier surface is formed as the Cartesian product of the blending functions of two orthogonal Bézier curves. Bézier surfaces were first described in the early 60's by the French engineer Pierre Bézier and were used in automobile body design.
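A minimal sketch of that definition, assuming Bernstein polynomials as the blending functions (this is the standard textbook evaluation in Python, written by me for illustration, not the author's original apparatus):

from math import comb

# Evaluate a point on a Bezier surface as the tensor product of two
# Bernstein blending functions over a grid of control points.
def bernstein(n, i, t):
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_surface_point(control, u, v):
    # control is an (n+1) x (m+1) grid of (x, y, z) control points
    n, m = len(control) - 1, len(control[0]) - 1
    point = [0.0, 0.0, 0.0]
    for i in range(n + 1):
        for j in range(m + 1):
            w = bernstein(n, i, u) * bernstein(m, j, v)
            for k in range(3):
                point[k] += w * control[i][j][k]
    return tuple(point)

# A 2 x 2 grid of control points gives a bilinear patch; (u, v) = (0.5, 0.5) is its centre.
patch = [[(0, 0, 0), (0, 1, 0)],
         [(1, 0, 0), (1, 1, 1)]]
print(bezier_surface_point(patch, 0.5, 0.5))   # (0.5, 0.5, 0.25)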

Hidden lines

When rendering a three-dimensional surface on a two-dimensional plane such as a computer screen, lines which should otherwise not be seen by the viewer must be removed. The shape of the surface, if opaque, should not be cluttered by overlapping lines. Importantly, our real-world experience does not allow us to "see" through a solid surface or object.

In order to remove these lines, hidden line algorithms are applied in the surface rendering software to create a wire-frame which contains only visible lines and hides the lines covered by the surface.

There are a number of algorithms used to remove hidden lines. Arthur Appel’s work at IBM in the late 1960’s for example works by propagating the visibility from a segment with a known visibility to a segment whose visibility is yet to be determined. By a comparison of the two following images, the line removal algorithm can be seen at work as the wireframe representation of the surface shaded object removes the lines which are not in view.

Whilst much of the initial work in hidden line removal was done by Arthur Appel, the field is still growing as there are cases where his algorithm is not effective. There is a variety of other algorithms implemented in computer-assisted design, such as the object-precision algorithms of Weiss and Galimberti/Montenari and the image-precision algorithms of Encarnacao (priority-edge intersection test and scan grid - point/surface test), Warnock, and Watkins.

The Approach

Thursday, March 25, 2010

Peephole Optimization


Jack Dikian

January 1992

Introduction

In the late 70's and early 80's, when the 8086 (the IBM PC) was running at 4.77MHz and incapable of addressing more than 1MB of memory in real mode - and even when minicomputers of the day, such as the 16-bit PDP11/xx range, were running at around 15MHz with a largely orthogonal instruction-set architecture and few dedicated I/O instructions - those involved in compiler design and implementation, such as myself, working mainly with early versions of Unix, sought to reduce both the compilation time and the execution time of programs in order to combat the hardware limitations of the time.

Successful compiler optimizers were required to produce faster executable code, moderate compilation time and effective use of resources such as memory. Understanding compiler optimization was essential, particularly when changes to a programming construct or to an instruction set were required - the changes to the compilation process had to be taken into consideration.

Instruction execution on any given processor is performed at the machine level, using a machine language or machine code. However, most code is written in a higher-level language such as LISP, C or C++. Therefore, the important job of optimizing code is normally done by compilers, which translate high-level languages into machine code. Given that system architectures vary dramatically, how code is best executed on any one processor class also varies greatly. For example, the Pentium processor has two separate integer execution units, called the U-pipe and the V-pipe. The U-pipe can execute any instruction while the V-pipe can only execute a subset. Code generated with this in mind can be structured to achieve improved performance over code that is not optimised in this manner.

Of particular interest to the author were the "back end" compiler optimization techniques, such as the peephole methods, and the exploration of the relationship between compiler optimization and a given architecture and design.

Compiler Optimization

Most compilers perform lexical, syntactic, and semantic analysis, code optimization, and code generation, in this order. Code optimization includes common sub-expression elimination, constant folding, code motion, and strength reduction.

Code generation includes register allocation, code selection, and perhaps some peephole optimization. Most optimizers operate on a high-level intermediate code to avoid machine dependencies, while most peephole optimizers are machine-specific and ad hoc, so they are normally confined to correcting a small set of patterns that cannot be more easily corrected before code generation.

Peephole optimization

Generally, peephole optimization means the pattern matching and conditional replacement performed on a small set of intermediate instructions. The circular dependence between the code generation phases implies that local optima are rarely global optima. Some strategies that may be adopted are:

  • Accept the local optimum
  • Develop intermediate goals whose achievement suggests global optimality
  • Retain the choices so that decisions can be made later
  • Optimize the object code by replacing awkward or overly constrained segments of code with improved ones.

As mentioned, peephole optimization is performed over a small set of instructions in a segment of generated code. The set is called a "peephole" or a "window". The peephole optimizer works by recognising sets of instructions that have a null effect (they do nothing), or that can be replaced by a more efficient set of instructions. The most common techniques used in peephole optimization include:

  • Evaluating constant sub-expressions in advance.
  • Replacing slow operations with faster equivalents.
  • Deleting useless operations.
  • Replacing several operations with one equivalent.
  • Using algebraic laws to simplify instructions.
  • Using special-purpose instructions designed for special operand cases.


Peephole optimization examples

Treatment of single instructions with a constant operand

• A * 2 = A + A

• A * 1 = A

• A * 0 = 0

• A / 1 = A

Replacing the Multiply operation with a Shift

• A := A * 4;

Can be replaced by a 2-bit left shift (signed/unsigned)


Folding jumps to jumps

A jump to an unconditional jump can copy the target address

JNE Label 1

...

Label 1: JMP Label 2

Can be replaced by

JNE Label 2

As a result, Label 1 may become dead (unreferenced)

Jump to Return

A jump to a Return can be replaced by a Return

JMP Label 1

...

Label 1: RET

Can be replaced by

RET

Label 1 may become dead code
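The jump-folding pattern above is easy to mechanise. The following Python sketch is my own illustration over a made-up tuple-based instruction list (the function name and instruction format are assumptions, not the original compiler code); it folds a jump whose target label is immediately followed by an unconditional jump:

# A toy peephole pass: fold jumps whose target is itself an unconditional jump.
def fold_jumps(code):
    # Map label -> the address that a jump to it can be redirected to.
    targets = {}
    for idx, ins in enumerate(code):
        if ins[0] == "label":
            nxt = code[idx + 1] if idx + 1 < len(code) else None
            if nxt and nxt[0] == "jmp":
                targets[ins[1]] = nxt[1]
    out = []
    for ins in code:
        if ins[0] in ("jne", "jmp") and ins[1] in targets:
            ins = (ins[0], targets[ins[1]])      # jump straight to the final target
        out.append(ins)
    return out

code = [("jne", "L1"), ("mul", "A", 1), ("label", "L1"), ("jmp", "L2")]
print(fold_jumps(code))
# [('jne', 'L2'), ('mul', 'A', 1), ('label', 'L1'), ('jmp', 'L2')]

A later dead-code pass could then notice that L1 is no longer referenced and remove it, exactly as described above.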

Wednesday, March 24, 2010

Parenthesised Road Of Lisp



Jack Dikian

December 1998

Introduction

Back in the 1980’s the author was involved in writing a formal (limited) grammar to support the development of an “expert” system to deal with the diagnosis and configuration of Unix file systems.

When reviewing a number of artificial (programming) languages we became interested in LISP. We realised that its strong expressive power for stating computational linguistics and its flexible data constructs might meet the needs of a yet-to-be-fully-defined project.

We were particularly interested in manipulating symbols (words, parts of speech) and structured objects (sequences), and in expressing operations on them without having to worry about how these "natural" concepts are actually represented in the machine. We were also drawn to the ability of routines to call themselves without restriction, as found in the recursion techniques of LISP.

Our need to understand LISP took us down a very parenthesised road. This is a very brief introduction to what we saw in LISP.

What is LISP

Lisp is a programming language developed as a mathematical notation for computer programs, greatly influenced by Alonzo Church's lambda calculus, and it became, pretty much, the programming-language staple for artificial intelligence research. Lisp derives its name from LISt Processing, and it is no surprise that the linked list is one of LISP's key data constructs. LISP provided a basis, or became a genesis, for significant concepts in early computer science thinking. LISP's ability to type data objects instead of variables gives it the flexibility to provide native support for arranging data into a hierarchy based on subset relationships.

Linked lists are one of Lisp's major data structures, and Lisp source code is itself made up of lists. As a result, Lisp programs can manipulate source code as a data structure, which supports parsing grammars and even creating new syntax. The interchangeability of source (a formal grammar defining the language) and data gives LISP its unique syntax. All source is written as s-expressions, or parenthesised lists. A call to a routine is written as a list with the operator's name first and the arguments following; for instance, a function F that takes 2 arguments might be called using (f x y).


Why LISP

The biggest advantage LISP has over other languages such as C, C++, and Pascal stems from the philosophy that what's good for the language's designer is also good for the language's users. Consequently the programmer, unlike in other languages, can add features to the language, making it, in effect, an extensible language. LISP tends to provide a more transparent map between ideas about how the program functions and its source. It is therefore ideal for situations where problems are partially specified and/or their nature is not fully understood at the outset: an approximation can be developed and refined interactively over time.

Also where a problem involves multiple representations of data from varying sources, LISP's flexible data abstraction system makes configuring robust systems simpler.

The Parentheses ( )

For people like me who were used to looking at C, Pascal, and even Fortran, LISP was initially confusing. Its syntax was nothing like anything I had seen before (perhaps with the exception of APL, which was itself mathematically inspired and served as an executable notation for mathematical algorithms). Parentheses are the basic building blocks of LISP. All functions, operations, and scripts take place within parentheses, and complete functions have a matching number of open and closed parentheses.

When merging lists

(APPEND '(A B) '(C D))

Answer (A B C D)


Creates a function called ADDIT that takes two parameters x and y and adds them.

(DEFUN ADDIT (x y) (+ x y))

(ADDIT 3 4)
Answer 7

Creates a function for calculating powers

(defun expon (base powr)
  (cond ((zerop powr) 1)
        ((evenp powr) (expon (* base base) (truncate powr 2)))
        (t (* base (expon (* base base) (truncate powr 2))))))