Wednesday, November 2, 2011

Fermat's Last Theorem and The Simpsons

Jack Dikian
November 2011


The theorem states that no three positive integers a, b and c can satisfy the equation aⁿ + bⁿ = cⁿ for any integer value of n greater than two. Fermat's Last Theorem was one of the most difficult problems in mathematics: it took over 350 years to solve and has precipitated more incorrect proofs than any other problem.

Fermat claimed to have proved this statement, but said that the margin was too narrow to contain the proof. It is the seeming simplicity of the problem, coupled with Fermat's claim to have proved it, that has captured the hearts of so many mathematicians.

In 1995 Andrew Wiles published his 108-page proof after spending almost seven years developing it. The proof is regarded as one of the most complicated in mathematics; the chapters of Wiles' paper work through Galois representations, cohomology groups, the Gorenstein property, congruences between Hecke rings, the Selmer group, elliptic curves, Gorenstein rings and local complete intersections.

Fermat's Last Theorem even entered the fictional world of The Simpsons. In the episode “Treehouse of Horror VI”, the equation 1782¹² + 1841¹² = 1922¹², a sum proved impossible by the theorem, is visible on screen. The joke is that on a calculator of limited precision the twelfth root of the left-hand side does appear to evaluate to 1922. Note, though, that the left-hand side is odd while 1922¹² is even, so the equality cannot hold; the twelfth root is actually about 1921.99999996, not 1922.
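The near-miss is easy to check with exact integer arithmetic. A quick sketch in JavaScript, using BigInt so the 40-digit powers do not lose precision:

// Exact check: does 1782^12 + 1841^12 really equal 1922^12?
const lhs = 1782n ** 12n + 1841n ** 12n;
const rhs = 1922n ** 12n;
console.log(lhs === rhs);                    // false, as the theorem demands

// The twelfth root of the left-hand side, in floating point:
console.log(Math.pow(Number(lhs), 1 / 12)); // ≈ 1921.99999996, tantalisingly close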


Wednesday, October 5, 2011

The MINI-SCAMP MICROCOMPUTER


Jack Dikian
October 2011

Today, like many others, I couldn't resist having a look at the new iPhone 4S amid all the expected hype and media frenzy. I was particularly interested in its specifications, and more specifically in the new processor chipset it uses.

According to reports the new iPhone uses the A5 chip, which is also found in the iPad 2. This is a dual-core Cortex A9 processor said to be up to twice the speed of its predecessor, and its embedded PowerVR SGX543MP GPU graphics accelerator is up to seven times faster than the GPU inside the previous chipset. The A5 is a rendition of the dual-core ARM Cortex-A9 MPCore CPU, manufactured by Samsung.

Now, the chip's manufacturer claims that the performance-optimized dual-core Cortex A9 can do 10,000 DMIPS (Dhrystone MIPS) at clock speeds of 2000 MHz. Remember that CPU performance is generally about the number of instructions a processor can execute in a given period (say a second) and the amount of actual work it can do in that period.

The first is controlled by CPU architecture, memory speed and so on, while the second depends on those variables as well as on how effective the instruction set is at the sort of application it is typically used for. The Cortex A9 runs the Dhrystone benchmark at about 2.50 DMIPS/MHz per core, an extremely efficient architecture for the sort of work Dhrystone measures.
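As a sanity check on the headline figure, the arithmetic is simple (assuming the quoted 2.50 DMIPS/MHz holds at 2000 MHz):

// Throughput estimate for a dual-core Cortex A9 at 2000 MHz,
// assuming the quoted 2.50 DMIPS/MHz per core.
const dmipsPerMHz = 2.5;
const clockMHz = 2000;
const cores = 2;
console.log(dmipsPerMHz * clockMHz * cores); // 10000 DMIPS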

The reason I'm taking an interest in this architecture isn't so much how Apple is using these processors as enablers in their consumer products (not that I'm complaining; I have been an Apple fan from the day I first used the Apple Lisa at university). It is more that I was recently reminded of the very first computer I built as a teenager. At the time, anyone who was half-interested in electronics built either a microcomputer or an amplifier of some sort. My friends and I were more interested in wire-wraps, logic gates, LEDs and nasty RF modulators. The very slow and erratic cassette tape interface came much later, thanks to Radio Shack and the TRS-80.

So, as high school kids, my friends and I would catch the train from North Sydney to an electronics hobby store in Hornsby, where we got to see an “expert” demonstrate his Mini-Scamp microcomputer, and we spent what little money we had on the components so that we could each build our own.

The Mini-Scamp microcomputer was, I think, a Dick Smith Electronics kit whose design was published in "Electronics Australia". It was based on the SC/MP CPU from National Semiconductor and boasted a [massive] 256 bytes of RAM. Yes, that's all the memory it had. As a simple comparison, my current iPhone 4 has 34,359,738,368 bytes of memory.

This nifty computer didn't come with ROM, so the idea of accessing an interpreter or a compiler was still, for us, years away. Instead we would load binary into RAM by setting the data byte and its address in binary on toggle switches. Pressing the deposit button stored the byte in memory, and the LEDs showed the current contents of that memory location. After the program was entered this way, a switch was flipped from DMA to Run mode and the micro did the rest.
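For anyone who never had the pleasure, the front-panel routine is easy to mimic in a few lines of JavaScript (a toy model; the names and encoding are mine, not the kit's):

// A toy model of the Mini-Scamp front panel: 256 bytes of RAM,
// an address set on toggle switches, and a deposit button.
const ram = new Uint8Array(256);               // all the memory it had

function deposit(address, byte) {
  const a = address & 0xff;                    // 8 address switches
  ram[a] = byte & 0xff;                        // 8 data switches, press deposit
  return ram[a].toString(2).padStart(8, "0");  // what the LEDs display
}

// "Toggling in" two bytes of a program:
console.log(deposit(0b00000000, 0b11000100));  // "11000100"
console.log(deposit(0b00000001, 0b00001010));  // "00001010"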

These days I sometimes have a little chuckle to myself when an IT helpdesk puts me on hold while trying to rectify a trivial password glitch, and wonder how they would cope if all they had to work with was a soldering iron, a bit of copper printed circuit board and, if really lucky, a second-hand CRO.

Friday, July 1, 2011

Functional Grammar, Dikian Functional Grammar and Anaphora



Jack Dikian
July 2011

Years ago I was involved in building a natural language processor which was to overlay an expert system dealing with the Pascal language as implemented on a PDP-11. For those not familiar with Pascal, it suffices to say here that it was an ideal teaching language, encouraging good use of structured programming and data structuring. Niklaus Wirth published it in 1970.


By far the biggest challenge was to efficiently convert natural language expressions into machine-interpretable canonical representations. An ability to perform content analysis, fact-finding, question answering, inference, translation and sense-making was necessary. Soon the obvious became clear: the universe of features found in English (as in any natural language) is large, complex and potentially unbounded. With great effort, a simplified grammar and a reduced set of words, we produced a recognizer of sorts.
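To give a flavour of what "a recognizer of sorts" means, here is a minimal sketch in JavaScript; the toy grammar and word lists are invented for illustration and are far smaller than anything we actually used:

// A minimal recognizer for a toy fragment of English:
//   SENTENCE    -> NOUN_PHRASE VERB_PHRASE
//   NOUN_PHRASE -> ("the" | "a") NOUN
//   VERB_PHRASE -> VERB [NOUN_PHRASE]
const NOUNS = ["program", "compiler", "student"];
const VERBS = ["parses", "writes", "halts"];

function readNounPhrase(words, i) {   // returns index after the phrase, or -1
  if ((words[i] === "the" || words[i] === "a") && NOUNS.includes(words[i + 1]))
    return i + 2;
  return -1;
}

function isSentence(words) {
  const np = readNounPhrase(words, 0);
  if (np < 0 || np >= words.length || !VERBS.includes(words[np])) return false;
  const rest = np + 1;                // optional object noun phrase
  if (rest === words.length) return true;
  return readNounPhrase(words, rest) === words.length;
}

console.log(isSentence("the student writes a compiler".split(" "))); // true
console.log(isSentence("compiler the halts".split(" ")));            // false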


The reason I recount this is my interest in functional grammar as taught in schools, and in particular Kaplan and Bresnan's work on Lexical Functional Grammar and their treatment of functional structure analysis when looking at syntax: for example, how the functional structure encodes linguistic information about the various functional relations between parts of a sentence.

Now, I know much has been said about the benefits of teaching and using what has been a controversial program, functional grammar. Years ago, teachers who understood functional grammar were keen to see it maintained and used side by side with traditional grammar. Words such as "participant", "process" and "lexical chain" would sit as comfortably as nouns and verbs, thus providing a way of describing how language is used, what language does and how it is applied.

Functional grammar emphasises the ways in which language functions to make meaning, but it also relies upon knowledge, understanding and use of the terms of traditional grammar.

See also:

Deep Anaphora, Surface Anaphora, and Null Anaphora

Thursday, June 30, 2011

Gödel's Letter To John



Jack Dikian
June 2011

Whilst re-posting a review of Douglas Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid, I revisited a computability topic that has long been of interest: the P=NP question and Gödel's famous 1956 letter to John von Neumann.

The P=NP question is one of the great problems of science and has intrigued computer scientists and mathematicians for decades. In the first half of the twentieth century, work on the power of formal systems led to the formalization of the notion of algorithm and the realization that certain problems are algorithmically unsolvable.

NP stands for Non-deterministic Polynomial time, meaning that a problem can be solved in polynomial time using a non-deterministic Turing machine (like a regular Turing machine but equipped with a non-deterministic "choice" function). Basically, a proposed solution has to be checkable in polynomial time. If that is the case, and a known NP-complete problem can be reduced to the given problem (that is, solved using the given problem with modified input), then the given problem is itself NP-complete.

The main thing to take away from an NP-complete problem is that no way of solving it in polynomial time is known. NP-hard and NP-complete are ways of flagging classes of problems that, as far as anyone knows, are not solvable in realistic time.
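The asymmetry is easy to see on a concrete NP-complete problem such as subset-sum: checking a proposed answer is fast, while the only general method anyone knows for finding one is exhaustive search. A small illustrative sketch in JavaScript:

// Subset-sum, a classic NP-complete problem.
// Verifying a certificate (a proposed subset) takes polynomial time:
function verify(subset, target) {
  return subset.reduce((a, b) => a + b, 0) === target;
}

// Finding a solution, as far as anyone knows, needs exponential search:
function solve(numbers, target) {
  for (let mask = 0; mask < (1 << numbers.length); mask++) {  // 2^n subsets
    const subset = numbers.filter((_, i) => mask & (1 << i));
    if (verify(subset, target)) return subset;
  }
  return null;
}

console.log(solve([3, 34, 4, 12, 5, 2], 9));   // [ 4, 5 ]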

The Gödel Letter¹

Princeton, 20 March 1956

Dear Mr. von Neumann:

With the greatest sorrow I have learned of your illness. The news came to me as quite unexpected. Morgenstern already last summer told me of a bout of weakness you once had, but at that time he thought that this was not of any greater significance. As I hear, in the last months you have undergone a radical treatment and I am happy that this treatment was successful as desired, and that you are now doing better. I hope and wish for you that your condition will soon improve even more and that the newest medical discoveries, if possible, will lead to a complete recovery.

Since you now, as I hear, are feeling stronger, I would like to allow myself to write you about a mathematical problem, of which your opinion would very much interest me: One can obviously easily construct a Turing machine, which for every formula F in first order predicate logic and every natural number n, allows one to decide if there is a proof of F of length n (length = number of symbols).

Let ψ(F,n) be the number of steps the machine requires for this and let φ(n) = max_F ψ(F,n). The question is how fast φ(n) grows for an optimal machine. One can show that φ(n) ≥ k·n. If there really were a machine with φ(n) ∼ k·n (or even ∼ k·n²), this would have consequences of the greatest importance. Namely, it would obviously mean that in spite of the undecidability of the Entscheidungsproblem, the mental work of a mathematician concerning Yes-or-No questions could be completely replaced by a machine. After all, one would simply have to choose the natural number n so large that when the machine does not deliver a result, it makes no sense to think more about the problem. Now it seems to me, however, to be completely within the realm of possibility that φ(n) grows that slowly. Since it seems that φ(n) ≥ k·n is the only estimation which one can obtain by a generalization of the proof of the undecidability of the Entscheidungsproblem, and after all φ(n) ∼ k·n (or ∼ k·n²) only means that the number of steps, as opposed to pure trial and error, can be reduced from N to log N (or (log N)²).

However, such strong reductions appear in other finite problems, for example in the computation of the quadratic residue symbol using repeated application of the law of reciprocity. It would be interesting to know, for instance, the situation concerning the determination of primality of a number and how strongly in general the number of steps in finite combinatorial problems can be reduced with respect to simple exhaustive search.

I do not know if you have heard that “Post's problem”, whether there are degrees of unsolvability among problems of the form (∃y) φ(y,x), where φ is recursive, has been solved in the positive sense by a very young man by the name of Richard Friedberg. The solution is very elegant. Unfortunately, Friedberg does not intend to study mathematics, but rather medicine (apparently under the influence of his father). By the way, what do you think of the attempts to build the foundations of analysis on ramified type theory, which have recently gained momentum? You are probably aware that Paul Lorenzen has pushed ahead with this approach to the theory of Lebesgue measure. However, I believe that in important parts of analysis non-eliminable impredicative proof methods do appear.

I would be very happy to hear something from you personally. Please let me know if there is something that I can do for you. With my best greetings and wishes, as well to your wife,

Your very devoted,

Kurt Gödel

P.S. I congratulate you heartily on the …²


1. The letter was indeed originally written in German. Mike Sipser's translation can be found in “The History and the Status of the P versus NP Question”, Proceedings of the 24th STOC, 1992, pp. 603–618, which also contains a number of other historical quotes. Peter Clote also made an English translation, which can be found in the book “Arithmetic, Proof Theory and Computational Complexity”, P. Clote and J. Krajicek, eds., Oxford Univ. Press, 1993.

2. The balance of the postscript is missing.


Saturday, April 16, 2011

Fermat's Last Theorem

Jack Dikian

April 2011



Fermat's Last Theorem was first proposed by Fermat in the form of a note scribbled in the margin of his copy of the ancient Greek text Arithmetica. In 1637 he famously wrote in that margin that he had discovered a proof, but that it was too large to fit:


“I have discovered a truly remarkable proof which this margin is too small to contain.”

Fermat's Last Theorem states that xⁿ + yⁿ = zⁿ has no non-zero integer solutions for x, y and z when n > 2.

Despite the efforts of many mathematicians, it took another 350 years for a proof to be developed. The British mathematician Andrew Wiles spent almost six years developing a proof, which he published in 1993. By mid-1993, however, a bombshell had dropped: mathematicians refereeing Wiles' manuscript began finding faults in the proof. The faults were finally repaired by Wiles and his former student Richard Taylor in late 1994.


The question remains today: had Fermat really discovered a proof in the seventeenth century? The proof by Wiles and Taylor utilized mathematics not available to Fermat. Ribet's proof of the epsilon conjecture in 1986 accomplished the first half of Frey's strategy for proving Fermat's Last Theorem, and Wiles set off to prove the other half.

The machinery involved includes, for example:


  • Taniyama-Shimura conjecture for semistable elliptic curves.

  • Horizontal Iwasawa theory

  • Euler system

  • Selmer groups

  • Modular elliptic curves

Wednesday, April 6, 2011

Learning From Ants


Jack Dikian
April 2011

Ants

We've all come across a line of ants seemingly marching to and from their nest and food. Some of us may even have owned an ant farm created between two panes of glass. And however much we may not want them in our house, most of us have marveled at their ingenuity and at the social behaviour that lets them survive and thrive.


In fact one of the ants' greatest advantages is their social behaviour: working as a colony with specialized duties, they are more efficient than solitary insects at getting the necessary things done.


Communicating and Learning their way to food


Ants form and maintain a line to their food source by laying a trail of pheromone, i.e. a chemical to which other members of the same species are very sensitive. They deposit a certain amount of pheromone while walking, and each ant prefers to follow a direction rich in pheromone. This enables the ant colony to quickly find the shortest route. The first ants to return should normally be those on the shortest route, so this will be the first to be doubly marked by pheromone (once in each direction). Thus other ants will be more attracted to this route than to longer ones not yet doubly marked, which means it will become even more strongly marked with pheromone. Soon, therefore, nearly all the ants will choose this route. Here, ants are using the environment as a medium of communication.


Studying this uncanny skill has enabled researchers to create algorithms that search for optimal paths in graphs, applied to problems such as rerouting traffic in busy communication networks.


Us learning from ants

In computing research, the ant colony optimization algorithm (one of the swarm intelligence methods) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs. Graphs are mathematical structures used to model pair-wise relations between objects in a collection.

Whilst the first algorithm aimed to find an optimal path in a graph based on the behaviour of ants, the idea has since diversified to help solve a wider class of numerical problems.
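To give a flavour of how the pheromone mechanism becomes an algorithm, here is a highly simplified JavaScript sketch of ants choosing between two routes of different lengths; all the parameter values are invented for illustration:

// A highly simplified ant colony: ants repeatedly choose between two
// routes to food; shorter routes accumulate pheromone faster.
const lengths = [1.0, 2.0];          // route 1 is twice as long as route 0
let pheromone = [1.0, 1.0];          // start with no preference
const EVAPORATION = 0.1;
const ANTS = 20;

for (let step = 0; step < 50; step++) {
  const deposits = [0, 0];
  for (let ant = 0; ant < ANTS; ant++) {
    // Choose a route with probability proportional to its pheromone level.
    const p0 = pheromone[0] / (pheromone[0] + pheromone[1]);
    const route = Math.random() < p0 ? 0 : 1;
    deposits[route] += 1.0 / lengths[route];   // shorter route, more pheromone
  }
  pheromone = pheromone.map((p, r) => (1 - EVAPORATION) * p + deposits[r]);
}

// After a few dozen steps nearly all pheromone sits on the short route.
console.log(pheromone.map(p => p.toFixed(1)));

The positive feedback here is exactly the doubly-marked-trail effect described above: the shorter route collects deposits faster than evaporation removes them, and soon dominates.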

Application

The use of ant colony optimization algorithms has produced near-optimal solutions to the travelling salesman problem.

The travelling salesman problem, finding the shortest route that visits each member of a collection of locations and returns to the start, is one of the most intensively studied problems in computational mathematics. Rewards are even offered in various forums to encourage solutions. For example:

In February 2009, Robert Bosch (Oberlin College) created a 100,000-city instance of the travelling salesman problem that provides a representation of Leonardo da Vinci's Mona Lisa as a continuous-line drawing. An optimal solution to the 100,000-city Mona Lisa instance would set a new world record for the travelling salesman problem, and earn a $1000 prize.

Because ant colony optimization algorithms can be run continuously and adapt to changes in real time, they have an advantage over simulated annealing and genetic algorithms when the problem itself changes while it is being solved. Network routing, urban transportation systems and resource-constrained project scheduling are areas with an interest in this type of algorithm.

Friday, January 14, 2011

Halting Problem - Turing's proof

Jack Dikian
January 2011


A few days ago I started reading David Leavitt's book "The Man Who Knew Too Much: Alan Turing and the Invention of the Computer" (2007), and almost by coincidence met up with an old uni friend, recalling the countless hours we spent discussing esoteric problems in philosophy and mathematics. Turing machines were a big favorite: universal, and easy to code. Another was the halting problem: easy enough to understand, and proved unsolvable by Turing in 1936.

I wanted to revisit this problem and present it in a form that avoids the deeper symbolic language of set theory.

In computability theory, the halting problem is the decision problem of determining, given a program and an input, whether the program will eventually halt when run with that input or will run forever.

In 1936 Alan Turing proved that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist; the halting problem is said to be undecidable over Turing machines.

Formal Convention

The formal representation of a decision problem is the set of objects possessing the property in question. The halting set is represented by

K := { (i, x) | program i will eventually halt if run with input x }

and this set is recursively enumerable, i.e. there is a computable procedure that lists all of the pairs (i, x) it contains.
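One way to picture "recursively enumerable" is dovetailing: run every program on every input, a few steps at a time, and emit each pair the moment it halts. A sketch in JavaScript, assuming a hypothetical step-bounded interpreter runFor(i, x, t) that reports whether program i halts on input x within t steps:

// Enumerate the halting set K by dovetailing.
// runFor(i, x, t) is assumed, not real: simulate program i on input x
// for t steps and report whether it has halted yet.
function* enumerateK(runFor) {
  const emitted = new Set();
  for (let bound = 1; ; bound++) {             // ever-larger step budgets
    for (let i = 0; i < bound; i++) {
      for (let x = 0; x < bound; x++) {
        const key = i + "," + x;
        if (!emitted.has(key) && runFor(i, x, bound)) {
          emitted.add(key);
          yield [i, x];                        // (i, x) is in K
        }
      }
    }
  }
}

Notice that the enumerator can list the members of K but can never certify that a pair is absent; that gap is precisely where the halting problem lives.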

In practice

Programs may contain loops which may be infinite or finite in length. The amount of work done in any program will depend on the input. Programs may also consist of various numbers of loops, nested or in sequence.

Trial solution

We can just run the program with the given input. If the program stops, we know it stops. But what if the program doesn't stop in a reasonable amount of time? What are we to conclude? We are not able to conclude that it won't ever stop.

A look at the proof

Suppose you have a solution to the halting problem called H. H takes two inputs:


1. a program P and

2. an input I for the program P.

H generates the output "halt" if it determines that P stops on input I, and outputs "loop" otherwise.

Fig. 1

Program P ----->
                 [ H ] -----> "halt" or "loop"
Input I   ----->

So now H can be revised to take P as both inputs (the program and its input), and H should be able to determine whether P will halt with P as its input. Let us construct a new, simple algorithm K that takes H's output as its input and does the following:

1. if H outputs "loop" then K halts,

2. otherwise H's output of "halt" causes K to loop forever.

That is, K will do the opposite of H's output.


function K(P) {
  if (H(P, P) == "loop") {
    return;          // H says P loops on itself, so K halts
  } else {
    while (true);    // H says P halts on itself, so K loops forever
  }
}

Fig. 2

P -----> [ H(P, P) ] -----> output "loop"? --- true  ---> K halts
                                            --- false ---> K loops forever

Since K is a program, let us use K as the input to K.

Fig. 3

K -----> [ H(K, K) ] -----> output "loop"? --- true  ---> K halts
                                            --- false ---> K loops forever

If H says that K halts, then K itself loops (that is how we constructed it). If H says that K loops, then K halts. In either case H gives the wrong answer for K. Thus H cannot work in all cases, and no general solution to the halting problem can exist.