
Thursday, August 5, 2010

Quantum-connected computers overturn the uncertainty principle


Introduction


Back in the mid eighties, while completing a pure mathematics degree and using the then state-of-the-art mini-computers, the PDP-11 family of processors (we couldn't afford the services of the much more brawny nitrogen-cooled Crays to solve sparse Hadamard matrices), I contributed an article to a UK computer journal discussing the future of computers, artificial intelligence, human interfaces and visualization. I guess you could call it a naive attempt to predict how systems will, or may, evolve. Of course this was through my own lens of experience, watching processor speeds rapidly increasing and memory, disk and everything else growing almost exponentially.

Of course, the implicit question was: if all that has happened in the first 50 years of computer history, what will happen in the next 50 or so years?

Moore's Law is an empirical formula describing the evolution of processors which is often cited to predict future progress in the field, as it has proved quite accurate in the past: it states that the transistor count in an up-to-date processor doubles every 18 to 24 months, which roughly means that computational power grows exponentially, doubling every two years. At the same time, the science of computability describes, among other things, a class of 'NP-hard' problems - sometimes described as 'intractable' or 'combinatorially exploding' - whose computational cost grows exponentially with the size of the problem, no matter how fast processors become.
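As a back-of-the-envelope illustration, here is a minimal sketch in Lisp (the function name is mine, and the clean two-year doubling period is itself a simplification): the growth factor over a given number of years is simply a power of two.

;; Rough Moore's Law growth factor, assuming transistor counts double
;; exactly once every DOUBLING-PERIOD years (an illustrative simplification).
(defun transistor-growth (years &optional (doubling-period 2))
  (expt 2 (/ years doubling-period)))

;; (transistor-growth 50) => 33554432 - another 50 years of doubling would
;; multiply transistor counts by roughly 33 million.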

An example of an NP-hard algorithm is finding the exit of a labyrinth: it doesn't require much effort if there is only one crossing, but it becomes far more demanding when the number of crossings grows, until the search is either impossible to compute because of limited resources, or computable but requiring an unacceptable amount of time.
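To make the blow-up concrete, here is a minimal sketch (the labyrinth is idealised and the function is purely illustrative): if every crossing offers two onward passages, an exhaustive search must in the worst case walk every one of the 2^n distinct routes, so each extra crossing doubles the work.

;; Naive exhaustive search over an idealised labyrinth in which each of the
;; N crossings offers two onward passages.  COUNT-ROUTES visits every route,
;; so the work doubles with each additional crossing: 2^n routes in all.
(defun count-routes (crossings-remaining)
  (if (zerop crossings-remaining)
      1                                              ; reached the exit: one complete route
      (+ (count-routes (1- crossings-remaining))     ; take the left passage
         (count-routes (1- crossings-remaining)))))  ; take the right passage

;; (count-routes 10) => 1024, (count-routes 30) => 1073741824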

Many, if not all, of the algorithms related to Artificial Intelligence are extremely demanding in terms of computational resources, because they are either NP-hard or involve combinatorial calculations of rapidly growing complexity.

Not every advance in processing architecture stems from the same line of research. For example, IBM researchers have recently made huge strides in mapping the architecture of the macaque monkey brain. They have traced long-distance connections in the brain - the "interstate highways" which transmit information between distant areas. Their maps may help researchers grasp how and where the brain sends information better than ever before, and possibly develop processors that can keep up with the brain's immense computational power and navigate its complex architecture.

Artificial intelligence and cognitive modeling try to simulate some properties of neural networks. While similar in their techniques, the former has the aim of solving particular tasks, while the latter aims to build mathematical models of biological neural systems.
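As a flavour of what "simulating some properties of neural networks" means in practice, here is a minimal sketch of a single artificial neuron (the function name, weights and threshold are illustrative, not drawn from any particular model):

;; A single artificial neuron: a weighted sum of the inputs passed through a
;; simple threshold.  Networks used in AI and in cognitive modelling are
;; built out of many units like this one.
(defun neuron-output (inputs weights threshold)
  (if (> (reduce #'+ (mapcar #'* inputs weights)) threshold)
      1
      0))

;; Example: a unit that behaves like a logical AND on binary inputs.
;; (neuron-output '(1 1) '(0.6 0.6) 1.0) => 1
;; (neuron-output '(1 0) '(0.6 0.6) 1.0) => 0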

Another trajectory – that of Quantum Computers

The uncertainty principle is a key underpinning of quantum mechanics: a particle's position or its velocity can be measured precisely, but not both at once. Now, according to five physicists from Germany, Switzerland and Canada, in a letter published in Nature Physics(1), quantum computer memory could let us violate this principle.

Paul Dirac, who shared the 1933 Nobel Prize in Physics with Erwin Schrödinger "for the discovery of new productive forms of atomic theory", provided a concrete illustration of what the uncertainty principle means. He explained that one of the very few ways to measure a particle's position is to hit it with a photon and then chart where the photon lands on a detector. That gives you the particle's position, yes, but it also fundamentally changes its velocity, and the only way to learn the velocity would, in turn, alter the position.

That has more or less been the status quo of quantum mechanics since Werner Heisenberg first published his theories in 1927, and no attempt to overturn it - including several by Albert Einstein himself - has proved successful. But now the five physicists hope to succeed where Einstein failed. If they do, it will be because of something that wasn't even theorized until many years after Einstein's death: quantum computers.

Key to quantum computers are qubits, the individual units of quantum memory. A particle would need to be entangled with a quantum memory large enough to hold all its possible states and degrees of freedom. Then, the particle would be separated and one of its features measured. If, say, its position was measured, then the researcher would tell the keeper of the quantum memory to measure its velocity.

Because the uncertainty principle wouldn't extend from the particle to the memory, it wouldn't prevent the keeper from measuring this second figure, allowing for exact, or, for subtle mathematical reasons, almost exact measurements of both quantities.
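For the mathematically inclined, the result in the Nature Physics letter(1) is usually stated as an entropic uncertainty relation. Roughly, and in my own paraphrase of the published bound,

H(R|B) + H(S|B) >= log2(1/c) + H(A|B)

where R and S are the two measurements made on the particle A, B is the quantum memory, c measures how incompatible the two measurements are, and H(.|.) is conditional entropy. When the particle and the memory are maximally entangled, H(A|B) is negative enough to cancel the log2(1/c) term, so both outcomes can, in principle, be predicted with certainty.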

It would take lots of qubits - far more than the dozen or so we've so far been able to generate at any one time - to entangle all that quantum information from a particle, and the task of entangling so many qubits together would be extremely fragile and tricky. Not impossibly tricky, but still way beyond what we can do now.


(1) Mario Berta, Matthias Christandl, Roger Colbeck, Joseph M. Renes & Renato Renner, "The uncertainty principle in the presence of quantum memory", Nature Physics, published online 25 July 2010, doi:10.1038/nphys1734.

Wednesday, March 24, 2010

Parenthesised Road Of Lisp



Jack Dikian

December 1998

Introduction

Back in the 1980s the author was involved in writing a formal (limited) grammar to support the development of an "expert" system to deal with the diagnosis and configuration of Unix file systems.

When reviewing a number of artificial (programming) languages we became interested in LISP. We realised that its strongly expressive notation for computational linguistics and its flexible data constructs might meet the needs of a project that was yet to be fully defined.

We were particularly interested in manipulating symbols (words, parts of speech) and structured objects (sequences), and in performing operations on them without having to worry about how these "natural" concepts are actually represented in the machine. We were also drawn to the ability of routines to call themselves without restriction, as found in LISP's recursion techniques.

Our need to understand LISP took us down a very parenthesised road. This is a very brief introduction to what we saw in LISP.

What is LISP

Lisp is a programming language that was developed as a mathematical notation for computer programs, greatly influenced by Alonzo Church's lambda calculus, and it became, pretty much, the staple programming language for artificial intelligence research. Lisp derives its name from LISt Processing, and it is no surprise that linked lists are one of LISP's key data constructs. LISP provided a basis, or became a genesis, for significant concepts in early computer science thinking. LISP's practice of typing data objects rather than variables gives it the flexibility to provide native support for arranging data into hierarchies based on subset relationships.
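Since linked lists come up again below, here is the basic building block (generic textbook examples, not taken from the original grammar project): a list is a chain of cons cells, built with CONS and taken apart with CAR and CDR.

;; A list is a chain of cons cells: CONS builds one, CAR and CDR take it apart.
(cons 'a '(b c))      ; => (A B C)
(car '(a b c))        ; => A
(cdr '(a b c))        ; => (B C)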

Linked lists are one of Lisp's major data structures, and Lisp source code is itself made up of lists. As a result, Lisp programs can manipulate source as a data structure, which makes it natural both to parse grammars and to create new syntax. This interchangeability of source (the formal grammar defining the language) and data gives LISP its unique syntax. All source is written as s-expressions, or parenthesised lists: a call to a routine is written as a list with the operator's name first and the arguments following; for instance, a function F that takes two arguments might be called using (f x y).
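A small illustration of this code-as-data property (a generic example, not from the original project): a quoted expression is just a list that can be inspected or rearranged, and EVAL turns it back into running code.

;; A Lisp form is itself a list, so it can be built and inspected as data...
(defvar *form* '(+ 2 3 4))
(first *form*)                    ; => +        (the operator, as a symbol)
(rest *form*)                     ; => (2 3 4)  (the arguments)

;; ...and then handed back to the evaluator as code.
(eval *form*)                     ; => 9
(eval (cons '* (rest *form*)))    ; => 24, after swapping the operator for *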


Why LISP

The biggest advantage LISP has over languages such as C, C++ and Pascal stems from its philosophy that what's good for the language's designer is also good for the language's users. Consequently the programmer, unlike in most other languages, can add features to the language itself, effectively extending the language from within. LISP also tends to provide a more transparent map between ideas about how the program works and the source code that expresses them. This makes it ideal for situations where a problem is only partially specified, or where its nature is not fully understood at the outset: an approximation can be developed and refined interactively over time.
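As a small, hedged illustration of "adding features to the language" (a standard textbook-style macro, not something taken from the original project), here is a control construct defined by the programmer rather than supplied by the language:

;; A user-defined control construct: run BODY only when TEST is false.
;; (Common Lisp already provides UNLESS; WHEN-NOT here is purely illustrative.)
(defmacro when-not (test &body body)
  `(if ,test
       nil
       (progn ,@body)))

;; (when-not (> 1 2) 'smaller) => SMALLER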

Also, where a problem involves multiple representations of data from varying sources, LISP's flexible data abstraction makes building robust systems simpler.

The Parentheses ( )

For people like me, who were used to looking at C, Pascal and even Fortran, LISP was initially confusing. Its syntax was nothing like anything I had seen before (perhaps with the exception of APL, which was itself mathematically inspired and served as an executable notation for mathematical algorithms). Parentheses are the basic building blocks of LISP: all functions, operations and scripts take place within parentheses, and a complete function has a matching number of opening and closing parentheses.

Merging two lists:

(APPEND '(A B) '(C D))

Answer (A B C D)


Creates a function called ADDIT that takes two parameters x and y and adds them.

(DEFUN ADDIT (x y) (+ x y))

(ADDIT 3 4)
Answer 7

Creates a function for calculating integer powers by repeated squaring

(defun expon (base powr)
  ;; Exponentiation by repeated squaring: an even power squares the base and
  ;; halves the exponent; an odd power peels off one factor of BASE first.
  (cond ((zerop powr) 1)
        ((evenp powr) (expon (* base base) (truncate powr 2)))
        (t (* base (expon (* base base) (truncate powr 2))))))
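
A quick check of the definition, in the style of the earlier examples (the exponent is assumed to be a non-negative integer):

(expon 2 10)
Answer 1024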