3 April 2012

"Super-Turing" AI gets development funding

by Will Parker

University of Massachusetts Amherst computer scientist Hava Siegelmann has received funding to develop the first ever "Super-Turing" computer. Based on analog recurrent neural networks, Siegelmann says the device will usher in a level of intelligence not seen before in artificial computation.

Mathematician and alpha-geek Alan Turing set out the basis for the digital computers we use today back in the 1930s. In 1948, he proposed a concept for a different type of computing machine that would use what he called "adaptive inference" in its computations. Turing predicted this kind of computing device would mimic life itself, but he died without developing the concept further.

Now, Siegelmann, an expert in neural networks, wants to evolve Turing's ideas into reality. She and research colleague Jeremie Cabessa are working to put what she has dubbed Super-Turing computation into an adaptable computational system that learns and evolves. She says the system will use input from the environment in a way quite different from classic Turing-type digital computers. The current issue of Neural Computation contains details of her work.

"This model is inspired by the brain," she explained. "It is a mathematical formulation of the brain's neural networks with their adaptive abilities. Each time a Super-Turing machine gets input it literally becomes a different machine... If you want a machine to interact successfully with a human partner, you'd like one that can adapt to idiosyncratic speech, recognize facial patterns and allow interactions between partners to evolve just like we do. That's what this model can offer."

Each step in Siegelmann's Super-Turing model starts with a new Turing machine that computes once and then adapts. The size of the set of natural numbers is represented by the notation ℵ₀ (aleph-zero), which is also the number of different infinite calculations possible by classical Turing machines in a real-world environment on continuously arriving inputs. Siegelmann says her most recent analysis shows that Super-Turing computation has 2^ℵ₀ possible behaviors. "If the Turing machine had 300 behaviors, the Super-Turing would have 2^300, more than the number of atoms in the observable universe," she enthused.
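As a quick sanity check on that comparison (an editorial aside, not part of Siegelmann's paper): 2^300 is roughly 2 × 10^90, while the number of atoms in the observable universe is commonly estimated at around 10^80, so the claim holds with room to spare.

# Quick check of the scale comparison quoted above (editorial illustration).
behaviors = 2 ** 300        # behaviors of a 300-behavior machine under the Super-Turing count
atoms_estimate = 10 ** 80   # common order-of-magnitude estimate for atoms in the observable universe
print(f"2^300 is about {behaviors:.2e}")   # roughly 2.04e+90
print(behaviors > atoms_estimate)          # True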

Siegelmann claims the Super-Turing machine will not only be flexible and adaptable, but also economical. When presented with a visual problem, for example, it will act more like our human brains and choose salient features in the environment on which to focus, rather than using its power to visually sample the entire scene as a camera does. This economy of effort, she contends, is another hallmark of high artificial intelligence.
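For readers who want a feel for what "choosing salient features rather than sampling the entire scene" might look like in practice, here is a small, hypothetical Python sketch that scores image patches by local contrast and keeps only the few highest-scoring ones. It is a generic saliency heuristic offered for illustration; nothing in the article specifies how Siegelmann's system would actually select features.

import numpy as np

def salient_patches(image, patch=8, k=4):
    """Return the top-k patches of an image ranked by a simple contrast score.

    A generic illustration of economical, saliency-driven sampling: instead of
    processing every pixel, only the most "interesting" regions are kept.
    """
    h, w = image.shape
    scored = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            region = image[r:r + patch, c:c + patch]
            scored.append((region.std(), (r, c)))   # standard deviation as a crude salience score
    scored.sort(reverse=True, key=lambda item: item[0])
    return [pos for _, pos in scored[:k]]


rng = np.random.default_rng(1)
scene = rng.random((64, 64)) * 0.1         # mostly flat background
scene[16:24, 40:48] = rng.random((8, 8))   # one high-contrast region
print(salient_patches(scene))              # the patch at (16, 40) should rank first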

Related:
Demonstration of memflector brings brain-like computing a step closer
Cheap-and-cheerful memristor tech set to spur AI research
Bringing Up BabyBot
Brain Begins To Reveal Its Codes

Source: University of Massachusetts at Amherst