5 May 2006

Bringing Up BabyBot

By Rusty Rockets

Robot aficionados may recall an earlier article about an android orangutan called Lucy and her Geppetto, Steve Grand. Lucy is an independent venture, and thanks to the vagaries of funding, much of Lucy's life has been spent in Grand's garden shed in Somerset, England. According to Grand, Lucy's unique development into maturity makes her one of the most advanced Artificial Intelligence (AI) research bots in the world. Yes, that's right, maturity. Lucy is gaining her knowledge the same way a human baby would: by experiencing the world directly with both mind and body. This method, claims Grand, is the only way that AI resembling human consciousness can be built into a machine.

Grand's methodology is in stark contrast to that of many other AI researchers, who have expended a great deal of effort cramming prefabricated "knowledge" and "facts" into machines in the hope that they will ultimately end up mimicking human intelligence. Results from this top-down approach range from patchy to downright disappointing, so it's no surprise that a growing number of AI researchers are now looking more seriously at Grand's bottom-up approach. One of these new initiatives - called BabyBot - from the Laboratory for Integrated Advanced Robotics (LIRA) at Italy's Genoa University could mean that Grand's Lucy may soon have a new playmate.

BabyBot is modeled on a two-year-old child's body (perhaps ToddlerBot would have been more appropriate) and has been designed to help researchers develop a better understanding of how human perception works. The researchers believe that this knowledge will eventually lead to machines that can interact with, learn from and perceive their environment, and ultimately to a thinking machine in the true sense of the term. Even more ambitiously, the researchers hope that the BabyBot project might unlock some of the mysteries associated with human consciousness.

This sort of approach gets the thumbs-up from Grand, who is unshakable in his belief that human consciousness develops in parallel with our senses experiencing the world as we grow. "I think this is the only way to do AI. The day we're born we start to thrash our arms about, and we quickly discover that this is correlated with a change in our visual sensations. From this we learn hand-eye coordination and discover the difference between 'me' and 'the world', and from those small discoveries all art, literature and science is born. Everything we do as an adult is built upon those sensorimotor foundations, and it seems to me you can't shortcut the process."

One of the first things BabyBot's creators are attempting to model is the human sense of "presence", or self-awareness, which consists of a complicated interaction of senses such as sight, hearing and touch. If AI such as this were developed successfully, it could lead to some incredible applications in robotics, say the researchers. "Our sense of presence is essentially our consciousness," said Giorgio Metta, a researcher at LIRA and project coordinator for the Artificial Development Approach to Presence Technologies (ADAPT), a project in which LIRA participates.

We may take it for granted, but everything we remember about a particular experience, pleasurable or painful, is the result of many sensory inputs acquired over relatively long periods. Our brains are uniquely wired over our lifetimes as a result of these sensory experiences, and this uniqueness shows in the way we remember those experiences and in how they influence our expectations. Two people experiencing the same event will often recall or describe it quite differently, because each makes different associations with the event based on previous experiences. As such, some researchers believe that getting to the heart of consciousness is unlikely, as quantifying it could be nigh on impossible. This is commonly referred to as the qualia problem.

While neuroscientists are beginning to understand how the brain functions, thanks to imaging techniques like MRI, the processes behind consciousness remain a mystery. Our understanding of human consciousness is still so vague that much of the subject remains in the realms of philosophy. But the LIRA team are not blind to the difficulties associated with their ambitious quest for knowledge. "The problem is duality - where does the brain end and the mind begin? The question is whether we need to consider them as two different aspects of reality," says Metta.

Despite these concerns, the LIRA researchers ploughed ahead by approaching consciousness as they would any other engineering problem. Where neuroscientists build theories, engineers develop understanding by building things - a technique known as synthetic methodology. "We took an engineering approach to the problem; it was really consciousness for engineers," says Metta, "which means we first developed a model and then we sought to test this model by, in this case, developing a robot to conform to it."

The first stage of the process was to model aspects of a biological system, so the team had developmental psychologists put 6- to 18-month-old infants through a battery of tests. "We could control a lot of the parameters to see how young children perceive and interact with the world around them. What they do when interacting with their mothers or strangers, what they see, the objects they interact with, for example," said Metta.

The team then developed a process model of consciousness, in which objects in the environment are not considered real until they have been perceived via the newly developed system. This is best explained by the order of events through which a real infant experiences its environment. A real infant does not, as many current AI models do, interact through perception, then cognition, then action, as that would imply some level of prior knowledge already inherent in the infant. When an infant perceives and learns, it actually follows a pattern of action, then cognition, then perception.
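
To make that ordering concrete, the sketch below is a minimal, hypothetical illustration (it is not LIRA's code, and every name in it - random_motor_command, read_sensors - is invented): the robot acts first, observes the sensory change its action caused, and only then accumulates the correlations from which percepts could later be built.

```python
import random

# Hypothetical action-first developmental loop: act, observe the
# consequences, and let "perception" emerge from the accumulated
# action/sensation correlations.

def random_motor_command():
    """Motor babbling: a random three-joint arm movement (placeholder)."""
    return [random.uniform(-1.0, 1.0) for _ in range(3)]

def read_sensors(command):
    """Placeholder for the camera/touch readings that follow the movement."""
    # On a real robot this would come from hardware; here we fake a
    # sensory change loosely correlated with the command.
    return [c + random.gauss(0.0, 0.1) for c in command]

experience = []  # accumulated (action, sensation) pairs

for step in range(1000):
    action = random_motor_command()         # 1. act first
    sensation = read_sensors(action)        # 2. observe what the act changed
    experience.append((action, sensation))  # 3. percepts are built later from
                                            #    the regularities in this history
```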

The LIRA model also covers expectations based on what has already been perceived. This is analogous to a child being burnt for the first time and subsequently knowing not to touch fire again. Muscle reflexes are also conditioned this way, responding to what the eyes perceive, or with certain movements that have become automatic.
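
A toy version of that expectation mechanism - again purely illustrative, not the LIRA model - might pair a cue with the outcome that followed it and then trigger a reflex when the cue alone is seen:

```python
# Hypothetical associative memory for the "burnt child" example.
expectations = {}  # maps a perceived cue to the outcome it came to predict

def experience_outcome(cue, outcome):
    """Record what followed a cue, e.g. ('flame', 'pain')."""
    expectations[cue] = outcome

def reflex(cue):
    """Withdraw automatically if the cue has come to predict pain."""
    return "withdraw" if expectations.get(cue) == "pain" else "reach"

experience_outcome("flame", "pain")  # the first, painful contact
print(reflex("flame"))               # later, sight alone triggers withdrawal
print(reflex("toy"))                 # an unlearned cue produces no reflex
```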

But right now this is all theory, and only future tests will reveal whether the model works. "It's not validated. It's a starting point to understand the problem," said Metta. At this stage, however, the team is just happy to have got BabyBot to learn to separate objects from the background, and to ascertain the physical properties of perceived objects, using only a minimal set of instructions. This seemingly simple, yet highly developed, procedure clears the way for the next step: actually grasping an object.
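
The article does not say how BabyBot achieves that separation, but one crude stand-in for figure-ground segmentation is frame differencing: pixels that change when the robot disturbs the scene are grouped as "object", everything else as background. The NumPy snippet below is only a sketch of that general idea, under that assumption.

```python
import numpy as np

# Illustrative figure-ground separation by frame differencing: compare the
# scene before and after it is disturbed, and call the changed pixels "object".

def segment_object(before, after, threshold=0.1):
    """Return a boolean mask of pixels that changed between two frames."""
    diff = np.abs(after.astype(float) - before.astype(float))
    return diff > threshold

# Toy 8x8 "images": a bright patch appears in the second frame.
before = np.zeros((8, 8))
after = before.copy()
after[2:5, 3:6] = 1.0

mask = segment_object(before, after)
print(mask.astype(int))  # 1s mark the segmented "object" region
```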

Despite the detailed planning LIRA has undertaken, Grand has some reservations about their approach. "The risk is that the engineering approach they are taking may be too abstract too soon. We only know of one kind of machine that can do this sort of thing - the mammalian brain - and we have absolutely no idea what computational principles it uses. So abstractions, particularly those based on algorithms and traditional neural networks, may fail to capture the essence of sensorimotor learning as animals do it. A common mistake in AI is to assume that the model is the thing being modeled, which more often than not just leads to a pale imitation of real intelligence. It's possible that the way the brain does it is the only way it can be done, or at least is a very different way from the way a scientist might assume it's done," said Grand.

Grand posits that perhaps there is really only one way that intelligence can be produced, and that only an organic brain can achieve it. Perhaps true human intelligence could be developed in a machine, but only if its perception is developed from scratch. In this regard, maybe we have already learnt something from Lucy and BabyBot, in that trying to understand consciousness and qualia might just be a red herring. If intelligence and consciousness can only ever be developed from an initial state, body and mind in unison, then what else is there to know? Perhaps mammalian consciousness is an absolute that cannot be broken down. "Nevertheless," says Grand, "[the BabyBot project] is a good start. Maybe they'll get lucky, but I wonder if more attention should have been paid to real biology."

Watch some videos of BabyBot

Pics courtesy of the LIRA Lab