18 August 2006

A Portrait Of The Artist As A Tin Man

By Rusty Rockets

Of all the things that make us human, the characteristic that probably best defines humanity is the tendency toward artistic expression. But scientists remain perplexed as to how artistic creativity is generated in the brain, how it finds expression in its various forms, and how to agree upon which products of this creativity deserve to be called "art". The conundrum deepens further when scientists try to endow artificial intelligence (AI) with creativity, as it seems like trying to put the cart before the horse, or running before you can walk, or... well, you get the idea. If we can't understand our own creativity, how can researchers ever expect a robot to create something that we think might be art?

A more macabre approach to robot art involves hooking up living cells to some off-the-shelf electronics and letting the hybrid loose with a pencil or brush. Such works raise poignant questions about the human mind and where creativity may come from, and have of course also raised a number of ethical concerns. Nonetheless, creative bots of all sorts have stirred up a lot of interest, and for a number of years now there have been international competitions, exhibitions and academic seminars that feature creative robots and "their" work. What is interesting about the robot artist movement - other than the blurring of what is art and what is an artist - is that as these robots become more sophisticated, they will force us to re-examine questions about our fundamental human qualities.

You might associate a robotic artist with advanced AI, but one of the initial "artbots" was an elaborate fusion of electrically stimulated rat neurons controlling high-tech, colored marker-pen wielding claw thingies attached to a robotic arm; a blend of metal and biology not unlike the Daleks from Dr Who. The development of this rat-neuron-powered bio-bot named MEART (multi-electrode array art) - Semi Living Artist was spread across multiple geographical locations. The cultured rat nerve cells reside in the Georgia Institute of Technology's neuro-engineering lab, and communicate in real time with the robotic arm located at the University of Western Australia (UWA) in Perth, Western Australia. The bio-bot's "brain" and "body" communicate via 60 two-way electrodes, and neural signals are recorded and sent to a computer that translates neural activity into robotic movement.

The project was developed and hosted by SymbioticA - The Art & Science Collaborative Research Lab, housed at UWA, whose work raises important questions about creativity, what it is, and deeper questions about brain function. "We're attempting to create an entity that over time will evolve, learn, and express itself through art," said Professor Steve Potter at Georgia Tech. SymbioticA's Guy Ben-Ary, who directs the Image Acquisition and Analysis Facility in the School of Anatomy and Human Biology at UWA, explained: "The goals are both to learn more about how brains work and to apply what is learned to designing fundamentally different types of artificial computing systems."

The SymbioticA team has continued to develop MEART - The Semi Living Artist in an effort to suggest "future scenarios where humans will create/grow/manufacture intuitive and creative 'thinking entities' that could be intelligent and unpredictable beings." During Perth's annual arts festival, ARTRAGE, earlier this year, the SymbioticA team had people lining up to have their portraits drawn by MEART. The same Perth-Atlanta rat neuron link was used, but this time MEART was given an "eye" in the form of a video camera. Through its eye, MEART could capture and analyze the faces of those having their portrait drawn. MEART produced an image composed of cross-hatching not unlike its earlier scribbles, but this time the drawings had more form. In trying to remain true to their initial goal, the researchers recently said of the MEART project that they are "interested in the possibility of emergent behavior, of creating an 'artist' rather than an 'artwork'."

UWA's Ben-Ary commented that the project was a "cultural experiment," and considered it highly significant that MEART was only ever initiated while in the presence of an audience, as though this may have some bearing on the bio-bot's artistic behavior. Subsequently, Kate Vickers, a researcher and artist working on the project, claims: "perhaps it is through the audience that behavior emerges that is linked to and yet independent of the project collaborators' intentions." Gimmicky and a tad pretentious perhaps, but the comments do draw attention to the fact that humans evolve according to their environment, and express themselves via a complex network of life experiences within that environment.

Intuitively we feel it is our life experience that compels us to create and express otherwise inexplicable feelings, and to show others these personal perspectives on life. Even if a robot creates something that looks like art, what is it that a robot could possibly be expressing unless it has the ability to appreciate visceral experiences the same way that a human can? In this respect, it would seem that our ability to "create" is unique, as a robot - even one rigged up to living cells - would need to evolve without any human intervention before it could take full credit for its artistic creations. But just how would perceptions of ourselves change if robots could learn about and interact with their environment? What if they could exhibit autonomous and artistic behaviors we thought to be unique only to humans? Such robots (would that be the appropriate term?) may arrive sooner than you think, as robotics researchers from a number of institutions are taking AI in radical new directions.

The Max Planck Institute for Biological Cybernetics has been working on a Bayesian Approach to Cognitive Systems (BACS), where researchers are investigating the extent to which Bayes' theorem can be used in artificial systems capable of managing complex tasks in a real world environment. The theorem is named for the mathematician Thomas Bayes, an 18th century Presbyterian minister, and it describes how to update the probability of an event as new evidence becomes available. "It is a model for rational judgment when only uncertain and incomplete information is available. Bayes' theorem is applicable to all questions relating to learning from experience," say the team, headed by Professor Siegwart of the Eidgenössische Technische Hochschule Zürich. The team hopes that this method will lead to robots that are "capable of handling incomplete information, analyzing their environment, acquiring context-specific knowledge, interpreting the data and, together with humans, taking decisions." In short, analyzing situational data as a human would.
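The kind of belief-updating Bayes' theorem provides can be sketched in a few lines. The scenario and numbers below are purely illustrative (they are not from the BACS project): a robot revises its belief that an obstacle lies ahead after its sensor fires, using P(H|E) = P(E|H)P(H) / P(E).

```python
def bayes_update(prior, likelihood, false_alarm):
    """Posterior probability of hypothesis H after positive evidence E.

    prior       -- P(H), belief before the sensor reading
    likelihood  -- P(E|H), sensor fires given the obstacle is there
    false_alarm -- P(E|not H), sensor fires given no obstacle
    P(E) is expanded via the law of total probability.
    """
    p_evidence = likelihood * prior + false_alarm * (1 - prior)
    return likelihood * prior / p_evidence

# Made-up numbers: a 30% prior chance of an obstacle; the sensor fires
# 90% of the time an obstacle is present, 20% of the time it is not.
posterior = bayes_update(prior=0.3, likelihood=0.9, false_alarm=0.2)
print(round(posterior, 3))  # belief in the obstacle rises to about 0.659
```

One positive reading nearly doubles the robot's confidence; feeding the posterior back in as the next prior is what lets such a system "learn from experience" as readings accumulate.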

Researchers at the Institute of Cognitive Science and Technology in Italy have taken AI development a step further, and claim to have devised a way in which machines can "evolve and develop by themselves without human intervention." The new technology combines robotics, linguistics and biology, and the team claims it has allowed researchers at Sony's Computer Science Laboratory in France to add a new level of intelligence to the AIBO (Artificial Intelligence roBOt) dog. "What has been achieved at Sony shows that the technology gives the robot the ability to develop its own language with which to describe its environment and interact with other AIBOs - it sees a ball and it can tell another one where the ball is, if it's moving and what color it is, and the other is capable of recognizing it," project coordinator Stefano Nolfi said.

The research team explains that the AIBOs were programmed to look for new and more challenging tasks to complete, and that they could negotiate between each other what they would call a newly discovered object, just as small children do. The team appears to have developed a system whereby a robot has an open-ended learning capacity, which includes curiosity, grammar construction, problem solving and the ability to call it a day when a task seems fruitless. "This is a project with a big impact. We've managed to ground AI in reality, in the real world," says Nolfi. Where such technologies will lead is anybody's guess, but it would seem that if a robot can analyze, interpret and figure out ways of expressing that environment, then autonomous robotic creativity could be just around the corner. The androids in Blade Runner had their memories implanted so that they would believe that they were human, while the real robots of the future will be able to express memories uniquely their own. But what do we call these entities if they share the same defining qualities that make us human?

Check out MEART - The Semi Living Artist.