23 November 2005

Brain Begins To Reveal Its Codes

by Kate Melville

Image: Neurons in the inferotemporal (IT) cortex, the last purely visual brain region, respond selectively to different images. As pictures are randomly presented to the monkey during specific intervals (top), neurons at different sites in IT produce distinct patterns of activity for each picture (bottom). For example, neurons at site 1 favor the toy and the yam, while neurons at site 3 prefer the monkey face and the cat. Combining just 100 recording sites is enough to provide highly accurate information about the picture and its category, which both a simple classifier and downstream neurons in more cognitive brain regions can decode.

Despite what sci-fi movies would have us believe, scientists cannot interact directly with the human brain because they don't understand enough about how it codes and decodes information. But in a new study published in Science, neuroscientists at the McGovern Institute report that they have been able to read out a part of the visual system's code involved in recognizing visual objects.

Many believe that deciphering the brain's coding mechanisms is crucial to truly understanding the nature of intelligence. "We want to know how the brain works to create intelligence," said McGovern researcher Tomaso Poggio. "Our ability to recognize objects in the visual world is among the most complex problems the brain must solve. Computationally, it is much harder than reasoning. Yet we take it for granted because it appears to happen automatically and almost unconsciously."

Poggio explained how, in a fraction of a second, visual input about an object runs from the retina through successively higher levels of the visual stream, with the information continuously reformatted until it reaches the highest purely visual level, the inferotemporal (IT) cortex. This region then passes key identification and categorization information on to other brain areas, such as the prefrontal cortex.

To further understand how the IT cortex represents that information, the researchers trained monkeys to recognize different objects grouped into categories, such as faces, toys, and vehicles. The images appeared in different sizes and positions in the visual field. Recording the activity of hundreds of IT neurons produced a database of the neural activity patterns evoked by each object.

The researchers then used a computer algorithm, which they called a "classifier," to decipher the codes. The classifier was first "trained" to associate each object - a monkey's face, for example - with a particular pattern of neural signals. Once sufficiently trained, the classifier could decode new neural activity patterns. Astonishingly, it revealed that just a split second's worth of the neural signal contained enough specific information to identify and categorize the object, even at positions and sizes the classifier had not previously "seen."
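
To make that procedure concrete, here is a minimal sketch, in Python, of the train-then-generalize idea. It uses synthetic "neural" responses and a generic linear classifier from scikit-learn rather than the actual recordings or the specific classifier the researchers used; the neuron counts, tuning model, and noise levels are illustrative assumptions only.

    # Toy sketch of classifier-based neural decoding (synthetic data, not the
    # study's recordings). Each simulated neuron gets a random tuning profile
    # over objects; a linear classifier is trained on some presentation
    # conditions (positions/sizes) and tested on held-out ones.
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    n_neurons = 256        # recorded sites (the study used a few hundred)
    n_objects = 20         # distinct pictures
    n_conditions = 6       # position/size combinations per picture
    trials_per_cond = 10   # repeated presentations

    # Hypothetical tuning: each neuron responds at a preferred level to each
    # object, modulated slightly by presentation condition and corrupted by noise.
    tuning = rng.gamma(2.0, 1.0, size=(n_neurons, n_objects))
    cond_gain = 1.0 + 0.1 * rng.standard_normal((n_conditions, n_neurons))

    X, y, cond = [], [], []
    for obj in range(n_objects):
        for c in range(n_conditions):
            mean_rates = tuning[:, obj] * cond_gain[c]
            for _ in range(trials_per_cond):
                X.append(mean_rates + 0.5 * rng.standard_normal(n_neurons))
                y.append(obj)
                cond.append(c)
    X, y, cond = np.array(X), np.array(y), np.array(cond)

    # Train on some positions/sizes, then test on conditions the classifier
    # never saw, mimicking the generalization test described above.
    train = cond < 4
    test = ~train
    clf = LinearSVC(dual=False, max_iter=5000).fit(X[train], y[train])
    print("accuracy on unseen positions/sizes:",
          accuracy_score(y[test], clf.predict(X[test])))

In this toy version the classifier sees each object only at some positions and sizes during training, yet it can still label the held-out presentations, which is the kind of generalization the study reports for real IT populations.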

Poggio said it was surprising that so few IT neurons (only several hundred), recorded for such a short period, contained so much precise information. "If we could record a larger population of neurons simultaneously, we might find even more robust codes hidden in the neural patterns and extract even fuller information."
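
To illustrate that point, the short sketch below (again with made-up data and an off-the-shelf linear classifier, so the exact numbers carry no weight) decodes object identity from random subsets of simulated recording sites of increasing size; accuracy should climb as more sites are pooled, which is the intuition behind Poggio's remark.

    # Sketch of how decoding accuracy might scale with the number of recorded
    # sites, using a synthetic population (hypothetical tuning model).
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_neurons, n_objects, trials = 400, 20, 30

    # One noisy response vector per presentation of each object.
    tuning = rng.gamma(2.0, 1.0, size=(n_neurons, n_objects))
    X = (np.repeat(tuning.T, trials, axis=0)
         + 0.8 * rng.standard_normal((n_objects * trials, n_neurons)))
    y = np.repeat(np.arange(n_objects), trials)

    # Decode object identity from progressively larger random subsets of sites.
    for n_sites in (10, 50, 100, 400):
        sites = rng.choice(n_neurons, size=n_sites, replace=False)
        acc = cross_val_score(LinearSVC(dual=False, max_iter=5000),
                              X[:, sites], y, cv=5).mean()
        print(f"{n_sites:4d} sites -> mean accuracy {acc:.2f}")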

The research has many potential real-world applications, such as artificial visual systems for security scanners or pedestrian-alert systems for automobiles.

Source: McGovern Institute for Brain Research, MIT
Pic courtesy McGovern Institute for Brain Research, MIT