21 June 2012

AI: size matters

by Will Parker

MIT scientists have discovered that the brain processes objects based on their physical size, with a specific region of the brain reserved for recognizing large objects and another reserved for small objects. The findings, which appear in the journal Neuron, could have major implications for robotics and artificial intelligence.

Surprisingly, the research is the first to assess whether the size of an object is an important factor in the brain's ability to recognize it. "It's almost obvious that all objects in the world have a physical size, but the importance of this factor is surprisingly easy to miss when you study objects by looking at pictures of them on a computer screen," said Dr. Talia Konkle, lead author of the paper. "We pick up small things with our fingers, we use big objects to support our bodies. How we interact with objects in the world is deeply and intrinsically tied to their real-world size, and this matters for how our brain's visual system organizes object information."

For the study, Konkle and co-researcher Aude Oliva used functional MRI (fMRI) to scan participants' brain activity while they looked at images of big and small objects or visualized items of differing size. By evaluating the scans, the researchers found distinct regions of the brain that respond to big objects (e.g., a chair or a table) and to small objects (e.g., a paperclip or a strawberry).
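
To give a flavor of the analysis, the sketch below shows the kind of big-versus-small contrast such a study implies: comparing each voxel's average response to big and small objects to build a size-preference map. The data, threshold and variable names are illustrative assumptions, not the researchers' actual pipeline.

    import numpy as np

    # Hypothetical data: each voxel's mean response to pictures of big
    # objects (chairs, tables) and small objects (paperclips, strawberries).
    # Values are randomly generated purely for illustration.
    rng = np.random.default_rng(0)
    n_voxels = 1000
    big_response = rng.normal(loc=0.5, scale=1.0, size=n_voxels)
    small_response = rng.normal(loc=0.5, scale=1.0, size=n_voxels)

    # A simple size-preference map: positive values mark voxels that respond
    # more strongly to big objects, negative values mark small-preferring ones.
    preference = big_response - small_response

    # An arbitrary threshold picks out the strongly size-selective voxels.
    big_preferring = np.flatnonzero(preference > 0.5)
    small_preferring = np.flatnonzero(preference < -0.5)
    print(f"{big_preferring.size} big-preferring and "
          f"{small_preferring.size} small-preferring voxels")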

Specifically, Konkle said they found a systematic organization of big-to-small object responses across the brain's cerebral cortex. Large objects, they learned, are processed in the parahippocampal region, an area near the hippocampus that is also involved in navigating through spaces and in processing the locations of different places. Small objects are handled in the inferior temporal region, near areas that are active when people manipulate tools like a hammer or a screwdriver.

The study's findings shed new light on how the organization of the brain may have evolved, suggesting that the human visual system's method for organizing thousands of objects is intimately tied to human interactions with the world. "If experience in the world has shaped our brain organization over time, and our behavior depends on how big objects are, it makes sense that the brain may have established different processing channels for different actions, and at the center of these may be size," explained Konkle.

The work could have major implications for robotics and machine intelligence. Most current computer vision techniques focus on identifying what an object is with little regard for its physical size. "Paying attention to the physical size of objects may dramatically constrain the number of objects a robot has to consider when trying to identify what it is seeing," said Oliva. "Ultimately, we want to focus on how active observers move in the natural world. We think this matters for making machines that can see like us."
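
To make Oliva's point concrete, here is a minimal sketch of a size prior: an estimate of an object's real-world size (say, from stereo depth) prunes the candidate label set before any expensive appearance-based classifier runs. The size table, tolerance and function are hypothetical, not part of the study or of any particular vision system.

    # Typical real-world heights in metres (rough, illustrative values).
    TYPICAL_SIZE_M = {
        "paperclip": 0.03,
        "strawberry": 0.04,
        "mug": 0.10,
        "chair": 0.90,
        "table": 0.75,
        "car": 1.50,
    }

    def plausible_labels(estimated_size_m, tolerance=2.0):
        """Keep only labels whose typical size is within a factor of
        `tolerance` of the estimated size."""
        return [label for label, size in TYPICAL_SIZE_M.items()
                if size / tolerance <= estimated_size_m <= size * tolerance]

    # An object measured at roughly 0.8 m cannot be a paperclip, so the
    # appearance-based classifier never has to consider that label.
    print(plausible_labels(0.8))  # ['chair', 'table', 'car']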

Source: Massachusetts Institute of Technology, CSAIL