26 September 2011
Videos reconstructed from brain scan
by Kate Melville
Using MRI brain scans, computational models and a large quantity of YouTube videos, University of California, Berkeley researchers have demonstrated how people's dynamic visual experiences can be reconstructed. The image at right shows the source video image and the reconstructed image. The videos themselves can be seen on YouTube.
Reporting on their work in the journal Current Biology, the researchers say it is a major step toward reconstructing internal imagery. "We are opening a window into the movies in our minds," said neuroscientist and study co-author Jack Gallant. Importantly, the technology can only reconstruct movie clips people have already viewed. However, Gallant believes the work paves the way for reproducing the movies inside our heads that no one else sees, such as dreams and memories.
Previously, Gallant and fellow researchers recorded brain activity in the visual cortex while a subject viewed black-and-white photographs. They then built a computational model that enabled them to accurately predict which picture the subject was looking at. In their latest experiment, the researchers say they have solved a much more difficult problem: decoding brain signals generated by moving pictures.
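The identification step from that earlier experiment can be sketched roughly as follows: given a measured brain response, pick the candidate picture whose model-predicted response matches it best. This is a minimal illustration, not the study's actual pipeline; the candidate "pictures" are random vectors, the predicted responses are synthetic, and all sizes are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

n_voxels, n_candidates = 500, 20
# Stand-in for the model's predicted voxel responses to each candidate picture.
predicted = rng.normal(size=(n_candidates, n_voxels))

# Simulate a measured response: the true picture's prediction plus scanner noise.
true_index = 7
measured = predicted[true_index] + 0.2 * rng.normal(size=n_voxels)

def correlate(a, b):
    # Pearson correlation between two voxel-response vectors.
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(a @ b) / len(a)

# Identify the picture whose predicted response best matches the measurement.
scores = [correlate(measured, p) for p in predicted]
identified = int(np.argmax(scores))
```

With many voxels and modest noise, the correct candidate wins by a wide margin, which is why identification among a fixed set is an easier problem than full reconstruction.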
Co-researcher Shinji Nishimoto said that decoding video from the brain was a time-consuming business, with volunteers having to remain still inside the MRI scanner for hours at a time. While in the scanner, they watched two separate sets of Hollywood movie trailers.
The MRI scanner was used to measure blood flow through the visual cortex. Computationally, the cortex was treated as a grid of small cubes known as volumetric pixels, or "voxels." "We built a model for each voxel that describes how shape and motion information in the movie is mapped into brain activity," Nishimoto explained.
The brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity.
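The two steps above, a per-voxel encoding model and a program that learns the movie-to-activity mapping second by second, can be sketched as one linear regression per voxel. Everything here is a stand-in: the features are random numbers rather than the motion-energy features the lab actually used, and the dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_seconds, n_features, n_voxels = 300, 50, 1000

# One row per second of movie: stand-in shape/motion features of that second.
features = rng.normal(size=(n_seconds, n_features))

# Simulated ground truth: each voxel responds as a linear mix of the features.
weights_true = rng.normal(size=(n_features, n_voxels))
bold = features @ weights_true + 0.1 * rng.normal(size=(n_seconds, n_voxels))

# "Learning" the association: fit one linear model per voxel via least
# squares (all voxels solved in a single call).
weights_hat, *_ = np.linalg.lstsq(features, bold, rcond=None)

# The fitted model can now predict activity for an unseen second of video.
new_features = rng.normal(size=(1, n_features))
predicted_activity = new_features @ weights_hat
```

The point of the fitted model is the second half of the experiment: once you can predict activity from video, you can run the prediction over huge numbers of candidate clips and compare against a real measurement.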
Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. Some 18 million seconds of random YouTube video were fed into the computer program, which predicted the brain activity that each clip would most likely evoke in each subject. Finally, the 100 clips whose predicted activity best matched the measured activity were merged to produce a reconstruction of the original movie.
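That reconstruction step can be sketched as: score every candidate clip by how well its predicted activity matches the measurement, then average the top 100. This is only a toy version under invented assumptions; the candidate clips are random 8x8 grayscale frames and the "encoding model" is a fixed random linear map, not the study's fitted voxel models.

```python
import numpy as np

rng = np.random.default_rng(2)

n_candidates, n_voxels = 5000, 200

# Candidate clip frames (flattened 8x8 grayscale images) and a stand-in
# encoding model mapping pixels to predicted voxel activity.
clips = rng.uniform(size=(n_candidates, 8 * 8))
encoder = rng.normal(size=(8 * 8, n_voxels))

# Predicted activity for every candidate, and the "measured" activity
# evoked by the true clip (candidate 0 here, noise-free for simplicity).
predicted = clips @ encoder
measured = clips[0] @ encoder

# Rank candidates by distance between predicted and measured activity,
# then merge the 100 best matches by pixel-wise averaging.
errors = np.linalg.norm(predicted - measured, axis=1)
top = np.argsort(errors)[:100]
reconstruction = clips[top].mean(axis=0)
```

Averaging many near-miss clips is what gives the published reconstructions their characteristic blurry, dream-like look: the result is a consensus image, not any single retrieved video.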
Ultimately, Nishimoto said, scientists need to understand how the brain processes dynamic visual events that we experience in everyday life. "We need to know how the brain works in naturalistic conditions," he said. "For that, we need to first understand how the brain works while we are watching movies."