15 February 2010
Multiplexing technique promises quantum leap in camera performance
by Kate Melville
Researchers have developed a way of capturing high-resolution still images alongside very high-speed video - a technology based on multiple exposures that effectively turns the camera's single CCD image sensor into hundreds of virtual cameras. The technology has been patented by Isis Innovation, the University of Oxford's technology transfer office, which provided seed funding for this development.
Interestingly, the breakthrough came from a medical research team. Dr Peter Kohl and his colleagues at the University of Oxford study the human heart using imaging and computer technologies. They've previously created an animated model of the heart, which allows one to view the heart from all angles and examine every layer of the organ, from the largest structures right down to the cellular level. This is done by combining many different types of information about heart structure and function, a task that demands both speed and detail and that has been difficult to achieve with current photographic techniques.
"Anyone who has ever tried to take photographs or video of a high-speed scene, like football or motor racing, even with a fairly decent digital SLR, will know that it's very difficult to get a sharp image because the movement causes blurring. We have the same problem... where we may miss really vital information like very rapid changes in intensity of light from fluorescent molecules that tell us about what is happening inside a cell. Having a massive 10 or 12 megapixel sensor, as many cameras now do, does absolutely nothing to improve this situation," said Dr Kohl.
Kohl explained that co-researcher Dr Gil Bub came up with the idea of capturing high-resolution still images and high-speed video footage at the same time and on the same camera chip. Traditionally, cameras capable of this have been expensive, specialist devices; Dr Bub's innovation achieves it at a fraction of the cost.
"What's new about this is that the picture and video are captured at the same time on the same sensor," said Dr Bub. "This is done by allowing the camera's pixels to act as if they were part of tens, or even hundreds of individual cameras taking pictures in rapid succession during a single normal exposure. The trick is that the pattern of pixel exposures keeps the high resolution content of the overall image, which can then be used as-is, to form a regular high-res picture, or be decoded into a high-speed movie."
Dr Bub explained that the technique works by dividing all the camera's pixels into groups that are then allowed to take their part of the bigger picture in well-controlled succession, very quickly, and during the time required to take a single 'normal' snapshot. "For example, if you use 16 pixel patterns and sequentially expose each of them for one sixteenth of the time the main camera shutter remains open, there would be 16 time points at which evenly distributed parts of the image will be captured by the different pixel groups. You then have two choices: either you view all 16 groups together as your usual high-resolution still image, or you play the sixteen sub-images one after the other, to generate a high-speed movie," he elaborated.
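The 16-group scheme Dr Bub describes can be sketched in simulation. The following is a minimal NumPy illustration, not the published method: it assumes a repeating 4x4 tile of exposure slots on a toy 8x8 sensor, with a synthetic scene standing in for real light; all function names here are hypothetical.

```python
import numpy as np

def make_pixel_groups(h, w, n=16):
    """Assign each pixel to one of n exposure slots using a repeating
    4x4 tile, so every slot is evenly distributed across the sensor."""
    side = int(np.sqrt(n))  # 4 for n = 16
    rows = np.arange(h)[:, None] % side
    cols = np.arange(w)[None, :] % side
    return rows * side + cols  # values 0..15

def capture(scene_frames, groups):
    """Simulate one shutter-open period: each pixel records only the
    scene frame that matches its group's time slot, so the single raw
    exposure interleaves all 16 moments."""
    n, h, w = scene_frames.shape
    raw = np.zeros((h, w))
    for t in range(n):
        raw[groups == t] = scene_frames[t][groups == t]
    return raw  # viewed as-is, this is the high-resolution still

def decode_movie(raw, n=16):
    """Pull out, for each time slot t, the one pixel per 4x4 tile that
    was exposed then, giving a quarter-resolution high-speed movie."""
    side = int(np.sqrt(n))
    return np.stack([raw[t // side::side, t % side::side]
                     for t in range(n)])

# A synthetic "scene" whose brightness changes at each of the 16 slots.
h, w, n = 8, 8, 16
scene = np.stack([np.full((h, w), float(t)) for t in range(n)])
groups = make_pixel_groups(h, w, n)
raw = capture(scene, groups)       # one full-resolution exposure
movie = decode_movie(raw, n)       # sixteen 2x2 sub-images
```

Here the same raw exposure serves both purposes: read whole, it is the full-resolution still; sliced by slot, each sub-image shows the scene at one of the 16 moments.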
The research may soon move from the optical bench to a consumer-friendly package. Dr. Mark Pitter from the University of Nottingham is planning to compress the technology into an all-in-one sensor that could be put inside normal cameras. "The use of a custom-built solid state sensor will allow us to design compact and simple cameras, microscopes and other optical devices that further reduce the cost and effort needed for this exciting technique. This will make it useful for a far wider range of applications," he said.
Source: Nature Methods