Interview with Dr David Stork of Rambus Labs

Dr David Stork, Fellow and Research Director of the Computational Sensing and Imaging Group at Rambus Labs, spoke exclusively to Smithers Apex ahead of his presentation "Lensless ultra-miniature computational sensors and imagers: using computing to do the work of optics" at Image Sensors 2015.

Q: Please briefly describe your background and experience in digital imaging.

A: I have long been interested in vision, perception, pattern recognition and related fields—both by humans and by machines—and my research career in these areas began when I wrote my B.S. thesis on human and machine color vision in the MIT Physics Department under the guidance of Edwin Land, president of the Polaroid Corporation.  As a graduate student I co-authored a leading textbook, Seeing the light: Optics in nature, photography, color, vision and holography, and I've taught numerous courses on this material, as well as advanced pattern classification and computer vision, at several leading colleges and universities, most recently Stanford.  Several years ago I founded a corporate research group on computational imaging and co-developed methods for jointly designing lensed optical systems and the digital algorithms that process the sensed images.  Three years ago I moved to Rambus Labs, where I founded its Computational Sensing and Imaging group; our first two projects are binary pixel image sensors (presented at Image Sensors US 2014) and lensless smart sensors.

Q: You will present some unconventional lensless imaging technology you are currently working on. In basic terms, how do we capture images without a lens?

A: The Rambus lensless smart sensors consist of special optical phase gratings affixed to standard CMOS image sensor arrays.  Light from the scene diffracts through the grating and yields an apparently chaotic and unintelligible blob on the sensor array—appearing nothing like a familiar image from a traditional camera.  The grating preserves nearly all the visual information from the scene, up to the two-dimensional Nyquist limit, and thus the blob on the sensor can be processed with special algorithms to yield the final digital image.  In short, the grating produces an image blob on the sensor array that is meant for computers, not humans, and unlike in a traditional camera, in our lensless smart sensor the image is computed using special algorithms.  I'm particularly intrigued and encouraged by our most recent work on designing gratings and subsequent digital processing to address particular image sensing tasks such as image change detection, visual flow estimation, face detection, object tracking, people counting, and so on.
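The interview does not describe Rambus's actual reconstruction algorithms, but the general idea—a grating that scrambles the scene into a "blob" which a computer then unscrambles—can be illustrated with a toy linear model. The sketch below (my own hypothetical illustration, not Rambus's method) treats the grating-plus-sensor as a known linear operator A, so the sensed blob is b = A·x for a scene x, and recovers the scene by Tikhonov-regularized least squares:

```python
import numpy as np

# Hypothetical illustration of computational reconstruction: model the
# grating + sensor as a calibrated linear operator A, so the sensed
# "blob" is b = A x. Recover the scene with Tikhonov-regularized least
# squares: x_hat = (A^T A + lam I)^{-1} A^T b.
rng = np.random.default_rng(0)

n = 64   # toy 1-D scene with 64 pixels
m = 96   # sensor with 96 pixels

A = rng.standard_normal((m, n))     # stand-in for the measured grating response

x_true = np.zeros(n)                # a sparse toy "scene": two point sources
x_true[10] = 1.0
x_true[40] = 0.5

b = A @ x_true                      # the chaotic-looking sensor reading
b += 0.01 * rng.standard_normal(m)  # a little sensor noise

lam = 1e-3                          # regularization weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

err = np.max(np.abs(x_hat - x_true))
print(f"max reconstruction error: {err:.4f}")
```

Even though b looks nothing like the scene, the reconstruction recovers x to within the noise level—the point being that the information survives the scrambling, and the "lens" work is done in software.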

Q: What are the ideal applications for this technology? What are the performance limits?

A: When one thinks of imaging in the natural world, the high acuity of the eagle, the human, the large cats and so forth springs to mind, but the overwhelming majority of natural visual systems are of much lower resolution (amphibians, fish, rodents, most birds, etc.), or are special purpose, such as the famous fly-detecting circuitry of the frog's visual system.  There are myriad applications for small, flat, inexpensive sensors and imagers, even if they have limited spatial resolution: adding simple vision to mobile devices, appliances, toys, and the internet of things; simple human interfaces based on gestures or other visual information; biomedical sensing such as endoscopy; automotive sensors for collision avoidance; surveillance; and more.  I've often said that some of the best applications for our technology will be identified by others who have unique requirements hard to meet using traditional sensing methods.  Understanding and quantifying the performance limits of our approach is a subtle and fascinating task, since it involves the physics of photon Poisson shot noise, the optics of manufacturable phase gratings, the electrical engineering of sensors, circuits and ADCs, and the statistical signal processing and information theory of image computation.  We are studying the performance tradeoffs in size, pixel pitch, computational cost, effective resolution, and so on, and expect to start presenting our results within the year.  Several facts are clear, though: our resolution is lower than that typically discussed at Image Sensors—thousands or tens of thousands of effective pixels, not millions or billions.  Moreover, because of our sensors' small size, we do not collect much light, which compromises our low-light sensitivity.  We are confident that there are many large application areas for our sensors, even given these constraints.

Q: Is this technology fully commercialised at this point? If not, what is the anticipated timeline?

A: Our technology is not commercialized yet, but we’re pushing hard to get our systems into the hands of collaborators and potential customers as soon as possible.  

Q: Finally, we are very pleased to have you on board as a speaker for the conference. What are you hoping to gain from your participation?

A: I’m eager to see several colleagues and experts from Europe and elsewhere, to learn about the state of the art and future trends in sensors that we might incorporate into our research and development roadmaps, and (of course) to present our work and evangelize this new approach to computational imaging.