Your presentation covers an interesting and perhaps unusual topic regarding bio-inspired methods; could you elaborate on what this really means and what your presentation will cover?
Studying the functioning of the biological eye has led to rethinking the way dynamic visual information is acquired and processed. Biological eyes are based on an array of autonomous photoreceptors that independently, and in continuous time, sense the scene in view. As a result, the biological retina is continuously driven and controlled by what is happening in the visual scene and not, like image sensors, by artificially created timing control signals (i.e. a frame clock) that have no relation whatsoever to the visual information to be acquired. Translating the frameless paradigm of biological vision to artificial imaging systems implies that control over the acquisition of visual information is no longer imposed externally on an array of pixels; instead, decision making and acquisition control are transferred to each individual pixel, which handles its own visual input independently.
The presentation will report on a novel species of vision devices that completely overthrow the conventional paradigm of acquiring dynamic visual information. These devices, inspired by biology, no longer acquire a sequence of snapshots (frames) but generate a continuous-time stream of information from an array of autonomous pixels, where each pixel independently adapts (and tries to optimize) its acquisition process to the visual input it receives. As a result, these sensors (1) are able to suppress data redundancy (inherent to every frame-based acquisition process) and, for the first time, break the strict relation between speed and data rate. The asynchronous pixel circuits allow them to (2) acquire scene dynamics at temporal resolutions equivalent to tens, sometimes hundreds, of kilo-frames per second, and (3) at a dynamic range exceeding that of conventional CMOS imagers by orders of magnitude. In particular, the approach allows these performance parameters to be combined to some extent, enabling, for example, high-speed real-time computer vision under uncontrolled lighting conditions.
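The per-pixel event-generation principle described above can be illustrated with a minimal sketch. This toy model (the function name, threshold value, and frame-based simulation are illustrative assumptions, not any real sensor's API) has each pixel remember the log intensity at which it last fired and emit an event only when the change exceeds a contrast threshold, which is how static, redundant scene content produces no output:

```python
import numpy as np

def events_from_frames(frames, timestamps, threshold=0.2):
    """Toy model of an event-driven pixel array (illustrative only).

    Each pixel tracks the log intensity at which it last emitted an
    event and fires a new event (x, y, t, polarity) whenever the log
    intensity changes by more than `threshold`. Unchanged regions
    produce no output at all, so data redundancy is suppressed at
    the pixel level rather than in post-processing.
    """
    eps = 1e-6                              # avoid log(0)
    ref = np.log(frames[0] + eps)           # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(frame + eps)
        diff = log_i - ref
        fired = np.abs(diff) >= threshold   # which pixels cross threshold
        ys, xs = np.nonzero(fired)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((int(x), int(y), t, polarity))
        ref[fired] = log_i[fired]           # only firing pixels update
    return events
```

Feeding the model a static scene yields an empty event list, while a single brightening pixel yields a single positive-polarity event; a real asynchronous sensor does this continuously per pixel rather than by sampling frames.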
Would you consider speed, DR and power efficiency to be the biggest limitations for image sensors?
Certainly not for all image sensors and imaging applications; e.g. image sensors for digital photography or for shooting video/film for human observers (i.e. actually the majority of image sensors) do not have such strong requirements on acquisition speed or very high dynamic range operation. Also, they are not meant to remove data redundancy between frames, but rather to acquire images of the scene as faithfully as possible.
However, the mentioned parameters (especially in combination) are crucial for computer vision applications where the extraction of information from a (fast-changing) dynamic scene is key. In particular, real-time high-speed vision applications relying e.g. on object detection and recognition, optical flow computation or stereo reconstruction, as well as applications in resource-limited environments (e.g. low-bandwidth video streaming, sensor networks, IoT), can greatly benefit from the event-driven approach. Typical application fields thus include automotive (e.g. ADAS, self-driving cars), robotics (e.g. moving robot or drone navigation), industrial automation, aerospace/defense and IoT.
Can your bio-inspired method address other limitations or is it specific to the above mentioned in your presentation title?
The possibility of combining the three parameters of temporal resolution (speed), dynamic range and sparse data encoding (redundancy suppression/data compression) at the sensor level is the unique characteristic of the bio-inspired, event-driven approach to visual data acquisition.
Who are you most looking forward to meeting at the conference and why?
Meeting some of the technical leaders in the field and hearing about the latest developments.
Join Christoph Posch and many other industry experts at Image Sensors Europe - Register your place here today!