Conference day 2 - 12 March 2020

Thursday 12 March

  1. Delegate sign-in and morning refreshments

  2. Chair's opening remarks

ADVANCES IN IMAGE SENSOR TECHNOLOGIES

Chaired by Robert K. Henderson, Professor of Electronic Imaging, University of Edinburgh

  1. Economic trends in the global vision markets

    Ron Mueller | CEO of Vision Markets and Associate Consultant at Smithers

    This presentation will deliver findings from the continuous monitoring of major enterprises that either have image sensors integrated in their products (mobile phones, automobiles, surveillance cameras, machine vision components) or use imaging to manufacture their products (semiconductors, pharmaceuticals, food, etc.). The analysis of the manufacturing sectors and currency exchange rates of the USA, China, and the Eurozone will explain differences in their economic development and provide an outlook for 2020.

  2. High Dynamic Range imaging - a practical state-of-the-art comparison

    Dana Diezemann | Strategic Product Manager of ISRA Vision AG

    The trend towards autonomous driving in the automotive industry is pushing technological advances in High Dynamic Range (HDR) imaging. Autonomous cars need to rely on their visual perception of the road course, other road users, traffic signs, etc. at all times. Image sensors, and thus automotive cameras, often face scenes of mixed brightness, such as tunnel entries and exits, bridge underpasses, or low frontal sunlight. To capture all required image details in very dark and very bright areas, cameras need to deliver HDR performance.

    Besides autonomous vehicles, multiple other applications can build upon the all-new designs of lenses, pixels, readout, and image signal processing in modern automotive vision, such as outdoor surveillance, the inspection of complex shapes and shiny surfaces in industrial automation, or melt-pool monitoring in 3D printing. This presentation will introduce the state of the art in High Dynamic Range imaging based on a practical comparison of the latest automotive image sensors from Sony, ON Semi, OmniVision, and STMicroelectronics, showing their individual advantages in theory and real-life performance.
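
    As a rough illustration of what "high dynamic range" means in numbers, the sketch below computes a sensor's dynamic range in dB from its full-well capacity and noise floor. The figures are assumed example values, not numbers quoted in the talk.

        # Illustrative sketch with assumed example values (not from the talk):
        # dynamic range in dB = 20 * log10(max signal / noise floor).
        import math

        def dynamic_range_db(full_well_e: float, noise_floor_e: float) -> float:
            return 20 * math.log10(full_well_e / noise_floor_e)

        # A conventional pixel: ~10,000 e- full well, ~2 e- read noise -> ~74 dB.
        print(f"conventional pixel: {dynamic_range_db(10_000, 2):.0f} dB")

        # A tunnel-exit scene can span around 120 dB, i.e. a million-to-one
        # ratio between its brightest and darkest details.
        print(f"120 dB scene: {10 ** (120 / 20):,.0f}:1 intensity ratio")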

  3. Large-format SPAD cameras (Delivered remotely via Zoom)

    Prof. Edoardo Charbon | Chair of VLSI at EPFL

    SPAD technology has brought new perspectives to various applications, owing to its single-photon sensitivity, picosecond photoresponse, and low dark noise. The implementation of large-scale SPAD arrays with miniaturized pixels is advantageous in these applications for achieving higher spatial resolution, higher dynamic range, and more sophisticated data processing. We present the design challenges of miniaturized SPAD pixels, as well as current and future applications of large-format SPAD cameras.

    • Miniaturized SPAD pixels: analysis of corresponding scaling laws, manufacturing trends, technology and physics limitations, and experimental measurements on very small pitch SPAD pixels.
    • Large-format SPAD cameras: architectures, trends, performance evaluation and uniformity.
    • Time-of-flight application of large-format SPAD cameras.

    Authors: Edoardo Charbon, Kazuhiro Morimoto, Claudio Bruschini

LOW-POWER SESSIONS

Chaired by Robert K. Henderson, Professor of Electronic Imaging, University of Edinburgh

  1. Real-time & power-efficient AI close to the sensor

    Ramses Valvekens | Managing Director of easics

    How do you develop small, low-power, and affordable AI engines that run close to your sensors? Designers are searching for embedded AI solutions that integrate tightly with sensors such as image sensors, LiDAR, and Time-of-Flight. A flexible framework is used to automatically generate hardware implementations of deep neural networks. This scalable AI engine for FPGA and ASIC is ready for the future. A talk for image sensor manufacturers that add AI to their products.

  2. Networking break

3D SENSING DEVELOPMENTS

Chaired by Anders Johannesson, Senior Expert - Imaging Research and Development, Axis Communications

  1. Time-of-Flight is here to stay, but where is it going? (Delivered remotely via Zoom)

    Dr. Bernd Buxbaum | CEO/CTO, Founder of pmdtechnologies, Germany

    In his presentation, Dr Bernd Buxbaum, CEO at pmdtechnologies ag, will give a deep dive into the future developments of 3D depth sensing, with special attention to Time-of-Flight technology and the pmd principle, which he co-developed 20 years ago. After a brief look back at the early developments and breakthroughs that paved the way for today's technological advances, he will focus on three different markets: Consumer, Industrial, and Automotive. As an example, he will give a detailed overview of how the integration of 3D cameras has enhanced products and applications in these markets. Further, he will give an outlook into the future of Time-of-Flight depth sensing, technological developments, current challenges, and its potential for major markets and products.

  2. High-speed single-point d-TOF sensor for industrial applications (Delivered remotely via Zoom)

    Matteo Perenzoni | Head of IRIS Research Unit of Fondazione Bruno Kessler (FBK)

    Single-point dTOF sensors are now the main commercial application of CMOS SPADs, having started with consumer modules for proximity sensing and autofocus. However, industrial applications require more robustness, speed, and precision. Here, a CMOS sensor for single-point dTOF is presented, embedding a procedure for on-the-fly identification of the laser spot position and a programmable ROI connected to several TDCs. It can reliably measure up to 3 m with <1.6 mm precision under 7 klux background, at a 1 kHz rate.
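
    As a back-of-the-envelope illustration (not part of the presentation), the quoted figures can be related through the direct ToF relation d = c·t/2: a 3 m range corresponds to a 20 ns photon round trip, and 1.6 mm depth precision to roughly 10.7 ps of timing precision.

        # Back-of-the-envelope sketch relating the quoted dTOF figures:
        # distance d = c * t / 2, so depth precision maps to timing precision.
        C = 299_792_458.0  # speed of light in m/s

        def round_trip_time_s(distance_m: float) -> float:
            # Time for a photon to travel to the target and back.
            return 2 * distance_m / C

        print(f"3 m target   -> {round_trip_time_s(3.0) * 1e9:.1f} ns round trip")
        print(f"1.6 mm depth -> {round_trip_time_s(0.0016) * 1e12:.1f} ps timing precision")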

  3. Device/system technologies of a long-range and high-resolution direct Time-of-Flight (DTOF) system based on a vertical avalanche photodiode (VAPD) CMOS image sensor (Delivered remotely via Zoom)

    Yutaka Hirose | Manager of Panasonic Corporation

    We present a long-range (>250 m), high-resolution (10 cm), direct Time-of-Flight (DTOF) system based on a vertical avalanche photodiode (VAPD) CMOS image sensor. Starting with the device design, we describe a unique operation mode of relaxation quenching during Geiger-mode pulse generation, which enables single photons to be detected with high stability, along with pixel circuits that realize high-precision photon counting. We then introduce the design and performance of the DTOF system based on the Sub-Ranges Synthesis (SRS) method. Its solid long-ranging capability and the sub-range manipulation techniques that directly affect the speed, depth resolution, and versatility of the system are discussed.

  4. Quanta Image Sensor characterization and denoising algorithms (Pre-recorded presentation)

    Dr. Dakota S. Robledo | Senior Image Sensor Scientist of Gigajot Technology

    The quanta image sensor (QIS) is a novel CMOS-based image sensor capable of photon counting without avalanche gain. A QIS is implemented in a state-of-the-art commercial stacked (3D) backside-illuminated (BSI) CMOS image sensor process. The detector wafer consists of a megapixel array of specialized active pixels known as jots, and the ASIC readout wafer utilizes cluster-based readout circuitry. This specialized pixel architecture and the cluster readout circuits allow the QIS to achieve accurate photon-counting capabilities with an average read noise of 0.23 e- rms, an average conversion gain of 345 µV/e-, and an average dark current of less than 0.1 e-/sec/jot at room temperature. We will present the experimental characterization of this photon-counting QIS camera module over a wide range of temperatures. The experimental data from this sensor closely match what is expected from theory for photon-counting sensors based on photon arrival Poisson statistics. By using new denoising algorithms that take advantage of the unique statistical properties of these photon-counting jots, accurate image reconstruction can be performed at light levels as low as 0.3 photoelectrons/jot/frame.
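
    To illustrate the photon-counting statistics mentioned above, the sketch below runs a simple Monte Carlo of Poisson photon arrivals with Gaussian read noise at the quoted 0.23 e- rms, counting photons by rounding the analog signal to the nearest integer. It is an illustrative model only, not Gigajot's actual characterization or denoising pipeline.

        # Illustrative Monte Carlo of photon counting under Poisson arrivals and
        # Gaussian read noise (0.23 e- rms and 0.3 e-/jot/frame from the abstract).
        import numpy as np

        rng = np.random.default_rng(0)
        exposure = 0.3        # mean photoelectrons per jot per frame
        read_noise = 0.23     # read noise in e- rms
        n_jots = 1_000_000

        true_counts = rng.poisson(exposure, n_jots)                  # photon arrivals
        signal = true_counts + rng.normal(0.0, read_noise, n_jots)   # add read noise
        counted = np.clip(np.rint(signal), 0, None).astype(int)      # round to integers

        error_rate = np.mean(counted != true_counts)
        print(f"miscounted jots: {error_rate:.2%}")  # roughly 2% at 0.23 e- rms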

  5. Networking break

NEW TECHNOLOGY

Chaired by Anders Johannesson, Senior Expert - Imaging Research and Development, Axis Communications

  1. Active 3D semantic camera (Pre-recorded presentation)

    Raul Bravo | President, Co-founder of Outsight

    Hyperspectral remote sensing refers to remote spectral detection of light, reflected or scattered from a target. Each pixel of a hyperspectral imager can contain hundreds of spectral channels, as opposed to the traditional three-color RGB cameras. Hyperspectral cameras are limited in the accuracy of the spectral signal since any variation in the illumination spectrum translates into a misinterpretation of the target response. We'll introduce a new Active 3D Semantic Camera (hyperspectral).

    • A new kind of sensor, bundling the best of Laser and RGB imaging into a single device.
    • Single-Sensor Hyperspectral measurements, hundreds of meters away, become possible thanks to a new broadband laser and an original sensor architecture that minimises cost (single detector) and size.
    • Embedded 3D processing allows multi-dimensional actionable data: depth, material ID, colour, full velocity vector per point, point-wise classification & SLAM on Chip.

  2. Nanomaterial-based broadband imager

    Dr. Tapani Ryhänen | CEO of Emberion Oy

    We present Emberion’s novel photodetector technology, which converts light to an electronic signal using graphene-enhanced transducers with nanocrystal light absorbers. Our technology development results in an affordable, high-performance detector suitable for visible to SWIR (400 nm to 2000 nm) imaging applications. Additionally, an imager concept for ultra-broadband solutions, i.e. 400 nm to 14 µm (visible to LWIR), is presented.

  3. Dynamic PhotoDiode: the breakthrough light-sensing technology

    Dr. Denis Sallin | Director of Engineering of ActLight SA

    In his presentation, Dr. Denis Sallin, Director of Engineering at ActLight SA, will discuss with the audience the unique features of the patented Dynamic PhotoDiode (DPD), its operating principles, and its beneficial impact on the performance of light-sensing applications. Moreover, Dr. Sallin will give an overview of the current status of the DPD technology development, its future directions, and its potential for mainstream applications and products.

  4. Chair's closing remarks and close of 2020 conference