2019 Agenda

The hottest topics in digital imaging technology

Focusing on Vision and Action Within Image Sensors, this year's Image Sensors Americas program featured two days of presentations and panel discussions around the latest leading-edge sensor technology, current capabilities of image sensors, machine learning and deep learning, AI and more.

October 15, 2019 | Day 1

Registration Opens & Continental Breakfast

  1. Registration Opens & Exhibit Hall Opens

  2. Welcome and Opening Remarks

    Conference Producer, Smithers

Session I: Sensor Market & Trends

Presentations in this session will discuss current image sensor technology and trends. Experts in image sensor technology will provide an overview of the technical trends driving the image sensor industry along with a market update.

  1. Trends in Machine Vision, Surveillance, and Mobile Phone Markets

    Ron Mueller | CEO of Vision Markets and Associate Consultant of Smithers Apex

    The presentation provides an overview of the 2018 and 2019 revenues of key players in three major target markets for image sensors. Developments in the surveillance and mobile phone markets are illustrated through market-leading public companies. For machine vision, where the supplier landscape is heavily fragmented, few companies are publicly listed, and the target markets are manifold and diffuse, trends and future outlooks are derived from key players in those target markets and from macroeconomic forecasts.

  2. Breakthrough Processor for Intelligent Vision Devices

    Dr. Ren Wu | Founder & CEO of NovuMind

    In the era of AI, the next grand challenge, especially from industry's perspective, is to deploy capable AI everywhere to advance humankind. Meeting this challenge means dealing with the next level of AI complexity: performance, cost, and power efficiency all become critical considerations. In this talk, Dr. Ren Wu will share his vision of a breakthrough chip architecture and how NovuMind's technologies line up to address the challenges of the intelligent vision device market.

  3. Towards Large-Scale, SPAD-Based ToF Imagers for Automotive, Robotic and EdgeAI Applications

    Wade Appelman | VP of Sales and Marketing of SensL Technologies

    The acquisition of depth information via LiDAR sensing is now a priority for global brands covering a broad range of use cases. While each use case has a different set of requirements, every LiDAR system features the same key functional blocks. Perhaps the most critical of these is the sensor, which must detect very low signal returns in the presence of a large amount of ambient light. This talk will give an overview of the functional blocks of a LiDAR system, with a focus on the sensor options. SiPM and SPAD arrays have proven to be critical components for extending the achievable range and resolution of LiDAR systems. The Pandion 400x100 pixel SPAD array, the first from ON Semiconductor and the largest commercially available SPAD array, will be presented, including system-level imaging results.
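    For context on the direct-ToF principle behind SPAD-based sensing, the sketch below (hypothetical bin width and values, not from the talk) builds the kind of photon-arrival histogram a SPAD array produces and converts its peak bin to range via d = c·t/2; the median subtraction reflects the need to reject the flat ambient-light background the abstract mentions.

    ```python
    import numpy as np

    C = 299_792_458.0      # speed of light, m/s
    BIN_WIDTH_S = 1e-9     # 1 ns TDC bin width (assumed for illustration)

    def range_from_histogram(counts: np.ndarray) -> float:
        """Estimate target range from a direct-ToF photon-arrival histogram.

        The peak bin is taken as the round-trip time; range = c * t / 2.
        Ambient light adds a roughly flat background, so the median count
        is subtracted before locating the signal peak.
        """
        background = np.median(counts)
        peak_bin = int(np.argmax(counts - background))
        round_trip_s = peak_bin * BIN_WIDTH_S
        return C * round_trip_s / 2.0

    # Toy example: flat ambient background plus a laser return at bin 400.
    rng = np.random.default_rng(0)
    hist = rng.poisson(lam=5.0, size=1024)   # ambient photon counts
    hist[400] += 200                          # injected laser return
    print(f"estimated range: {range_from_histogram(hist):.1f} m")  # ~60 m
    ```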

  4. Networking Break

Session II: Exploring Potential and Solutions

Presentations in this session will showcase innovative image sensor technology and how these innovations support various sensor applications.  

  1. CMOS Image Sensors for Bio-medical, Industrial and Scientific Applications: Current Challenges and Perspectives

    Renato Turchetta | CEO of IMASENIC Advanced Imaging S.L.

    IMASENIC is a young, dynamic, innovation-driven start-up developing custom CMOS image sensors for the bio-medical, industrial, and scientific markets. Low noise, large dynamic range, large area, and large pixel counts are some of the parameters that demand continuous improvement and on which IMASENIC is making advances. Its latest developments include a custom CMOS image sensor for X-ray digital imaging that achieves a linear, 16-bit dynamic range. The talk will present this and other sensors and review them against the state of the art.
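    As a quick unit check (ours, not the speaker's): a linear 16-bit dynamic range corresponds to a maximum-signal to noise-floor ratio of 2^16, or roughly 96 dB:

    ```latex
    \mathrm{DR} = 20\log_{10}\!\bigl(2^{16}\bigr)
                = 20 \times 16 \times \log_{10} 2
                \approx 96.3~\mathrm{dB}
    ```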

  2. Near Field Depth Generation From a Single Image Sensor

    Paul Gallagher | Vice-President of Strategic Marketing of Airy3D

    Airy3D has developed a transmissive diffractive interference filter, added as an additional mask step after microlens deposition, that generates an angular-modulated signal on top of the traditional intensity signal the sensor captures. Near-field depth details of the scene are generated in-line with the video stream, using minimal processing, from the angular-modulated signal. The implementation requires no change to the industrial design of the module or lens, still enables capture of the 2D scene details, in color or monochrome, and can be applied to nearly all CMOS image sensors. This discussion will provide an overview of the technology, initial details on performance, and the impact on 2D image quality.

  3. 4D Imaging Point Cloud Capabilities for Automotive

    Raviv Melamed | CEO & Co-Founder of Vayyar Imaging

    In June 2019, Vayyar announced the first ever automotive 4D point cloud application on a single radar chip. In his talk, Raviv Melamed, co-founder and CEO of Vayyar, will discuss exactly what 4D point cloud capabilities look like and how the technology will transform radar sensors by constructing a real-time, high-resolution 4D visualization of a car's environment, both inside the cabin and outside the vehicle. Vayyar predicts that some of its point cloud capabilities, such as detecting whether an infant or pet has been left unattended in a car, will soon be requirements for all automobiles. These sensors can also be easily integrated into existing automotive frameworks, reducing the overall cost and number of sensors needed for a vehicle, and can address many safety issues plaguing cars today, including driver drowsiness, breathing and posture detection, lane switching assistance, automatic speed and distance control, and more. While the need for data and other information is growing, so are concerns about personal privacy. These sensors tackle that challenge directly: unlike cameras or other optics, they do not capture one's personal identity.

    Attendees will leave the session with a greater understanding of 4D imaging capabilities, and how these sensors are at the forefront of innovation by disrupting a variety of industries, including healthcare, retail, smart home, robotics and more.

  4. Networking Lunch

  5. Ge-on-Si ToF Image Sensor SoC

    Neil Na | Co-Founder and Chief Science Officer of Artilux

    Germanium-on-silicon (GoS) lock-in pixel technology has recently been studied and shown to have the potential to outperform its silicon counterparts. In this talk, we will discuss the modeling, fabrication, and characterization results of a GoS lock-in pixel. We will then showcase the world's first GoS time-of-flight (ToF) image sensor system-on-chip (SoC) in a backside-illuminated (BSI) configuration. Unique features such as invariance to sunlight exposure and compatibility with eye-safe lasers are demonstrated through a compact module.
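    As background on how lock-in pixels recover depth, here is a minimal sketch of the standard four-phase (four-bucket) indirect-ToF demodulation; the modulation frequency, tap naming, and sign convention are generic assumptions, not details of the Artilux design.

    ```python
    import numpy as np

    C = 299_792_458.0      # speed of light, m/s
    F_MOD = 100e6          # 100 MHz modulation frequency (illustrative)

    def depth_from_taps(a0, a90, a180, a270):
        """Four-phase demodulation for a lock-in ToF pixel.

        a0..a270 are correlation samples at 0/90/180/270 degree phase
        offsets. The phase of the returned light encodes round-trip time;
        sign conventions for the arctan arguments vary between designs.
        """
        phase = np.arctan2(a270 - a90, a0 - a180)   # range (-pi, pi]
        phase = np.mod(phase, 2 * np.pi)            # fold to [0, 2*pi)
        return C * phase / (4 * np.pi * F_MOD)      # depth in meters

    # The unambiguous range at 100 MHz is c / (2 * F_MOD) = ~1.5 m;
    # these tap values correspond to a quarter of that range.
    print(f"{depth_from_taps(0.5, 0.0, 0.5, 1.0):.3f} m")  # ~0.375 m
    ```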

  6. Pixelated Polarizing Filters as an Enabling Technology for Cost-Effective Remote Sensing in Challenging Imaging Applications

    Matt George | Applications Research Scientist of Moxtek

    Moxtek has utilized nano-imprint lithography to improve its ability to provide pixelated, multi-state polarization filters and other nanostructured optical elements for division-of-focal-plane image sensor applications. In addition, Moxtek has recently added the capability to assemble these filters directly onto sensor arrays for custom imager designs and prototyping. The technology is used in imaging polarimetry for remote sensing and dynamic interferometry, with applications in autonomous vehicles, environmental monitoring, industrial inspection, and defense. The talk will discuss the advantages for imaging in low-light conditions; through fog, smoke, and haze; and in scenes that call for glare removal or careful discrimination of specular vs. diffuse reflection and degree of linear polarization, such as water-covered roads, oil spill detection, and facial recognition.

  7. Networking Break

Session III: Advancing Image Sensor Technology

  1. Presentation TBD

    Dr. Daniel Van Blerkom | CTO & Co-Founder of Forza Silicon

    Abstract to come

  2. Sensor x DNN

    Tomoo Mitsunaga | General Manager of Sony

    The recent dramatic evolution of image understanding and machine vision technologies has been driven by deep neural networks (DNNs) and huge amounts of computing power. The evolution is now extending to the edge of the information network, where vast numbers of sensors work alongside sensor signal processing. The presenter introduces a survey of recent DNN-based approaches to sensor signal processing and summarizes what we can expect from this evolution.

  3. High Performance BSI Global Shutter Sensor Technology

    Zhiqiang Lin | Director of Characterization of OmniVision Technologies Inc.

    A global shutter (GS) image sensor is necessary to capture the distortion-free images required by most industrial, broadcasting, and machine vision applications. Back side illumination (BSI) technology has been applied to rolling shutter sensors for a decade, yet most GS sensors are still front side illuminated (FSI) because FSI offers better light shielding for meeting shutter efficiency requirements. This talk will give an overview of GS technology and present a BSI GS sensor from OmniVision with a 2.2 µm pixel that uses multiple advanced process technologies to achieve much higher performance and a smaller pixel size than existing FSI GS sensors. Sensitivity, MTF, full well capacity (FWC), shutter efficiency, and noise will be compared against an FSI GS sensor. Finally, the pixel size scalability and the additional functions and applications enabled by this technology platform will be discussed.
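    For reference on the shutter efficiency metric, one common formulation reports parasitic light sensitivity (PLS): the signal that leaks in while the global shutter is closed, relative to the signal collected during exposure. Definitions and sign conventions vary by vendor, so the sketch below is a hedged illustration, not OmniVision's method.

    ```python
    import math

    def shutter_efficiency(signal_exposed: float, signal_shielded: float):
        """Compute parasitic light sensitivity (PLS) and shutter efficiency.

        signal_exposed:  signal from light arriving during the exposure window
        signal_shielded: signal leaking in while the shutter is "closed"
                         (normalized to the same illumination and duration)
        PLS is often quoted as a ratio 1/x or in dB; efficiency = 1 - PLS.
        """
        pls = signal_shielded / signal_exposed
        return {
            "PLS_ratio": f"1/{1 / pls:.0f}",
            "PLS_dB": 20 * math.log10(pls),   # sign convention varies
            "efficiency_pct": (1 - pls) * 100,
        }

    # Example: 10,000:1 rejection -> PLS of 1/10000 (-80 dB), 99.99% efficiency.
    print(shutter_efficiency(signal_exposed=10_000.0, signal_shielded=1.0))
    ```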

  4. Networking Reception

October 16, 2019 | Day 2

Registration and Welcome

  1. Registration

  2. Welcome and Opening Remarks

Session IV: Next Gen Image Sensor Technology

The presentations in this session will explore the potential of image sensors as they pertain to advances in AI, deep learning, computer vision, and more.

  1. Deep Image Processing

    Vladlen Koltun | Director of the Intelligent Systems Lab of Intel

    Deep learning initially appeared relevant primarily for higher-level signal analysis, such as image recognition or object detection. But recent work has made clear that image processing, too, may benefit substantially from the ability to reliably optimize multi-layer function approximators. I will review a line of work that investigates applications of deep networks to image processing. First, I will discuss the remarkable ability of convolutional networks to fit a variety of image processing operators. Next, I will present approaches that replace much of the traditional image processing pipeline with a deep network, with substantial benefits for applications such as low-light imaging and computational zoom. One take-away is that deep learning is a surprisingly exciting and consequential development for image processing.
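    To make the first point concrete, here is a minimal, hypothetical sketch (not code from the talk) of training a small convolutional network to imitate a fixed image processing operator, here a 5x5 Gaussian blur, purely from input/output pairs.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Target operator to imitate: a fixed 5x5 Gaussian blur.
    g = torch.tensor([1., 4., 6., 4., 1.])
    kernel = (torch.outer(g, g) / g.sum() ** 2).view(1, 1, 5, 5)
    def target_op(x):                       # the "ground truth" operator
        return F.conv2d(x, kernel, padding=2)

    # A tiny CNN that learns to approximate the operator.
    net = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(200):
        x = torch.rand(8, 1, 64, 64)        # random training images
        loss = F.mse_loss(net(x), target_op(x))
        opt.zero_grad(); loss.backward(); opt.step()

    print(f"final MSE: {loss.item():.6f}")  # should approach zero
    ```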

  2. Sensor Modeling and Benchmarking — A Platform for Sensor and Computer Vision Algorithm Co-Optimization

    Andrew Berkovich | Research Scientist of Facebook Reality Labs

    We predict that applications in AR/VR devices, autonomous vehicles, and other intelligent devices will lead to the emergence of a new class of image sensors: machine perception CIS (MPCIS). This new class of sensors will produce images and videos optimized primarily for machine vision applications, not human consumption. Unlike human perception CIS, where the ultimate criterion is visual image quality, there is no existing criterion by which to judge MPCIS sensor performance. We discuss a full-stack sensor modeling and benchmarking pipeline (from sensors to algorithms) that could serve as a platform for performance evaluation and algorithm optimization. We illustrate how sensor modeling and benchmarking help us understand the complex system trade-offs and dependencies between sensor and algorithm performance, specifically for simultaneous localization and mapping (SLAM).
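    A toy version of the sensor-modeling idea (entirely illustrative, not FRL's pipeline): apply a simple noise model, shot noise, read noise, full-well clipping, and quantization, to an ideal photon image, then feed the degraded frames to the downstream algorithm being benchmarked.

    ```python
    import numpy as np

    def simulate_sensor(photons: np.ndarray, qe=0.7, read_noise_e=2.0,
                        full_well_e=10_000, bits=10, seed=0):
        """Toy CIS model: photon shot noise -> QE -> read noise -> ADC.

        `photons` is the ideal per-pixel photon count for one exposure.
        All parameter values are illustrative defaults, not real specs.
        """
        rng = np.random.default_rng(seed)
        electrons = rng.poisson(photons * qe)                # shot noise + QE
        electrons = electrons + rng.normal(0, read_noise_e, photons.shape)
        electrons = np.clip(electrons, 0, full_well_e)       # full-well clip
        dn = np.round(electrons / full_well_e * (2**bits - 1))
        return dn.astype(np.uint16)                          # digital numbers

    # Benchmark hook: run the same vision algorithm (e.g. a SLAM front end)
    # on frames swept across sensor parameters and compare task metrics.
    ideal = np.full((480, 640), 500.0)        # flat 500-photon scene
    frame = simulate_sensor(ideal)
    print(frame.mean(), frame.std())
    ```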

  3. Robots and AR glasses: Synergy in Living Room using Computer Vision, Machine Learning and Cloud

    Vitaliy Goncharuk | CEO/Founder of Augmented Pixels

    The robotics industry is actively creating affordable unified platforms (hardware + software) on which a variety of household robots can be built, from children's toys and vacuum cleaners to in-home delivery robots and robot assistants for the elderly. AR glasses operate in essentially the same space. Companies like Apple need a cheap yet high-quality set of sensors and algorithms that would allow glasses to be used in a living room. Although robots and AR glasses are two very different fields, they use very similar components to solve the same critical tasks in mapping (AR Cloud), indoor navigation, re-localization, and inside-out tracking: SLAM algorithms and hardware (mono/stereo camera + IMU).

  4. Networking Break

  5. Topic to be Confirmed

    Gloria Putnam | Marketing Director Americas of GPixel

    Abstract to come

  6. Imaging Sensors and Systems for a Genomics Revolution

    Tracy Fung | Sr. Staff Engineer, Product Development CMOS Lead of Illumina

    Abstract to come

  7. Far-Infrared Thermal Cameras: An Effortless Solution for Improving ADAS Detection Robustness

    Emmanuel Bercier | Strategy and Automotive Market Manager of ULIS-SOFRADIR

    The advantages of using far-infrared technology in guiding systems, warning systems or surveillance systems have been demonstrated in defense applications through a large span of imaging functions such as detection, target recognition and positive identification in complex environments.

    Used as a complementary tool within a set of ADAS sensing technologies, a far-infrared thermal camera provides the key sense for improving system performance and increasing detection robustness. This enhanced capability relies on detecting the unique thermal signature of a pedestrian, an animal, or any obstacle in any weather and lighting conditions, day or night, without being blinded by the sun or other light sources. Implementing far-infrared-based ADAS in autonomous vehicles of level 3 and above can reduce the rate of false positive detections. As an imaging technology, a far-infrared camera using ULIS technology can be easily integrated into a standard ADAS platform while minimizing the computation resources required by detection algorithms, thanks to thermal signature detection.

  8. Networking Lunch

Session V: Academia and Research

  1. Polarization-Sensitive Imaging Arrays: Considerations for High Performance Applications

    Dmitry Vorobiev | Research Scientist of Laboratory for Atmospheric and Space Physics

    The polarization of light can be probed to obtain insights into light-matter interactions (the geometry of the last scattering surface, for example) that may not be available from spectral or time-of-flight analysis. Imaging sensors with on-chip micropolarizer arrays have dramatically lowered the cost of entry for polarimetry, making imaging polarimetry especially affordable and convenient. However, the division-of-focal-plane approach employed by these sensors demands careful application design to mitigate the mechanisms that increase measurement uncertainty during the modulation and demodulation stages of the analysis. I will discuss the benefits offered, and challenges posed, by these polarization sensors and show examples of terrestrial and astronomical applications, including polarimetry of the solar system planets and the 2017 total solar eclipse.
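    For readers new to division-of-focal-plane polarimetry: each 2x2 superpixel carries micropolarizers at 0, 45, 90, and 135 degrees, from which the linear Stokes parameters and degree of linear polarization follow directly. The demodulation sketch below assumes one common superpixel layout; real sensors differ.

    ```python
    import numpy as np

    def demodulate_dofp(raw: np.ndarray):
        """Recover S0, DoLP, and AoLP from a division-of-focal-plane image.

        Assumes a 2x2 superpixel layout of micropolarizers (degrees):
            [ 90  45]
            [135   0]   (actual layouts vary by sensor)
        """
        i90, i45 = raw[0::2, 0::2], raw[0::2, 1::2]
        i135, i0 = raw[1::2, 0::2], raw[1::2, 1::2]
        s0 = 0.5 * (i0 + i45 + i90 + i135)    # total intensity
        s1 = i0 - i90                          # horizontal vs vertical
        s2 = i45 - i135                        # +45 vs -45 degrees
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
        aolp = 0.5 * np.arctan2(s2, s1)        # angle of linear polarization
        return s0, dolp, aolp

    raw = np.random.default_rng(1).uniform(0, 1, (480, 640))
    s0, dolp, aolp = demodulate_dofp(raw)     # each output is 240 x 320
    ```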

  2. Detecting and Measuring CubeSats and Space Debris Using Small Telescopes, sCMOS Detectors, and GPUs

    Peter C. Zimmer, Mark R. Ackermann, and John T. McGraw of J.T. McGraw and Associates, LLC

    As of April 2019, there were 2,062 operational satellites worldwide; approximately 10 times that number of pieces of tracked debris 10 cm in diameter or larger; and an estimated 200-300 times more untracked pieces of 1 cm or larger. Any of these bits carries enough kinetic energy on impact to wreck a satellite, creating a direct loss and generating even more debris. With several megaconstellations planned or already launching, the space environment is set to become even more complicated than it already is, and that isn't counting the geopolitical ramifications. Earth's orbital space is already an incredibly valuable economic and strategic resource, and like any such resource it requires wise stewardship, the first step of which is finding out what is there. More and more, small optical telescopes are contributing to this effort, though some challenges persist in their widespread use. They must be physically spread around the globe, and they operate best when sited away from cities and towns, in dry, remote locations such as mountaintops that often lack infrastructure such as power, staffing, and internet access. The sCMOS cameras on these systems create massive amounts of image data, up to several TB per day; each system must therefore be able to analyze its data and detect satellites and debris in very near real time. GPUs have unlocked the potential of these systems, providing the teraflops of processing power needed to make this practical. We will discuss how we have optimized the optics, sCMOS sensors, and GPUs to create a robust data stream that is currently in operation and building toward global coverage.
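    As a schematic of the near-real-time detection step described above (a generic sketch using CuPy, not the authors' pipeline), the core operation is GPU-side background subtraction and sigma thresholding:

    ```python
    import cupy as cp  # GPU arrays with a NumPy-like API

    def detect_candidates(frame: cp.ndarray, k_sigma: float = 5.0):
        """Flag pixels k-sigma above the sky background, on the GPU.

        A real pipeline would add dark/flat calibration, streak matching
        across frames, and astrometric fitting; this shows only the core
        thresholding step that must keep up with ~TB/day data rates.
        """
        background = cp.median(frame)                  # crude sky estimate
        mad = cp.median(cp.abs(frame - background))    # robust scatter
        sigma = 1.4826 * mad                           # MAD -> stddev
        mask = frame > background + k_sigma * sigma
        return cp.argwhere(mask)                       # candidate coords

    frame = cp.random.poisson(100.0, size=(2048, 2048)).astype(cp.float32)
    frame[1000, 1500] += 500                           # inject a bright source
    print(detect_candidates(frame)[:5])
    ```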

  3. Presentation to be Confirmed

  4. Closing Remarks and Farewell from the Advisory Board