2016 Agenda

Day 1: October 25, 2016

Image Sensors Americas 2016 | Day 1

  1. Registration | Morning Coffee | Exhibit Hall Opens

  2. Opening Remarks from Conference Organizers

  3. Keynote: The Prospects for Building Image Sensors in America

    Paul Gallagher | Senior Director of Technology of LG Electronics

    The prospects for building image sensors for the current consumer markets in America are depressing. Fortunately, the image sensor space is entering the fourth disruption in its evolution. The last three disruptions focused primarily on taking ‘pretty pictures’ for human consumption, evaluation, and storage. The coming disruption will move machine vision into the mainstream. Smart homes, offices, cars, and devices, AR/MR, biometrics, and crowd monitoring all need to run image data through a processor, without human viewing, to trigger actionable responses. The opportunity this presents is massive, but as growth efficiencies come into play the solutions will become specialized. This discussion will review the opportunities, why the entrenched players may not be operating at so great an advantage, and what will be needed to take advantage of the emerging shift within imaging.

Low Light Imaging

Advances in image sensor performance, particularly decreased noise for video, are enabling scientific and security applications, both commercial and national. This session will discuss what is possible now and what these applications will require going forward.

Session Chair: Daniel McGrath, Chief Engineer for Low Light Level Imaging, BAE Systems

  1. Camera Selection for Low Light Imaging: Choosing the Right Camera Based on an Informed Analysis of

    James Butler | Manager of OEM Camera Products of Hamamatsu Photonics

    This presentation discusses the challenges of low light imaging, compares the sensor and camera technologies used in low-light imaging applications, and reviews application results and SNR evaluation.
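
    A camera comparison like the one above typically rests on a shot-plus-read-noise SNR model. The sketch below is illustrative only; the electron counts are assumed values, not figures from the presentation:

```python
import math

def snr_db(signal_e, dark_e, read_noise_e):
    """SNR of a single exposure, in dB, from a simple camera noise model.

    signal_e     -- mean photoelectrons from the scene
    dark_e       -- mean dark-current electrons accumulated during exposure
    read_noise_e -- RMS read noise in electrons

    Shot noise on signal and dark current is Poisson (variance equals
    mean); read noise adds in quadrature.
    """
    noise = math.sqrt(signal_e + dark_e + read_noise_e ** 2)
    return 20 * math.log10(signal_e / noise)

# At 25 e- of signal, a 1 e- read-noise sensor clearly beats a 6 e- one.
print(round(snr_db(25, 1, 1.0), 2))   # 13.65 dB
print(round(snr_db(25, 1, 6.0), 2))   # 10.03 dB
```

    In deep low light the signal term shrinks, so the read-noise term dominates the denominator; this is why read noise, not resolution, decides the comparison.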

  2. Image Sensors for Security, Wearable and Consumer Electronic Products: The Race to Low Light Performance

    Patrice Roulet | Director of Engineering and Co-Founder of Technology of Immervision

    The race for the highest number of megapixels resulted in sensors with smaller and smaller pixel pitch, which reduces performance in challenging and varying lighting conditions. Currently, lens F-numbers are getting lower and sensor pixel pitches bigger and bigger – this points to the recognition of, and need for, good image quality in challenging light conditions, especially when it comes to 360° imaging and immersive applications. Join me to explore how image optimisation/quality is key in challenging light conditions, as well as to examine various 360° image sensor use cases – a good 360° image starts with a good 360° lens.

  3. Will a Solid-State Imager Replace the Image Intensifier Tube for Night Vision?

    Dr. Thomas L. Vogelsong | Principal of Imaging Innovations

    Solid-state image sensors have experienced great improvements in performance over the past 10 years, especially in the area of low-light imaging. However, the image intensifier tube remains the primary device used in the reflective band for night vision devices. Advantages and disadvantages of various low-light imaging devices, including CCD, EMCCD, CMOS, image intensifiers, EBAPS, and others will be presented. The advantage of combining digital image enhancement with low-light video will be demonstrated. The results will suggest a technology path forward for night vision systems.

  4. Networking Refreshment Break

  5. Efficient and Compelling High Dynamic Range Imaging

    Timo Kunkel, Ph.D. | Senior Color and Imaging Researcher of Dolby Labs, Inc

    The term ‘High Dynamic Range’ imaging, or ‘HDR’, was coined over 20 years ago. Over time, various building blocks have been designed that are suitable to form perceptually correct, artistically compelling and technologically efficient HDR imaging systems. Now, as those technologies are starting to be implemented in mainstream devices, it is important to identify and keep track of several key perceptual and technological concepts in order to avoid pitfalls that can impact image fidelity when processing, transmitting, and displaying HDR imagery.

Film Based Imagers

This session will cover new approaches to image sensor technology.  Quantum dots and large-scale plastic printing are poised to push volume production of image sensors to a whole new level.

Session Chair: Suraj Bhaskaran, Engineering Director, Thermo Fisher Scientific

  1. Fueling an IoT-Driven Future: The QuantumFilm Imaging Platform

    Remi Lacombe | VP Sales and Marketing of InVisage

    QuantumFilm is a quantum dot-based photosensitive film engineered for integration on standard CMOS image sensor wafers. In combination with its high sensitivity, which is tunable for specific wavelengths of light, QuantumFilm’s electronic global shutter allows near-infrared sensors to perform outdoors in bright sunlight while saving as much as 50X in system power. These advantages make QuantumFilm ideal for IoT applications involving active illumination such as authentication, autonomous devices, and augmented reality. InVisage is producing QuantumFilm image sensors in quantity, in addition to enabling customer innovation through QuantumFilm platform partnerships. 

  2. Flexible Image Sensors Printed on Plastic

    Carlo Guareschi | Vice-President of Business Development of ISORG

    The presentation describes the breakthrough technology developed by ISORG and its partners, by which large-area, flexible image sensors are manufactured on a plastic substrate. Numerous applications can be enabled by the technology, such as lightweight, flexible X-ray imagers, large-area biometric sensors, and absence/presence detection for optimized inventory and object management. Sensors are manufactured with a printing process whose cost increases linearly with surface area, making large-area devices highly cost-competitive.

  3. Networking Lunch

Scientific Imaging

Scientific imaging needs are very different from the requirements of conventional imaging applications.  This session covers novel techniques used for scientific imaging ranging from hardware to software.

Session Chair: Paul Gallagher, Senior Director of Technology and Product Planning, LG Electronics

  1. Bio-inspired Event-based Sensor Technology for Computer Vision

    Christoph Posch | Co-Founder and CTO of Chronocam

    The traditional frame-based method of acquiring visual information that is used in practically all machine vision systems for capturing and understanding dynamic scenes is proving inadequate for a new generation of vision-enabled systems. With the increasing abundance of vision applications on diverse platforms, from smart mobile devices to moving objects like cars, planes or drones, state-of-the-art machine vision must address a new range of requirements that demand innovative approaches. Faced with limited bandwidth and power resources in these new systems, conventional approaches using frame-acquisition video sensing and subsequent video compression are falling short due to the mode of operation of the sensor and the computational power needed for compression. The advent of bio-inspired, event-driven vision is driving a paradigm shift in machine vision, leveraging the asynchronous acquisition and processing of visual information. This method is driven only by the relevant events happening in the scene, resulting in an almost time-continuous but very sparse stream of events carrying the full visual information. The now-proven benefits of a neuromorphic approach compared to conventional frame-based technology include increased speed, ultra-high dynamic range, savings in computational cost and improved robustness. As a result, demanding machine vision tasks such as real-time 3D mapping, complex multi-object tracking or fast visual feedback loops for sensory-motor action can run at kilohertz rates on cheap, battery-powered processing hardware, and “always-on” visual input for user interaction and environmental context awareness becomes feasible on smart mobile devices. This paper will present the concept of neuromorphic event-driven vision, a brain-inspired way of acquiring and processing visual information for computer and machine vision.
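
    The event-driven acquisition described above can be illustrated with a minimal single-pixel model: an event fires whenever log intensity moves by more than a contrast threshold, so a static scene produces no output at all. The 0.2 threshold and the sample values below are assumptions for illustration, not parameters of any actual sensor:

```python
import math

def events_from_samples(samples, threshold=0.2):
    """Generate event-camera-style output from (t, intensity) samples of
    one pixel: emit +1/-1 each time log intensity moves more than
    `threshold` away from the reference level set at the last event."""
    events = []
    ref = math.log(samples[0][1])
    for t, intensity in samples[1:]:
        logi = math.log(intensity)
        while logi - ref > threshold:      # brightness increased
            ref += threshold
            events.append((t, +1))
        while ref - logi > threshold:      # brightness decreased
            ref -= threshold
            events.append((t, -1))
    return events

# A static scene emits nothing -- the source of the sparsity.
print(events_from_samples([(0, 100), (1, 100), (2, 100)]))   # []

# A brightness step emits a short burst of positive events.
print(events_from_samples([(0, 100), (1, 200)]))             # [(1, 1), (1, 1), (1, 1)]
```

    The log-intensity threshold is also what gives such sensors their high dynamic range: the same relative contrast change triggers an event in dim and bright light alike.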

  2. Imaging beyond the Bandgap: A 200 GHz Silicon CMOS Imager

    Kenneth Fourspring | Senior Image Scientist of Harris Corporation Geospatial Systems

    A small-format, 0.35-micron, room-temperature THz imager was fabricated in a standard silicon CMOS process. An array of test structure variations was fabricated and characterized to obtain an optimized pixel structure. Band-optimized antennas fabricated in situ couple the THz energy into the pixel structure. Low-temperature tests were performed to determine the THz response as a function of temperature and to isolate the THz photocurrent.

Medical Imaging

Imaging for medical applications has expanded significantly since the invention of CMOS-based image sensors.  This session looks at innovative approaches and products developed by various companies to tackle some of the most challenging needs of medical imaging.

Session Chair: Paul Gallagher, Senior Director of Technology and Product Planning, LG Electronics

  1. CMOS Image Sensors Open Up New Options in Medical Imaging

    Axel Krepil | Sensor + Division of Framos

    This presentation will provide an overview of global medical image sensor offerings and the latest requirements of the medical market, addressing high-end endoscopes, disposable endoscopes and microscopes. Benefiting from image sensor developments for smartphones, and prompted by Sony's end-of-life of CCD technology, endoscope manufacturers are boosting their efforts to launch new device generations by 2020. In particular, CMOS sensors' much smaller size, lower power consumption and lower cost, combined with acceptable image quality, help develop numerous new tools, devices and business models to serve the medical market from diagnostics to surgery.

  2. Networking Break

  3. Miniature Form Factor Camera Modules for Medical Endoscopy

    Martin Waeny | CEO of AWAIBA

    Small form factor camera modules have become ubiquitous due to the boom in portable device applications. These technological developments have made it possible to spin off minimal form factor products optimized for medical endoscopy, replacing fibre-bundle endoscopy with smaller instrument diameters or higher image quality. The application of “chip-on-the-tip” miniature camera modules also drives cost down and enables “disposable”, single-use endoscopic equipment – a trend that improves patient safety by reducing cross-contamination from reused equipment, and that allows visualization to be embedded in tools formerly used blind.

    This talk gives an overview of full wafer-level integrated technologies for realizing sub-1 mm camera modules for medical endoscopy, as well as optimized camera modules with a round shape. Future technological requirements and trends are also discussed.

Opening Networking Reception

  1. Evening Reception

Day 2: October 26, 2016

Image Sensors Americas 2016 | Day 2

  1. Registration | Morning Coffee | Exhibit Hall Opens

  2. Opening Remarks from Conference Organizers

Multi-spectral and Hyperspectral Imaging

Imaging is not confined to the visible domain; interesting and innovative applications are using a much broader spectrum of light energy to enhance performance and identify opportunities.  This session will look at some of the opportunities that arise when you broaden the spectrum to include both visible and non-visible light, as well as some of the pitfalls to take into account in your design.

  1. Optical Solution for NIR Crosstalk in RGBir Hybrid Sensors

    Rich Hicks | Senior Camera and Imaging Technologies of Intel

    A variety of usages requiring cameras to be responsive to Near Infra-Red (NIR), including biometric authentication, are coming to market, increasing the desirability of combining NIR and visible imaging in a single, hybrid sensor.  One challenge faced by those attempting this combination is leakage of NIR energy into the color channels, which degrades the quality of the visible image.  Rather than focusing solely on correcting this in post-processing, we have analyzed the end-to-end optical system and worked with a major color filter supplier to develop and productize a new filter material that resolves the NIR leakage optically, enabling a high-quality RGB and NIR hybrid camera.

  2. ALCHERA: A Hyperspectral Architecture That Enables Ubiquitous Imaging Spectroscopy

    Alex Hegyi | Research Staff of PARC, a Xerox Company

    The implications of a truly ubiquitous hyperspectral imaging technology are profound and cut across many domains of human activity including agriculture, medicine, defense, and food security. Until now, the cost, size, and complexity of hyperspectral imagers and data analysis have prohibited the realization of this scenario. PARC’s revolutionary ALCHERA system (Actuation of Liquid Crystal for Hyperspectral Remote Analysis) has the potential to democratize hyperspectral imaging technology; it employs polarization interferometry through a liquid crystal cell to transform any imager into a hyperspectral sensor with full visible and near-infrared (400 nm – 1000 nm) spectral coverage. The resulting system is compatible with the size and cost requirements for inclusion in a mobile phone, and with its dynamic software configurability of spectral resolution, spatial resolution, and imaging speed, the system can enable truly portable, general purpose hyperspectral imaging.

  3. Using Advanced Imaging to Optimise Supply Chain Processes: The Beef Example

    Alexandra Booth | Research Scientist of ImpactVision

    ImpactVision has conducted a pilot study using advanced imaging technology to assess the quality attributes of beef based on the parameters of pH, L*a*b* color space and tenderness. To interpret the quality parameters, we used hyperspectral imaging technology and took images from 400 nm to 2,300 nm across several hundred different bands.  By training a neural network with "ground truth" data we were able to get R² values of over 0.9 for predicting pH, L* and a*; for tenderness we got over 0.7. The pilot study also shows that it is possible to look in the VNIR spectrum for these quality attributes, as opposed to going further into the infrared.  Partners in the food industry are interested in using contactless and rapid measurements to determine the different quality attributes of food, to optimise supply chains and reduce waste, and we are currently calibrating our software with different breeds of cattle for analysis.
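
    The R² ("r-square") figure quoted above is the coefficient of determination: the fraction of the variance in the ground truth that a model's predictions explain, with 1.0 a perfect fit. A minimal sketch with made-up pH values, not ImpactVision's data:

```python
def r_squared(actual, predicted):
    """Coefficient of determination: 1 - (residual sum of squares /
    total sum of squares around the mean of `actual`)."""
    mean = sum(actual) / len(actual)
    ss_tot = sum((a - mean) ** 2 for a in actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    return 1 - ss_res / ss_tot

# Hypothetical pH ground truth vs. model predictions.
truth = [5.4, 5.6, 5.9, 6.1, 6.4]
pred  = [5.5, 5.6, 5.8, 6.2, 6.3]
print(round(r_squared(truth, pred), 3))   # 0.936
```

    An R² above 0.9 thus means the spectral model tracks the measured pH closely; the lower 0.7 for tenderness leaves roughly a third of its variance unexplained.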

  4. Networking Refreshment Break

Designing Always-On Solutions (IoT)

When IoT is augmented with sensors and actuators, the technology becomes an instance of the more general class of cyber-physical systems, which also encompasses technologies such as smart grids, smart homes, intelligent transportation and smart cities.

Session Chair: Paul Gallagher, Senior Director of Technology and Product Planning, LG Electronics

  1. Image Sensor Needs: A Customer’s Perspective

    Scott Campbell | V.P. of Research of GoPro

    In this presentation we explore and discuss the complex trade-offs that must be made in architecting a modern imaging system, giving special attention to the role the image sensor plays as a vital link in that chain.  We approach this exploration from both the consumer and professional perspectives to give breadth and relevance to the topics presented.  Finally, we present some customer requests for new and/or improved capabilities from the image sensor.

Imaging for a New Reality

Session Chair: Paul Gallagher, Senior Director of Technology and Product Planning, LG Electronics

  1. The Eyes Have It

    Peter Milford | CTO of EyeFluence

    Eye interaction technologies complement augmented and virtual reality HMDs. Wearable eye tracking uses include foveated rendering, interpupillary distance (IPD) measurement, gaze tracking and user interface control. One, two or more cameras per eye can be used. I will review eye interaction uses and associated system requirements.

  2. Networking Lunch

Mobile Imaging

The needs of mobile imaging are unique.  Imaging systems designed for mobile applications have to comply with challenging space and power constraints.  This session will cover topics related to consumer mobile imaging systems.

Session Chair: Paul Gallagher, Senior Director of Technology and Product Planning, LG Electronics

  1. IEEE CPIQ Standard for Measuring and Benchmarking Image Quality in Consumer Mobile Devices

    Jonathan Phillips | Staff Image Scientist of Google

    Image processing has strong influence on the visual appearance of sensor output in consumer devices.  In fact, the camera tuning process can generate significantly different image output, even for products with the same sensor.  This, among other factors, has resulted in a market with camera phones that range in image quality, such as from low to high noise and from soft to strong sharpness in the final images.  The IEEE P1858 Standard for Camera Phone Image Quality provides a standardized procedure for measuring such differences and predicting the image quality impact on the consumer photographs from a mobile device.  Results from a set of various camera phones will be presented and described.

  2. Mobile Imaging

    Sean Kelly | VP of Imaging of Motorola

    Solving for image quality and thinness in today’s mobile imaging platforms has led to significant image sensor performance improvements, in addition to imaging system level design improvements in lenses and image processing.  Natural imaging physics nonetheless plagues these small imaging systems, usually inducing fundamental trade-offs between image quality and phone camera size.  The addition of features like true optical zoom to this already challenging trade space has proven too challenging for much of the industry.  We present our approach to adding significant optical zoom to the mobile platform.

  3. Networking Refreshment Break

Advances in Image Processing for Growing Applications

This session will highlight advances in image processing for growing applications such as automotive, camera technology and various optical designs.

Session Chair: Paul Gallagher, Senior Director of Technology and Product Planning, LG Electronics

  1. Deep Learning, Driving the next Image Processing Revolution

    Yair Siegel | Director of Product Marketing, Imaging & Vision of CEVA DSP

    During this presentation you will learn about the benefits and uses of deep learning for embedded cameras, which technologies work in conjunction, and how this affects image sensors, and we will discuss the challenges limiting technology breakthroughs.

  2. Using Computational Power to Overcome the Hardware Constraints in Optical Designs

    Eugene Panich | CEO, Co-Founder of Almalence

    Digital techniques have played a vital role in camera systems, starting at the sensor and going all the way to the display.  Optical designs, however, remain relatively unevolved and are still dependent on the optical hardware to define their performance and capabilities.  New digital techniques are emerging to virtualize what previously could only be achieved in glass.  The implications of such developments may set new trajectories for camera designs to reach previously unattainable performance targets and result in new, lower cost, smaller, lighter, and even collapsible optical systems that don't compromise image quality.

  3. Plotting the Evolution of Image Sensors and Imaging in a Vehicle

    Mark Bünger | VP of Research of Lux Research Intelligence

    Image sensing in vehicles continues to attract high levels of investment globally.  The market for ADAS systems is growing, and in light of the growing appetite for autonomous vehicles and OEMs' need to provide them, image sensors are bound to see growth in the industry. While there have been high levels of interest in imaging for the exterior of the vehicle, the inside of the vehicle has largely been absent from public discussion. This talk will:

    (1)    Make the case for vehicle sensors – both inside and out

    (2)    Point to gaps in innovation and highlight technology trends for the future.

  4. Methodologies for Increasing Dynamic Range & the Impact on ADAS, Autonomous Driving, Commercial Drones, Intelligent Surveillance and Industrial Applications

    Dr. Igor Vanyushin, Pinnacle Imaging Systems, Inc.

    New methodologies are being incorporated at the sensor level, with dynamic readout architectures and post-capture algorithms embedded in logic, to vastly improve dynamic range for specific applications, moving above a 120 dB threshold. This presentation provides an in-depth look at these newer architectures and methodologies and their potential impact on learning networks and application areas.
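
    Dynamic range in dB is the ratio of the largest to the smallest resolvable signal, expressed as 20·log10 of that ratio. The full-well and noise-floor numbers below are illustrative assumptions, not figures from the talk:

```python
import math

def dynamic_range_db(full_well_e, noise_floor_e):
    """Intra-scene dynamic range in dB from full-well capacity and
    noise floor, both in electrons."""
    return 20 * math.log10(full_well_e / noise_floor_e)

# A conventional single-exposure sensor: ~10,000 e- full well, 2 e- floor.
print(round(dynamic_range_db(10_000, 2), 1))   # 74.0 dB

# Crossing 120 dB means resolving a million-to-one signal ratio -- far
# beyond a single linear exposure, hence multi-exposure and
# dynamic-readout architectures.
print(10 ** (120 / 20))                        # 1000000.0
```

    The gap between ~74 dB and 120 dB is why the abstract stresses readout architecture and post-capture algorithms rather than pixel improvements alone.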

  5. Processors for Embedded Computer Vision: Options and Trends

    Jeff Bier | Founder and President of Embedded Vision Alliance and BDTI

    Advances in computer vision algorithms are enabling everyday devices to gain visual understanding – converting pixels into insights to improve devices’ autonomy, safety, efficiency and ease of use. But computer vision applications demand lots of processor performance. Delivering the required performance at cost and power consumption levels acceptable for mass-market devices is a significant challenge – especially when coupled with developers’ need for flexible programmability. Processor suppliers have responded with a diverse range of architectures, combining CPUs with GPUs, DSPs, FPGAs, vision-specific processors and neural network engines. In this presentation, Jeff provides a map of the landscape of processor options for vision applications, highlighting strengths and weaknesses of different processor types. He also illuminates important trends in vision processors (such as specialized neural network engines) and associated development tools (such as support for standard APIs).