2017 Agenda

The hottest topics in digital imaging technology

Day 1 | October 12, 2017

Registration and Opening Welcome Remarks

  1. Day One Registration Begins

  2. Welcome & Opening Remarks

Session I: Market Overview

In this session, delegates will hear about the latest industry developments and opportunities for image sensors. The presentations will cover the changing market, the opportunities it creates for image sensor applications, and how the image sensor market is adapting to serve these new applications.

  1. Opening Keynote Presentation: Lifestyle Image Sensor Requirements

    Dr. Farhad Abed | Image Quality Engineer of GoPro

    The growth of CMOS image sensor production demonstrates its demand and presence in many aspects of our everyday activities. Among these applications, lifestyle and consumer photography is considered to be the most significant portion of sensor consumption. Cellphones, DSLR/DSC cameras, video camcorders, action cameras, drones, etc. represent capture systems whose goal is ultimate image quality. However, despite recent technical growth in sensor manufacturing and fabrication, image quality has not yet reached consumer expectations. Low-light performance, HDR capability, and higher frame rates are examples of sensor specifications that require further attention. This talk aims to help contributors to sensor manufacturing prioritize possible improvements in fundamental, intermediate, and advanced image-quality attributes for lifestyle photography.

  2. Image Sensor Venture and M&A Activity: An Overview of Recent Deals, Trends, And Developments

    Rudy Berger | Managing Partner of Woodside Capital Partners

    The image sensor market is undergoing explosive growth driven, in particular, by the consumer, automotive, and security/surveillance markets.  The rapid adoption of embedded vision technology into all three of these markets is creating new opportunities within the image sensor and embedded camera sectors.  This talk will provide an overview and forecast of market trends in the high volume embedded imaging sectors along with a review of related financial transactions (venture and M&A) in the industry.  The speaker will also discuss a number of public and private companies that are likely to make an outsized contribution to the embedded imaging sector.

  3. Morning Networking Break

Session II: Advancing Image Sensor Technology - Improving Precision, Performance, and Overcoming Challenges

In this session, presentations will cover advances in image sensor technology that are improving sensor performance and overcoming challenges.

  1. Image Quality Oriented Sensor Characterization

    Dr. Zhenhua Lai | Imaging Optics System Engineer of Motorola Mobility

    Sensor characterization is important at an early stage of camera design, yet predicting the impact of the imaging sensor on camera performance is challenging. We propose an image quality (IQ) oriented sensor characterization methodology that runs standardized, systematic tests to characterize sensor performance at the camera system level, ensuring camera IQ at the design stage.

    The proposed sensor characterization methodology predicts camera performance under various use cases and is tailored to the image signal processor (ISP). The methodology has been integrated into Motorola's cell phone camera design process; by selecting the most suitable sensors, we have achieved high IQ without raising camera cost. The methodology is also useful for guiding sensor design at image sensor vendors.
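
    To make concrete the kind of system-level prediction such a characterization can output, here is a minimal sketch that estimates single-pixel SNR from a few generic sensor parameters. The noise model (shot, read, and dark noise added in quadrature) and every numeric value are textbook assumptions for illustration only, not Motorola's methodology or data.

    ```python
    # Minimal sketch: predicted single-pixel SNR from generic sensor parameters.
    # All parameter values are illustrative assumptions, not measured data.
    import math

    def predicted_snr(photons, qe=0.7, read_noise_e=2.0, dark_e=1.0):
        """SNR of one pixel for a given number of incident photons."""
        signal = qe * photons                                  # collected photo-electrons
        noise = math.sqrt(signal + read_noise_e**2 + dark_e)   # quadrature sum of noise sources
        return signal / noise

    for photons in (50, 500, 5000):   # roughly low light -> bright light
        print(f"{photons:5d} photons -> SNR {predicted_snr(photons):6.1f}")
    ```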

  2. A New Frontier in Optical Design: Segmented Optics Combined with Computational Imaging Algorithms

    Dmitry V. Shmunk | CTO of Almalence Inc

    Advances in both free-form optics manufacturing and computational image processing open avenues for unconventional optical designs. The proposed design allows simplified optical elements, subject to less precise mechanical tolerances, to still produce uncompromised image quality, and it may be used in both display and capture pipelines. This improvement is achieved by redistributing optical tolerances across all the optical elements and introducing a segmented optical correction element residing near the sensor or display component. Furthermore, the resulting image quality is optimised by the use of digital aberration correction algorithms.

    Such a design opens new possibilities for innovative optical systems and solves problems previously preventing their commercial success. For example:

    • optical systems with a reduced number of elements for lighter weight and smaller size yet uncompromised image quality
    • relaxation of tolerances and rigidity of the mounts
    • mechanically foldable optical designs, utilising very thin optical elements
    • high-quality, wide-FoV AR/VR display systems with increased size of eye-box
  3. IR Bolometer Technology

    Patrick Robert | Electronic Design Manager of ULIS

    Thermal sensors capable of detecting the heat emitted by people and objects at longwave infrared (LWIR) wavelengths are gaining increasing interest for smart city, automotive and mobile thermography applications. Micro-bolometer arrays have become the technology of choice for capturing thermal images at low cost, with extended dynamic range and resolutions from 80x80 to 1280x1024 pixels. Such broad adoption of micro-bolometer-based thermal sensors is made possible by pioneering developments at ULIS on vacuum wafer-level packaging (WLP) techniques compatible with emerging mass-market volumes, as well as novel readout circuit architectures for high dynamic range (HDR) and fast infrared imaging. This presentation will illustrate ULIS’ latest patented WLP, HDR and fast imaging innovations.

  4. Global Shutter vs. Rolling Shutter: Performance And Architecture Trade Off

    Abhay Rai | Director of Product Marketing of Sony Electronics

    This presentation discusses two popular CMOS image sensor implementations and their use in effectively addressing some end applications. A simplified exposure-timing sketch illustrating the basic rolling/global distinction follows the topic list below.

    • Pixel architecture, readout and sensitivity difference
    • Achieving high dynamic range
    • Benefits and limitations
    • Popular use cases and application needs
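
    As a rough illustration of the first bullet above, the sketch below contrasts the per-row exposure start times of rolling and global shutter readout; the array height, line time, and exposure values are purely illustrative assumptions.

    ```python
    # Hedged sketch of the exposure-timing difference between rolling and
    # global shutter readout. All numbers are illustrative assumptions.

    ROWS = 1080            # hypothetical array height
    LINE_TIME_US = 15.0    # hypothetical per-row readout time (microseconds)
    EXPOSURE_US = 5000.0   # hypothetical exposure length

    def exposure_window(row, shutter="rolling"):
        """Return (start_us, end_us) of the exposure for a given row."""
        if shutter == "global":
            # All rows integrate simultaneously, then charge is stored per pixel
            # and read out afterwards: no row-to-row skew.
            start = 0.0
        else:
            # Rolling shutter: each row starts integrating one line-time after
            # the previous one, so a fast-moving subject is sampled at different
            # instants across the frame (the source of skew artifacts).
            start = row * LINE_TIME_US
        return start, start + EXPOSURE_US

    top = exposure_window(0)
    bottom = exposure_window(ROWS - 1)
    print("rolling-shutter skew between first and last row: %.1f ms"
          % ((bottom[0] - top[0]) / 1000.0))   # ~16.2 ms for these numbers
    ```
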
  5. Enhancing the Spectral Sensitivity of Standard Silicon-based Imaging Detectors

    Zoran Ninkov | Professor in the Center for Imaging Science (CIS) of Rochester Institute of Technology

    Improving the sensitivity of silicon-based CMOS and CCD detectors is an area of ongoing interest, especially in the UV. Lumogen has been used for this purpose for many years but has several known issues, including limitations to its use in vacuum and in radiation-harsh environments.

    Quantum dots (QDs) offer a more robust alternative to Lumogen. The fluorescence wavelength of QDs is tunable, and QD layers can be fabricated to match the peak sensor quantum efficiency. We have used aerosol jet printing (AJP) for the deposition of QDs on a variety of substrates and on commercially available sensor arrays. While the films deposited onto various substrates to date have a surface morphology characterized by aggregate formations, the insight obtained in studies to date will lead to much more uniform deposition in the near future. QD coatings offer the promise of a uniform spectral response over a broad range of UV wavelengths.

  6. Networking Lunch

Session III: Sensor Applications

The possibilities for the use of image sensors are seemingly endless. This session will feature applications that are helping to move the image sensor industry forward. 

  1. TDI Imaging Using CCD-in-CMOS Technology: An Optimal Solution for Earth Observation, Industrial Inspection and Life Sciences Applications

    Arye Lipman | Strategic Alliances Manager of imec

    Given the linear movement of the subject in earth observation, industrial inspection, and certain life sciences applications, the time delay and integration (TDI) technique is often used, as it provides an image with increased signal-to-noise ratio by oversampling the same scene with a time delay. CCD imagers are widely used for TDI applications as they combine intrinsic photon detection and noiseless charge transfer, which results in very efficient oversampling when synchronized with the subject's movement. However, CCD imagers require external drivers and readout electronics. In contrast, CMOS circuits allow full system-on-a-chip integration: CMOS imagers are highly miniaturized and power-efficient imaging devices. Unfortunately, traditional CMOS imagers are not well suited to performing TDI, as they cannot avoid a noise penalty. To realize the best of both worlds, imec has developed CCD-in-CMOS technology which, in combination with backside illumination, results in TDI imagers featuring high sensitivity, low noise, and low power consumption. The latest results of imec's TDI development will be discussed. Imec partners with companies in machine vision (e.g. flat panel inspection), remote sensing (e.g. small satellites and unmanned aerial vehicles), and life sciences (e.g. DNA sequencing and cell cytometry) to develop custom CCD-in-CMOS TDI solutions.
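
    A minimal numerical sketch of the TDI benefit described above: signal accumulates linearly over the TDI stages while shot noise grows only as its square root, so SNR improves roughly as the square root of the stage count. The per-stage signal, read noise, and the simple noise model are illustrative assumptions, not imec measurements.

    ```python
    # Hedged sketch of the TDI principle: the same scene line is integrated over
    # N stages while the image moves in sync with charge transfer.
    import math

    signal_e_per_stage = 50.0   # hypothetical photo-electrons collected per stage
    read_noise_e = 5.0          # hypothetical read noise (one readout per TDI column)

    def tdi_snr(n_stages):
        signal = n_stages * signal_e_per_stage
        shot_noise = math.sqrt(signal)                          # Poisson shot noise
        total_noise = math.sqrt(shot_noise**2 + read_noise_e**2)
        return signal / total_noise

    for n in (1, 16, 64, 256):
        print(f"{n:4d} TDI stages -> SNR ~ {tdi_snr(n):6.1f}")
    ```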

  2. Semiconductor Sequencing Technology: A Scalable, Low-Cost Approach Using Integrated CMOS Sensor Arrays

    Brian Goldstein | Sr. Staff Engineer in Sensor Design Engineering in the Clinical Next-Generation Sequencing Division of Thermo Fisher Scientific

    Given the importance of DNA sequencing to life sciences, biotechnology, and medicine, it is desirable to drive down sequencing cost. Thermo Fisher Scientific utilizes semiconductor manufacturing techniques to create a scalable, low-cost approach to DNA sequencing. The single-use, disposable chips are the core of the DNA sequencing platform offered by Thermo Fisher Scientific. Sequencing data is obtained by directly sensing the ions produced by template-directed DNA polymerase synthesis using all-natural nucleotides. The chips utilize an array of ion-sensitive, field-effect transistor-based sensors, in an architecture similar to that of CMOS image sensors, along with a corresponding array of micro-wells. Column-parallel readout enables very high-throughput sensing. Our largest array chip can sense more than 500 million simultaneous sequencing reactions and produce data at a rate of more than 100 Gbps.
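
    As a back-of-the-envelope check on the figures quoted above (more than 500 million wells, more than 100 Gbps), the sketch below relates well count, sample bit depth, and sampling rate; the bit depth and frame rate are assumptions chosen for illustration, not Thermo Fisher specifications.

    ```python
    # Back-of-the-envelope data-rate sketch; only the well count comes from the abstract.
    wells = 500e6          # sensing wells read in parallel (from the abstract)
    bits_per_sample = 14   # assumed ADC resolution
    frames_per_second = 15 # assumed full-array sampling rate during a nucleotide flow

    data_rate_bps = wells * bits_per_sample * frames_per_second
    print(f"approximate raw output: {data_rate_bps / 1e9:.0f} Gbps")  # ~105 Gbps
    ```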

  3. Photon-to-Photon CMOS Imager: Optoelectronic 3D Integration

    Gaozhan Cai | Design Team Leader for custom CMOS image sensors of Caeleste

    3D integration and back-side illumination (BSI) are two key technologies for future high-performance, high-speed imaging. We present a method to solve the expected data bottleneck by adding an optical link layer: the 3D integration of electrical ICs is extended with a photonic IC. This photonic IC offers optical links, thereby reducing overall system complexity and power consumption while achieving a very high data rate. The pixel-array wafer is back-side thinned and wafer-bonded to the read-out IC (ROIC) wafer. The ROIC layer encompasses column readout circuits, on-chip CDS circuits, ADCs, and the interface to the optical modulators. Through-silicon vias (TSVs) are embedded in the ROIC wafer for bonding to the photonic wafer. The photonic wafer has through-oxide vias (TOVs) for connection to a package or a PCB with solder bumps. Its essential parts are ring-resonator modulators and grating-type optical couplers for interconnection to input and output optical fibers.

  4. CMOS Sensor for Space Application

    Dr. Bill Wang | President of CMOS Sensor Inc

    CCD image sensors have been used successfully in orbital platforms for many years. However, CCD image sensors require several high-speed, high-voltage clock drivers as well as analog video processors to support their operation. These support circuits must be shielded and placed in close proximity to the image sensor IC to minimize the introduction of unwanted noise. The end result is a camera that weighs more and draws more power than desired. CMOS sensors, on the other hand, allow for the incorporation of clock drivers, timing generators and signal processing onto the same integrated circuit as the image sensor photodiodes. This keeps noise to a minimum while providing high functionality at reasonable power levels. CMOS Sensor Inc. employs its proprietary advanced CTIA structure and buffer MOS readout method to eliminate fixed-pattern noise. Owing to these performance breakthroughs, CMOS Sensor Inc. has developed three different CMOS sensors for space applications: the C640 (4,000-element linear sensor) for the Terrain Mapping Camera (TMC), the C650 (256 x 512 area sensor) for the Hyper Spectral Imager (HySI), and the C468 (5-band, 12k- and 6k-element sensor) for the Remote Sensing Instrument (RSI). Space camera design becomes very simple when using these space-qualified CMOS sensors. In addition, the resulting space-qualified camera offers very low weight, low power consumption and high radiation tolerance. The detailed device structure and performance of the three space-qualified CMOS sensors will be presented.

  5. Afternoon Networking Break

  6. Going Beyond 2x Optical Zoom In Dual Cameras: The Future of Dual Camera Technology

    Dr. Gal Shabtay | General Manager, VP R&D of Corephotonics

    The increasing efforts to make mobile devices slimmer pose height restrictions on the camera module, limiting its performance. Dual cameras offer alternative camera designs which overcome this limit while introducing new imaging features previously limited to DSCs and DSLRs, such as optical zoom. In order to achieve high-quality optical zoom in a dual camera setup, smartphone manufacturers have started pairing a wide-FOV camera with a tele-FOV camera. One key challenge in such a dual camera design is its thickness, which is determined by the effective focal length of the Tele camera, limiting its maximum zoom range to approximately 2x.

    As consumers demand higher zoom ranges (e.g. up to 5x), the architecture of the Tele camera must change in order to keep a slim camera structure. To that end, it is proposed to fold the light entering the Tele camera and design a folded Tele lens whose focal length is dramatically larger than that of the Wide camera, while its entrance pupil is increased accordingly so that the F-number is kept low, and the Optical Image Stabilization (OIS) is redesigned to fit the folded structure. When compared with single camera designs, the result is a 5x optical zoom camera having 5x larger effective resolution and more than 10 dB SNR increase in low light. To maximize the experience of this folded camera and maintain a high level of image quality, images from the Wide and Tele cameras are fused together to create the output image, while accounting for parallax, occlusions and other effects. In this session, we will provide more details on such a folded computational camera design and show side-by-side comparisons to leading camera phones.
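
    A short arithmetic sketch of the focal-length trade-off described above: holding the F-number constant while raising the zoom factor forces the Tele focal length and entrance pupil to grow together, which is what motivates the folded design. The focal length and F-number values are illustrative assumptions, not Corephotonics specifications.

    ```python
    # Hedged arithmetic sketch: F/# = focal length / entrance-pupil diameter.
    wide_f_mm = 4.0          # assumed Wide-camera effective focal length
    f_number = 2.4           # assumed target F-number, kept constant

    for zoom in (2.0, 5.0):  # conventional upright Tele vs folded Tele
        tele_f_mm = zoom * wide_f_mm
        pupil_mm = tele_f_mm / f_number          # entrance pupil needed to hold F/#
        print(f"{zoom:.0f}x zoom: Tele focal length {tele_f_mm:.1f} mm, "
              f"entrance pupil {pupil_mm:.1f} mm")
    # A ~20 mm focal length cannot stand upright in a few-mm-thick phone,
    # which is why the abstract folds the optical path sideways.
    ```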

  7. Image Sensors for the Endoscopy Market: Customer Needs and Innovation Opportunities

    Dave Shafer | Managing Fellow of Intuitive Surgical

    The medical endoscopy market is a small one in terms of volume, and split across many customers with very little in the way of shared platforms or standard applications. However, the impact of these endoscopes on the quality care of patients is enormous, and there are many opportunities to leverage imaging advancements from other applications to improve medical care.

    Endoscopy presents some unfamiliar challenges to an image sensor, including: unusual pixel size and array shape; packaging requirements; connecting the sensor to the outside world; and surviving the sterilization process. This talk will focus on medical market needs and the opportunities for innovative solutions they hold.

  8. Closing Remarks For The Day

  9. Evening Networking Reception

Day 2 | October 13, 2017

Registration and Opening Remarks

  1. Registration Opens

  2. Welcome & Opening Remarks

Session IV: Augmented and Virtual Reality

This session will explore how image sensors are part of the emerging AR/VR markets. The presentations will provide a platform to drive innovation and exchange ideas, helping the sensor industry move AR and VR technology forward.

  1. Keynote Presentation from Oculus

    Dr. Chiao Liu, Oculus

    Abstract to come

  2. Image Sensor Technology for AR/VR

    Patrice Roulet | Co-Founder & VP, Technology of ImmerVision

  3. The Power of Interactive VR

    Arnaud Baernhoft | Sr. Virtual Reality & 360 Film Producer of Visualise

    Interactivity plays an important role in the progress of current VR content. As we push experiences to the edge of believability, we need to work on that fine line that separates reality and imagination: interactivity. The focus is too often on the conscious influence of experiences: I do this so that this happens.

    But the way we interact with the real world around us can also be very unconscious. Therefore, the scenes and events that unfold in a VR experience could all be different depending on who watches the experience and how they watch it, without viewers realizing they are having any impact on the content.

  4. Morning Networking Break

  5. Image Sensors for AR/VR Systems

    Nick (Seok Hoon) Nam | Director of Marketing, AR/VR & LCOS of OmniVision

    • Requirements of AR/VR systems
    • Current AR/VR products in the market
    • Moving forward – future solutions
  6. Afternoon Networking Lunch

Session V: Machine Vision and Artificial Intelligence

This session will feature presentations on machine vision, machine learning, and embedded vision.

  1. Morning Networking Break

  2. Will Your Next Sensor Assist in Replacing Your Job?

    Yair Siegel | Director of Strategic Marketing of CEVA

    The sensor evolution to drive the next AI devices has started, yet it is far from finished. This presentation will discuss recent and future changes in embedded vision processing, artificial intelligence and adjacent technologies in relation to image sensors: how do you survive a changing market yet deliver more value to your customers? What new opportunities are out there as devices become smarter?

  3. Enabling Always-On Machine Vision

    Dr. Evgeni Gousev | Senior Director of Qualcomm Technologies Inc.

    Intelligent devices with human-like senses have enabled a variety of new use cases and applications, transforming the way we interact with each other and our surroundings. While the vast majority (>80%) of human insight comes through the eyes, always-on vision remains challenging due to the significant power consumption of the hardware and the high complexity of inference algorithms. Qualcomm Research has pioneered an Always-on Computer Vision Module (CVM) combining innovations in system architecture, ultra-low-power design and dedicated hardware for CV algorithms running at the “edge”. With low end-to-end power consumption, a tiny form factor and low cost, the CVM can be integrated into a wide range of battery- and line-powered devices (IoT, mobile, VR/AR, automotive, etc.), performing object detection, feature recognition, change/motion detection, and other applications. Its processor performs 100% embedded computation within the module itself and outputs metadata.

  4. PanomorphEYE Human Sight Sensor For Artificial Intelligence Revolution

    Patrice Roulet | Director of Engineering, Technology and Co-Founder of Immervision

    Since the mid-20th century, algorithms have evolved in pursuit of the holy grail of human intelligence, from the Turing machine to the most recent reinforced deep convolutional neural networks.

    In parallel, silicon has also evolved tremendously, from CPUs, GPUs, and dedicated deep neural net SoCs to quantum computing, going even beyond Moore’s predictions. Yet vision sensors still remain narrow and limited, nothing comparable to human vision abilities. Deep neural nets are trained with narrow-angle monoscopic images, giving AI the sight of a cyclops with severe glaucoma. In this talk, we will explore how panomorph technology replicates multiple forms of natural biological eyesight and, more specifically, how the latest research program, PanomorphEYE, mimics human sight, enabling a Cambrian explosion for smart devices and leading to an AI revolution whose outcome no one can predict.

Session VI: Emergent Technology

This session will feature presentations on new technology to meet sensor challenges and demands.

  1. High-Speed Imaging: Core Technologies and Devices Achieved

    Takashi Watanabe | Developer of log-type imagers and range image sensors of Brookman Technology, Inc.

    High-speed imagers are in demand in various fields such as industrial applications, traffic monitoring, unmanned vehicles, robotics, moviemaking and scientific imaging. In such applications, global shutter CISs are required to avoid the motion artifacts which occur in normal rolling shutter CISs. High-speed CISs also require high sensitivity, because the quantity of incident light is limited by the short exposure time and the signal-to-noise ratio is very important at normal or low light conditions.

    Brookman Technology has developed two types of high-speed global shutter CIS: the BT130A has a 1/2" optical format with 1280(H) x 1024(V) effective pixels and can be operated at 1,800 fps; the BT033A has a 1/2" optical format with 640(H) x 480(V) effective pixels and can be operated at 7,200 fps. Their noise floors are around 7 electrons. Such low noise is attributed to the True Snapshot PIXEL (TS-PIXEL), which uses a charge-domain memory in each pixel that enables true CDS operation. High-speed readout is realized by a high-speed column-parallel ADC that uses our proprietary cyclic ADC technology. The TS-PIXEL and the cyclic ADC are based on technologies from Shizuoka University. Brookman Technology has also developed another type of high-speed CIS: the BT3300N is the world's first full-spec Super Hi-Vision image sensor, consisting of 7688(H) x 4328(V) pixels and operating at a high speed of 120 fps, i.e. about 4 Gpixels/sec. The device also uses a column-parallel cyclic ADC with 14-bit resolution; the optical format is Super 35mm (28mm diagonal), the readout mode is rolling and the dynamic range is more than 78 dB.

    In this talk, we will explain the basic technologies used in our high-speed CISs and their general characteristics. Brookman Technology has been developing not only high-speed CISs but also other types of CIS, such as ultra-high-sensitivity CISs and time-of-flight 3D sensing CISs, and we will introduce these state-of-the-art devices as well.
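
    A quick sanity check of the throughput figure quoted above for the BT3300N; the pixel counts and frame rate come from the abstract, while the raw data-rate line simply multiplies by the stated 14-bit ADC resolution.

    ```python
    # Pixel-rate arithmetic for 7688 x 4328 effective pixels read out at 120 fps.
    h, v, fps = 7688, 4328, 120
    pixel_rate = h * v * fps
    print(f"pixel rate: {pixel_rate / 1e9:.2f} Gpixels/s")       # ~3.99 Gpixels/s
    print(f"raw data rate: {pixel_rate * 14 / 1e9:.0f} Gbps")    # ~56 Gbps at 14 bits/pixel
    ```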

  2. Tools and Processes Needed to De-risk the Design-In of Image Sensors

    Dr. Simon Che’Rose | Head of Engineering of FRAMOS

    In this age of competition, short design cycles and design risk, sensor manufacturers need to consider different tools and processes to help their customers design sensor components into imaging devices more efficiently. FRAMOS will walk attendees through a concise and detailed process that they believe will minimize the inherent risks tied to sensor design-in. This includes reference design kits, software development kits and IP components, as well as all the pillars tied to sensor integration.

  3. Afternoon Networking Break

  4. Image Sensor Requirements for 3D Cameras

    Rich Hicks | Senior Camera and Imaging Technologist of Intel, Global Supply Management

    Depth cameras (including Intel RealSense™ products) are coming to market in increasing numbers, delivering usages such as 3D photography, object scanning, indoor mapping and gesture recognition, to name a few. Unlike traditional photography applications, these cameras analyze the image to extract depth information and therefore have somewhat different requirements in terms of resolution, pixel size, and image quality performance. This presentation will introduce several leading technologies for capturing depth images and, based on depth camera needs, analyze the important aspects of the supporting image sensors and how they should be optimized for these new applications.

  5. Laser Diode Solutions for 3D Depth Sensing LiDAR Systems

    Dr. Tomoko Ohtsuki | Product Line Manager of Lumentum

    LiDAR is a required sensor for Level 3 and 4 ADAS applications. The illumination source for LiDAR systems significantly impacts not only performance, but also reliability and cost. A variety of 3D depth sensing (3DS) diode laser illuminators developed by Lumentum have been adopted and advanced in the consumer electronics market with a solid track record of reliability and quality. New high-power VCSEL array technologies are enabling more demanding and high-volume 3DS applications. We will present the robustness of 3DS cameras based on VCSEL array illuminators against sunlight, a temperature range of -40°C to 115°C, and other environmental stresses required for automotive-qualified products.

    • Diode laser illuminators for LiDAR
    • Lumentum diode lasers with a track record of reliability, quality, and high-volume manufacturing
    • Robustness against environmental stresses required for automotive-qualified products
  6. A Comparison Of Depth Sensing Solutions For Image Sensors, LiDAR And Beyond

    Scott Johnson | Director of Technology Business Alignment of ON Semiconductor

    There is a wide range of means to enable depth sensing in today's market, including time-of-flight, stereo, structured light, LiDAR, radar and other single-sensor solutions, each with its own pros and cons. This presentation guides us through these technologies and their most common and best use-case applications, as well as a glimpse of what the future holds.
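
    As one concrete example of the trade-offs this talk will cover, the sketch below applies the standard stereo triangulation relation Z = f * B / d; the focal length and baseline values are illustrative assumptions, not tied to any particular product.

    ```python
    # Hedged sketch of stereo depth sensing: depth from disparity via triangulation.
    focal_px = 700.0     # assumed focal length expressed in pixels
    baseline_m = 0.05    # assumed 5 cm baseline between the two imagers

    def depth_from_disparity(disparity_px: float) -> float:
        return focal_px * baseline_m / disparity_px

    for d in (70.0, 35.0, 7.0):
        print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(d):5.2f} m")
    # Depth resolution degrades as disparity shrinks: the same +/-1 px disparity
    # error matters far more at 5 m than at 0.5 m, one of the trade-offs between
    # stereo and time-of-flight/LiDAR approaches mentioned above.
    ```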

  7. Closing Remarks and Farewell