2018 Agenda

The hottest topics in digital imaging technology

Focusing on Vision and Action Within Image Sensors, this year's Image Sensors Americas program will feature two days of presentations and panel discussions covering the latest leading-edge sensor technology, the current capabilities of image sensors, machine learning and deep learning, AI, and more.

October 11, 2018 | Day 1

Welcome and Opening Remarks

  1. Registration Opens

  2. Welcome and Opening Remarks

Session I: Innovation in Action

These presentations will cover the changing market, the opportunities it creates for image sensor applications, and how image sensors are evolving to meet these new applications.

  1. Key Challenges of Image Sensors for ADAS and ADS

    Ronald Mueller | CEO of Vision Markets and Associate Consultant of Smithers Apex

    Image sensor development has been driven by large application markets such as smartphones, consumer cameras, and security cameras. Owing to recent regulatory and technological developments, the automotive image sensor market and its future potential have become so attractive that technical innovation in image sensors is now driven by the tough requirements of Advanced Driver Assistance Systems (ADAS) and Autonomous Driving Systems (ADS). This talk gives an introduction to the automotive imaging field, its image sensor applications and requirements, and the technical advancements that should also unlock new opportunities in other application areas such as machine vision, IoT, and surveillance.

  2. NASA-Jet Propulsion Laboratory Keynote Presentation | Image Sensors for Aerospace

    Dr. Shouleh Nikzad | Senior Research Scientist of NASA-Jet Propulsion Laboratory

    Discoveries beyond those achieved by the Hubble Space Telescope (HST), the Cassini spacecraft, and the Galaxy Evolution Explorer (GALEX) place exacting and challenging requirements on image sensors and detectors. In this talk, we will discuss these requirements and our work on high dynamic range, high sensitivity, and ultrahigh stability detectors and image sensors.

  3. The M&A and Funding Landscape for Image Sensor Companies

    Rudy Burger | Managing Partner of Woodside Capital Partners

    Financial transactions (both M&A and private placements) have continued over the last few years, propelled by the smartphone and autonomous vehicle markets and by the continued push for lower power consumption, improved low-light and infrared performance, and specialized computer vision applications. This presentation will examine the key acquisitions and investments for image sensor companies over the last few years and suggest a few key trends that will likely drive financial transactions for image sensor companies in the years to come.

  4. State of the Art Uncooled InGaAs Short Wave Infrared Sensors

    Dr. Martin H. Ettenberg | President of Princeton Infrared Technologies

    Short wave infrared (SWIR) imaging has made tremendous advancements in the last several years, with megapixel imagers now commercially available. The ability to image beyond the visible spectrum has many advantages, including the ability to see eye-safe lasers, detect water, and better penetrate fog and smoke for long-range imaging. These imagers are capable of thermally imaging objects at temperatures >100 °C through glass, which cannot be done with today’s uncooled microbolometer technology. Beyond imaging, various spectroscopic applications are also available in the SWIR that would be very difficult or impossible in the visible wavelength band. These imagers cover from 400 nm to 1700 nm, with some specialty arrays capable of detection out to 2200 and 2600 nm, which enables hyperspectral imaging to analyze plant health or find defective materials. Pixel pitch has steadily been reduced from 25 µm to 8 µm today, allowing imagers to be produced at a significantly lower cost. In addition to reduced pitch, imagers are now being manufactured on 100 mm wafers, with 150 mm production lines becoming available soon. Mass production will reduce costs further, making them comparable to uncooled long wave infrared technology. SWIR imagers have been leveraging the technology developed for visible CMOS imagers to produce more sensitive imagers with higher speeds, greater capabilities, and built-in A/Ds with 12 to 14 bits of resolution. Applications of uncooled SWIR imagers will be discussed along with future directions for the technology, such as avalanche photodiode arrays for LiDAR.

  5. Morning Networking Break

  6. Super-Wide-Angle Cameras: The Next Smartphone Frontier Enabled by Miniature Lens Design and the Latest Sensors

    Patrice Roulet Fontani | Vice President, Technology and Co-Founder of ImmerVision

    Smartphone cameras keep improving and threaten to replace DSLRs for consumer, prosumer, and even professional applications. Steven Soderbergh announced he will shoot his next movie with a smartphone! Now, multiple smartphone lenses and cameras are “de rigueur” to improve sharpness, detail, contrast, and sensitivity, and each new generation of phone outperforms the last. Smartphone camera field of view, which is generally limited by the main (rear) camera, is the next area for innovation and improvement. Whereas fields of view greater than 90° have been restricted to DSLRs and action cameras, advances in lens design, manufacturing, and software capabilities are removing the barriers to super-wide-angle adoption. Miniaturization, high-resolution lenses and sensors, and easy-to-integrate image processing algorithms allow advanced industrial and system design, and the field-of-view race started with the most recent smartphone product launches. In this talk, we will explore the latest super-wide-angle technology, lens design, and image processing to enable your next smartphone design to meet field-of-view expectations and perform like a professional camera.

  7. Integrated Photonics Technology: Driving the Evolution of Novel Image Sensors for LiDAR and THz imaging

    Bert Gyselinckx | Vice President & General Manager of Imec

    Integrated photonics technology has been a cornerstone for modern high-speed data communication. Now, the dense integration capabilities of CMOS combined with direct light modulation capabilities are also enabling novel imaging solutions. In this talk, we will address the capabilities and deployment of today’s integrated photonics for solid-state LiDAR imagers and THz arrays.

Session II: Supporting Technology

This session will explore technology being developed to meet current industry needs and address challenges.

  1. SPAD vs. CMOS Image Sensor Design Challenges – Jitter vs. Noise

    Dr. Daniel Van Blerkom | CTO & Co-Founder of Forza Silicon

    SPAD sensor design faces unique design challenges due to high voltage biasing and the high-speed asynchronous nature of the detectors.  However, there are some crucial commonalities between CMOS image sensors and SPAD sensors that admit similar design approaches and architectural trade-offs.  Key to the performance of SPAD sensors is the detector jitter, which parallels the low noise requirements in CMOS image sensor design.  In this talk, we discuss where we can leverage CMOS image sensor experience in SPAD sensor development, and where the approaches diverge.

  2. sCMOS Technology: The Most Versatile Imaging Tool in Science

    Dr. Scott Metzler | PCO Tech

    Scientific CMOS (sCMOS) image sensors are nearing a decade of providing a unique and previously unseen combination of high sensitivity, low readout noise, high frame rates, and large fields of view. As sCMOS technology develops and matures, the breadth of imaging applications that can utilize advantages of sCMOS has become quite impressive. Here, we discuss the wide array of current implementations of sCMOS in both life and physical sciences, with particular regard toward historically CCD- or EMCCD- dependent applications. 

  3. Afternoon Networking Lunch

  4. Toward Monolithic Image Perception Devices (MIPD)

    Guy Paillet | Co-Founder & CEO of General Vision Inc.

    Advances in semiconductor technologies have made possible a single-die device capable of learning and recognizing “visual scenes” in real time at very low power (a few mW). The silicon AI technology used in the MIPD allows for real-time learning and recognition at a cost of a few dollars and a footprint of a few square millimeters. This has the potential to disrupt the whole market by making image recognition fully commoditized and trainable by the user. Notably, a “discrete component” version has been used for fish inspection (more than 50 systems running 24/7), trained and operated by deck hands on Scandinavian fishing vessels since 2003, fully validating the approach.

  5. From The Outside In

    Richard Neumann | CEO of Sub2R

    A look at the challenges faced by a band of industry outsiders who have built the first consumer open-architecture imaging platform. It began by solving our own technical problem. Building a sensor-agnostic digital camera from scratch – how hard can that be? Documentation – what do you mean it is incomplete, wrong, or non-existent? What are those undocumented registers for? We’re only going to buy a few chips today; think we can ask one of your engineers a question? Writing every algorithm from scratch. What we’ve learned from over 1,000 people around the world about what they want in a digital camera. What’s next: implementing a convolutional neural network (AI) in parallel with a linear legacy image processing pipeline.

  6. Using Depth Sensing Cameras for 3D Eye Tracking

    Kenneth Funes Mora | CEO and Co-founder of Eyeware

    Machines such as cars, robots, computers or smartphones are not able to accurately track the human gaze in 3D. Sensing the user's attention is a core requirement in order to move towards a natural human-machine interaction. Eyeware is proposing a novel approach for solving the attention sensing problem using depth sensing cameras as sensors, as they are increasingly integrated in consumer devices like smartphones, laptops, robots and cars. The advantages of 3D eye tracking are illustrated with use cases in robotics, automotive, research, retail, and more.

  7. Closing Remarks For The Day

  8. Evening Networking Reception

October 12, 2018 | Day 2

Registration and Welcome Remarks

  1. Registration Opens & Continental Breakfast

  2. Welcome & Opening Remarks

Session III: Addressing Industry Needs

This session will explore the current capabilities of image sensors and what challenges need to be met in order to improve their precision and performance for various applications. 

  1. The Use of High Resolution Images with Depth Maps Generated by Camera Clusters

    Semyon Nisenzon | CEO of Cluster Imaging

    All traditional camera images are two-dimensional and visually flat. From these images it is impossible to quickly and accurately make distance and size measurements to create value-added layered data. Cluster Imaging makes all images better with superior depth map processing. Depth maps are a next-generation foundation technology enabling accurate image element isolation and visual space identification. Depth information will enable new frontiers in consumer and industrial markets. Using a DSLR camera attachment, users will be able to rapidly refocus, add visual effects, and gain instantaneous social media functionality, combining superior high-quality DSLR images with smartphone performance. For the automotive industry, Cluster Imaging will be able to provide high-quality color images with accurate depth, which can help with enhanced object recognition, collision avoidance, and better parking, and will be a valuable part of a comprehensive imaging solution together with radar and LiDAR technologies. For high-end security applications, Cluster Imaging will be able to minimize false positives by using the depths of objects, provide better facial recognition with 3D data, and deliver an integrated 3D view of the scene. For factory automation applications, Cluster Imaging will be able to improve object inspection by providing accurate 3D measurements and assist robotic applications. Other applications may include more information-comprehensive views in drones and superior augmented-reality features.

  2. SPAD Arrays for LiDAR Applications

    Wade Appelman | VP of Sales and Marketing of SensL Technologies

    LiDAR techniques are now being employed in a range of applications, such as robotics, industrial, and automotive, to give a 3D picture of the environment and enable system autonomy.

    LiDAR requirements and system design vary from one application to another, but all require a sensor that can efficiently detect the return signal and generate an image. In some cases this is achieved by using a linear array of sensors with a system that employs scanning on both the emitter and receiver line. However, a more efficient set-up uses a scanning emitter with a static or 'staring' receiver. Therefore, there is a requirement for imaging sensor arrays that provide high-resolution timing information to form the staring receiver element in these systems. This talk will discuss the requirements of a CMOS SPAD array for such applications.

  3. Super High Sensitivity CMOS Image Sensor Technologies

    Eiichi Funatsu | Senior Director of OmniVision

    The performance of CMOS image sensors has dramatically improved during the past decade, achieving higher image quality in most situations. Among these improvements, low-light sensitivity is becoming even more important, especially for security and surveillance applications. Cameras are required to see under low illumination conditions even better than the human visual system. Moreover, under zero-light conditions, near-infrared (NIR) sensitivity is important because NIR illumination is often used. Lower power dissipation is also becoming critical due to the increase in battery-operated camera applications. In this presentation, we will describe OVT's security image sensor technology, which has attained industry-leading low-light sensitivity. This sensitivity was achieved using ultra-low read noise and high quantum efficiency (QE) in both the visible and NIR regions of the spectrum. This technology also achieves superior image quality across a very wide dynamic range.

  4. Morning Networking Break

  5. High Dynamic Range Image Sensors, Techniques and Applications

    Benoit Dupont | Head of Business Development of Pyxalis

    Dynamic range is a challenge for most camera-based applications. The automotive industry requires it for safety reasons. The cinema and photography industries crave extra stops of light for their cameras. The robotics industry needs it to operate in all lighting conditions. Although most sensors and cameras will claim “HDR” in their brochures, this feature covers a wide variety of implementations, and not all of them are suitable for each and every application. In this presentation, we will discuss the various methods and designs used in sensors to achieve high dynamic range and present some examples matched to different application profiles.

  6. Future Image Sensors for SLAM and Indoor 3D Mapping

    Vitaliy Goncharuk | CEO/Founder of Augmented Pixels

    The next generation of mobile phones, AR glasses, home robots (e.g., vacuum cleaners), and other mobile devices will all have features such as 3D mapping, semantic understanding of the environment, and real-time cooperation between several devices, including fully autonomous navigation.

    This talk will discuss the requirements for image sensors and the prospective advancements that will soon enable inexpensive, high-quality implementation of indoor 3D mapping and SLAM in mass-market mobile devices.

  7. Afternoon Networking Lunch

  8. How to Keep Your Next Winning Sensor Design From Being Stifled By Your Test Strategy: Bench Testing vs. Automated Test Equipment

    Lauren Guajardo | Field Applications Engineering Leader of Teradyne

    When the next sensor design is coming out, every advantage is needed to bring the device to market flawlessly. Are opportunities for savings left on the table because testing is overlooked? Device testing is necessary, but it can also be overly complex and resource-draining if not met with a strategic plan. Understanding when to use automated test equipment (ATE) instead of bench testing equipment can reduce time to market and offer cost-of-test savings. This presentation will share case studies that demonstrate the advantages of ATE, including faster and broader test coverage as well as early prototyping feedback to designers. Topics include:

    • Prototype testing with ATE
    • When wafer probing is faster than the bench
    • How to increase test coverage at probe
    • Is your bench FPGA masking failures?
    • What to consider when evaluating ATE and cost of test

Session IV: Future Outlook and Industry Next Steps

This session will feature perspectives from the entire image sensor supply chain, including end-user needs and projections, a panel of sensor manufacturers and more. 

  1. Panel: Industry Needs

    Panelists Include: Intel RealSense and ImmerVision

    With a wealth of knowledge and experience in the image sensors industry, the panelists will share their insights on the current industry, its future, and how different members of the image sensors industry can work together. All of the panelists are driving systems that have image sensors at their core but depend on others to supply them.

    Panelists include:

    • Anders Grunnet-Jepsen, CTO, Intel RealSense
    • Peter Milford, formerly CTO and VP of Engineering at Eyefluence (Google)
    • Patrice Roulet, Vice President, Technology & Co-Founder, ImmerVision

  2. Future Trends in Imaging Beyond the Mobile Market

    Dr. Amos Fenigstein | Senior Director of R&D for Image Sensors of TowerJazz

    Image sensors are now everywhere and, looking at cell phone camera trends, it won’t be long until they have features such as global shutter, wide dynamic range, and high frame rate that are currently available only in high-end cameras or industrial sensors. However, industrial sensors pose challenges beyond those of cell phone cameras, both practical and rooted in pure physics. Industrial sensors come in families ranging from a few megapixels to hundreds of megapixels. These sensors demand supreme global shutter performance (80–100 dB shutter efficiency), fast frame rates, and great resolution (MTF). In turn, this leads to challenging technologies: stitching for large arrays, wafer stacking for the ultimate global shutter, and BSI for high QE and UV capability. Short-term and long-term needs will be discussed along with manufacturing yield challenges.

  3. Photon-Counting Imaging with Quanta Image Sensor for Scientific and Consumer Applications

    Dr. Jiaju Ma | CTO of Gigajot

    The Quanta Image Sensor (QIS) concept was proposed in 2005 as a next-generation image sensor technology. A QIS may contain hundreds of millions to billions of specialized pixels, called “jots,” which are sensitive enough to discern the signal generated by individual photons. By scanning the large array of jots at a high frame rate, the temporal and spatial information of every single photon can be accurately recorded by a QIS, enabling features that are not available with current imaging technologies. QIS is a platform imaging technology that can provide advantages in security, scientific imaging, and many consumer applications. For example, low-light performance is a significant limiting factor for many state-of-the-art image sensors. In contrast, a QIS can detect and resolve the number of photons without the use of electron avalanche gain, even at room temperature, due to its extremely low read noise and dark current. The photon-counting QIS will help scientists in life science, astronomy, and microscopy detect even the smallest amounts of light. It will also benefit photographers by enabling high-quality imaging in sparse-light and high-dynamic-range conditions and provide a gamut of post-processing options.

  4. Closing Remarks and Conclusion of Conference