2018 Agenda

The hottest topics in digital imaging technology

Focusing on Vision and Action Within Image Sensors, this year's Image Sensors Americas program will feature two days of presentations and panel discussions around the latest leading-edge sensor technology, current capabilities of image sensors, machine learning, deep learning, AI, and more.

October 11, 2018 | Day 1

Welcome and Opening Remarks

  1. Registration Opens

  2. Welcome and Opening Remarks

Session I: Innovation in Action

These presentations will cover the changing market, the opportunities it presents for image sensor applications, and how image sensors are changing to meet these new applications.

  1. Is Autonomous Driving Driving the Development of Image Sensors?

    Ronald Mueller | CEO of Vision Markets and Associate Consultant of Smithers Apex

    What are our hopes and expectations of image sensors? An overview of how the market is changing and moving forward and what opportunities are emerging for image sensors, including M&A activity.

  2. Industry Outlook

    Rudy Burger | Managing Partner of Woodside Capital Partners

    Abstract to come

  3. State of the Art Uncooled InGaAs Short Wave Infrared Sensors

    Dr. Martin H. Ettenberg | President of Princeton Infrared Technologies

    Short wave infrared (SWIR) imaging has made tremendous advancements in the last several years, with megapixel imagers now commercially available. The ability to image beyond the visible spectrum has many advantages, including the ability to see eye-safe lasers, detect water, and better penetrate fog and smoke for long-range imaging. These imagers are capable of thermally imaging objects at temperatures above 100 °C through glass, which cannot be done with today's uncooled microbolometer technology. Beyond imaging, various spectroscopic applications are also possible in the SWIR that would be very difficult or impossible in the visible wavelength band. These imagers cover 400 nm to 1700 nm, with some specialty arrays capable of detection out to 2200 and 2600 nm, which enables hyperspectral imaging to analyze plant health or find defective materials. Pixel pitch has steadily been reduced from 25 µm to 8 µm today, allowing imagers to be produced at a significantly lower cost. In addition to reduced pitch, imagers are now being manufactured on 100 mm wafers, with 150 mm production lines becoming available soon. Mass production will reduce costs further, making them comparable to uncooled long wave infrared technology. SWIR imagers have been leveraging the technology developed for visible CMOS imagers to produce more sensitive imagers with higher speeds, greater capabilities, and built-in A/Ds with 12 to 14 bits of resolution. Applications of uncooled SWIR imagers will be discussed along with future directions for the technology, such as avalanche photodiode arrays for LIDAR.

  4. Super-Wide-Angle Cameras: The Next Smartphone Frontier Enabled by Miniature Lens Design and the Latest Sensors

    Patrice Roulet Fontani | Vice President, Technology and Co-Founder of ImmerVision

    Smartphone cameras keep improving and threaten to replace DSLRs for consumer, prosumer, and even professional applications; Steven Soderbergh announced he will shoot his next movie with a smartphone. Now, multiple smartphone lenses and cameras are "de rigueur" to improve sharpness, detail, contrast, and sensitivity, and each new generation of phone outperforms the last. Smartphone camera field of view, which is generally limited by the main (rear) camera, is the next area for innovation and improvement. Whereas fields of view greater than 90° have been restricted to DSLRs and action cameras, advances in lens design, manufacturing, and software capabilities are removing the barriers to super-wide-angle adoption. Miniaturisation, high-resolution lenses and sensors, and easy-to-integrate image processing algorithms allow advanced industrial and system design, and the field of view race started with the most recent smartphone product launches. In this talk, we will explore the latest super-wide-angle technology, lens design, and image processing to enable your next smartphone design to meet field of view expectations and perform like a professional camera.

  5. Morning Networking Break

  6. Presentation by Imec

    Bert Gyselinckx | Vice President & General Manager of Imec

    Abstract to come

Session II: Supporting Technology

This session will explore technology being developed to meet current industry needs and address challenges.

  1. SPAD vs. CMOS Image Sensor Design Challenges – Jitter vs. Noise

    Dr. Daniel Van Blerkom | CTO & Co-Founder of Forza Silicon

    SPAD sensor design faces unique design challenges due to high voltage biasing and the high-speed asynchronous nature of the detectors.  However, there are some crucial commonalities between CMOS image sensors and SPAD sensors that admit similar design approaches and architectural trade-offs.  Key to the performance of SPAD sensors is the detector jitter, which parallels the low noise requirements in CMOS image sensor design.  In this talk, we discuss where we can leverage CMOS image sensor experience in SPAD sensor development, and where the approaches diverge.

  2. sCMOS Technology: The Most Versatile Imaging Tool in Science

    Dr. Scott Metzler | PCO Tech

    Abstract to come. 

  3. Afternoon Networking Lunch

  4. Toward Monolithic Image Perception Devices (MIPD)

    Guy Paillet | Co-Founder & CEO of General Vision Inc.

    Advances in semiconductor technologies have made possible a single-die device capable of learning and recognizing "visual scenes" in real time at very low power (a few mW). The silicon AI technology used in the MIPD allows real-time learning and recognition at a cost of a few dollars and within a few square millimeters of silicon. This has the potential to disrupt the whole market by making image recognition fully commoditized and trainable by the user. Notably, a "discrete component" version has been used for fish inspection (more than 50 systems running 24/7), trained and operated by deck-hands on Scandinavian fishing vessels since 2003, fully validating the approach.

  5. From The Outside In

    Richard Neumann | CEO of Sub2R

    A look at the challenges faced by a band of industry outsiders who have built the first consumer open-architecture imaging platform. It began with solving our own technical problem. Building a sensor-agnostic digital camera from scratch – how hard can that be? Documentation – what do you mean it is incomplete, wrong, or non-existent? What are those undocumented registers for? We're only going to buy a few chips today; think we can ask one of your engineers a question? Writing every algorithm from scratch. What we've learned from over 1,000 people around the world about what they want in a digital camera. What's next: implementing a convolutional neural network (AI) in parallel with a linear legacy image processing pipeline.

Session III: Machine Learning, Deep Learning, AI, and More

This session will explore how machine learning, deep learning, and AI are being applied alongside image sensors, from 3D eye tracking to depth sensing and computer vision.

  1. Using Depth Sensing Cameras for 3D Eye Tracking

    Kenneth Funes Mora | CEO and Co-founder of Eyeware

    Machines such as cars, robots, computers or smartphones are not able to accurately track the human gaze in 3D. Sensing the user's attention is a core requirement in order to move towards a natural human-machine interaction. Eyeware is proposing a novel approach for solving the attention sensing problem using depth sensing cameras as sensors, as they are increasingly integrated in consumer devices like smartphones, laptops, robots and cars. The advantages of 3D eye tracking are illustrated with use cases in robotics, automotive, research, retail, and more.

  2. Application Specific

    Presentation to be Confirmed

  3. Depth Sensing (and Computer Vision)

    Presentation to be Confirmed

  4. Closing Remarks For The Day

  5. Day 1 Networking Reception

October 12, 2018 | Day 2

Registration and Welcome Remarks

  1. Registration Opens & Continental Breakfast

  2. Welcome & Opening Remarks

Session IV: Addressing Industry Needs

This session will explore the current capabilities of image sensors and what challenges need to be met in order to improve their precision and performance for various applications. 

  1. The Use of High Resolution Images with Depth Maps Generated by Camera Clusters

    Semyon Nisenzon | CEO of Cluster Imaging

    All traditional camera images are two-dimensional and visually flat. From these images it is impossible to quickly and accurately make distance and size measurements to create value-added layered data. Cluster Imaging makes all images better with superior depth map processing. Depth maps are a next-generation foundation technology enabling accurate image element isolation and visual space identification. Depth information will enable new frontiers in consumer and industrial markets. Using a DSLR camera attachment, users will be able to rapidly refocus, add visual effects, and gain instantaneous social media functionality, enabling superior high-quality DSLR images with smartphone performance. For the automotive industry, Cluster Imaging will be able to provide high-quality color images with accurate depth, which may help with enhanced object recognition, collision avoidance, and better parking, and will be a valuable part of a comprehensive imaging solution together with radar and LiDAR technologies. For high-end security applications, Cluster Imaging will be able to minimize false positives by using the depth of objects, provide better facial recognition with 3D data, and deliver an integrated 3D view of the scene. For factory automation applications, Cluster Imaging will be able to improve object inspection by providing accurate 3D measurements and assist robotic applications. Other applications may include more information-rich views from drones and superior augmented-reality features.
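    To make the depth map concept concrete, the following is a minimal Python sketch of the standard relationship that multi-camera depth processing builds on, depth = focal length × baseline / disparity; the focal length, baseline, and disparity values below are illustrative assumptions, not Cluster Imaging parameters.

    import numpy as np

    def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
        """Convert a disparity map (pixels) to a depth map (meters)
        using the pinhole-stereo relation depth = f * B / d."""
        disparity = np.asarray(disparity_px, dtype=np.float64)
        depth = np.full(disparity.shape, np.inf)  # zero disparity -> effectively infinite range
        valid = disparity > 0
        depth[valid] = focal_length_px * baseline_m / disparity[valid]
        return depth

    # Illustrative values only: a 1400-pixel focal length, a 10 cm baseline,
    # and a tiny 2x2 disparity patch.
    example_disparity = np.array([[20.0, 10.0], [5.0, 0.0]])
    print(disparity_to_depth(example_disparity, focal_length_px=1400.0, baseline_m=0.10))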

  2. SPAD Arrays for LiDAR Applications

    Carl Jackson | CTO and Founder of SensL Division, OnSemi

    LiDAR techniques are now being employed in a range of applications, such as robotics, industrial, and automotive, to give a 3D picture of the environment and to enable system autonomy.

    The LiDAR requirements and system design vary from one application to another, but all require a sensor that can efficiently detect the return signal and generate an image. In some cases this is achieved by using a linear array of sensors with a system that employs scanning on both the emitter and receiver lines. However, a more efficient set-up is one that uses a scanning emitter with a static or 'staring' receiver. There is therefore a requirement for imaging sensor arrays that provide high-resolution timing information to form the staring receiver element in these systems. This talk will discuss the requirements of a CMOS SPAD array for such applications.
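    As a minimal illustration of why the staring receiver needs high-resolution timing, the sketch below (Python, with made-up numbers) uses the standard time-of-flight relation, range = c × t / 2, so the timing resolution of the SPAD array directly sets the achievable range resolution.

    # Speed of light in m/s.
    C = 299_792_458.0

    def time_of_flight_to_range(t_seconds):
        # Round-trip time to one-way range: d = c * t / 2.
        return C * t_seconds / 2.0

    def range_resolution(timing_resolution_seconds):
        # A timing bin of dt corresponds to a range bin of c * dt / 2.
        return C * timing_resolution_seconds / 2.0

    # Illustrative numbers only: a ~667 ns round trip is ~100 m of range,
    # and a 100 ps timing bin gives roughly 1.5 cm of range resolution.
    print(time_of_flight_to_range(667e-9))
    print(range_resolution(100e-12))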

  3. Super High Sensitivity CMOS Image Sensor Technologies

    Eiichi Funatsu | Senior Director of OmniVision

    The performance of CMOS image sensors has dramatically improved during the past decade, achieving higher image quality in most situations. Among these improvements, low light sensitivity is becoming even more important, especially for security and surveillance applications. Cameras are required to see under low illumination conditions even better than the human visual system. Moreover, under zero light conditions, near infrared (NIR) sensitivity is important because NIR illumination is often used. Lower power dissipation is also becoming critical due to the increase in battery-operated camera applications. In this presentation, we will describe OVT's security image sensor technology that has attained industry-leading low light sensitivity. This sensitivity was achieved using ultra-low read noise and high quantum efficiency (QE) in both the visible and NIR regions of the spectrum. This technology also achieves superior image quality across a very wide dynamic range.
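    As a rough illustration of why read noise and quantum efficiency dominate low-light performance, here is a back-of-the-envelope SNR model in Python; the photon counts, QE, read noise, and dark signal values are illustrative assumptions, not OmniVision figures.

    import math

    def low_light_snr(photons, qe, read_noise_e, dark_e):
        # Signal electrons from the scene, with shot noise, read noise, and dark
        # signal combined in quadrature: SNR = QE*N / sqrt(QE*N + read_noise^2 + dark).
        signal = qe * photons
        noise = math.sqrt(signal + read_noise_e ** 2 + dark_e)
        return signal / noise

    # At 20 photons per pixel, raising QE and halving read noise visibly lifts SNR.
    print(low_light_snr(20, qe=0.6, read_noise_e=2.0, dark_e=0.5))  # ~3.0
    print(low_light_snr(20, qe=0.9, read_noise_e=1.0, dark_e=0.5))  # ~4.1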

  4. Morning Networking Break

  5. Future Image Sensors for SLAM and Indoor 3D Mapping

    Vitaliy Goncharuk | CEO & Founder of Augmented Pixels

  6. Market Update Panel

    Top Image Sensor Manufacturers

    Hear their perspectives on the current market and their hopes and goals for the future of image sensors.

  7. Afternoon Networking Lunch

Session V: Future Outlook and Industry Next Steps

This session will feature perspectives from the entire image sensor supply chain, including end-user needs and projections, a panel of sensor manufacturers and more. 

  1. Future Trends in Imaging Beyond the Mobile Market

    Dr. Amos Fenigstein | Senior Director of R&D for Image Sensors at TowerJazz

    Image sensors are now everywhere and, looking at cell phone camera trends, it looks like it won't take long until they have features such as global shutter, wide dynamic range, and high frame rate that are currently available only in high-end cameras or industrial sensors. However, industrial sensors present challenges beyond those of cell phone cameras, stemming from both practical constraints and pure physics limitations. Industrial sensors come in families ranging from a few megapixels to hundreds of megapixels. These sensors demand supreme global shutter performance (80-100 dB shutter efficiency), fast frame rate, and great resolution (MTF). In turn, this leads to challenging technologies: stitching for large arrays, wafer stacking for the ultimate global shutter, and BSI for high QE and UV capabilities. The short-term and long-term needs will be discussed along with the manufacturing yield challenges.

  2. Photon-Counting Imaging with Quanta Image Sensor for Scientific and Consumer Applications

    Jiaju Ma | CTO of Gigajot

    The Quanta Image Sensor (QIS) concept was proposed in 2005 as the next-generation image sensor technology. A QIS may contain hundreds of millions to billions of specialized pixels, called "jots," and these jots are sensitive enough to discern the signal generated by individual photons. By scanning the large array of jots at a high frame rate, the temporal and spatial information of every single photon can be accurately recorded by a QIS, enabling features that are not available with current imaging technologies. QIS is a platform imaging technology and can provide advantages in security, scientific imaging, and many consumer applications. For example, low-light imaging is a significant limiting factor for many state-of-the-art image sensors. In contrast, a QIS can detect and resolve the number of photons without the use of electron avalanche gain, even at room temperature, thanks to its extremely low read noise and dark current. The photon-counting QIS will help scientists in life science, astronomy, and microscopy detect even the smallest amounts of light. It will also benefit photographers by enabling high-quality imaging under sparse-light and high dynamic range conditions and by providing a gamut of post-processing options.
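    To illustrate the photon-counting idea in the abstract, here is a minimal Python sketch of how many fast single-bit jot frames can be accumulated into a gray-scale estimate; the frame count and flux values are made up for illustration and do not describe Gigajot's implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_jot_frames(photon_flux, n_frames):
        # Each frame is a binary detection map: a jot reads 1 if it caught
        # at least one photon during its (very short) frame time.
        photons = rng.poisson(photon_flux / n_frames, size=(n_frames,) + photon_flux.shape)
        return (photons > 0).astype(np.uint8)

    def reconstruct(bit_frames):
        # Summing the single-bit frames over time recovers an intensity estimate.
        return bit_frames.sum(axis=0)

    # Illustrative mean photons per pixel per full exposure.
    flux = np.array([[0.5, 2.0], [8.0, 20.0]])
    frames = simulate_jot_frames(flux, n_frames=256)
    print(reconstruct(frames))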

  3. Panel: Future Projections and Industry Call to Action

    A panel of camera manufacturers and system houses discuss their needs, the performance metrics they expect from sensor designers, and broader industry requirements.

  4. Closing Remarks and Conclusion of Conference