All agenda times are in Eastern Time.

Trends in (Machine) Vision Markets
2020 was a disaster for all industries! Well, that is not true… In this presentation, Vision Markets shares its view on key target industries for image sensors, with a deeper dive into Machine Vision that is usually available only to Vision Markets’ clients. The insights from its quarterly monitoring of macro-economic indicators and top companies in key industries contribute to the development of sound product roadmaps, sales strategies, business forecasts, and marketing budget allocations. Monitored industries include automotive, industrial automation, semiconductors, semiconductor manufacturing equipment, medical, pharma, food, and others.
Lens Abrasion Analysis
GM requires suppliers to run the GMW17555 test, which replicates a standardized mechanical cleaning action and evaluates its effect on viewing performance over time. For scratched lenses, this method provides measurement values that align subjective and objective evaluations. A scratched lens scatters light, which reduces image contrast. Measuring the contrast degradation at the edges in the corresponding test images allows the subjective evaluation to be converted into an objective one, from which pass/fail criteria can be established.
Sai Vishnu Aluru | Subject Matter Expert for Image Quality, General Motors
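As a rough illustration (not from the talk itself): the contrast loss that abrasion causes can be quantified with the Michelson contrast of an edge patch in a test image. The function name and pixel values below are purely illustrative.

```python
import numpy as np

def michelson_contrast(edge_region: np.ndarray) -> float:
    """Michelson contrast C = (Imax - Imin) / (Imax + Imin) of an image patch."""
    i_max = float(edge_region.max())
    i_min = float(edge_region.min())
    return (i_max - i_min) / (i_max + i_min)

# Simulated edge patch before and after abrasion: scattered light lifts the
# dark side and dims the bright side, lowering the measured contrast.
clean = np.array([[10.0, 10.0, 200.0, 200.0]] * 4)
scratched = np.array([[40.0, 40.0, 170.0, 170.0]] * 4)

print(michelson_contrast(clean))      # ~0.905
print(michelson_contrast(scratched))  # ~0.619
```

Tracking such a contrast metric across repeated cleaning cycles is one way the subjective "looks hazier" judgment becomes an objective pass/fail number.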
The One LIDAR to Rule Them All
Martin Lass | Senior Product Marketing Manager Time-of-Flight, Infineon Technologies AG

There are many different architectural approaches for LIDAR systems that address the 3D imaging needs of the marketplace. An advanced camera-like Flash LIDAR architecture using the ToF principle is shown to eliminate many of the issues associated with scanning architectures: it offers high resolution and a wide FoV, is fully solid-state and reliable with no moving parts, and, because of the simplicity of the architecture, can easily be scaled up to high volume and manufactured at low cost. Details on the capability of the ToF sensor chip, as well as system-level performance including point clouds, will be presented.
Scott Burroughs | Co-Founder and Chief Innovation Officer, Sense Photonics
Smart LiDAR Sensor for ADAS and Autonomous Driving
A LiDAR sensor acting as complementary redundancy is key to ensuring safety in autonomous vehicles. However, the sensor hardware only collects data; an AI perception algorithm is needed to analyze that data and recognize the types of obstacles. With an integrated AI perception algorithm, the MEMS-based Smart LiDAR Sensor can transform traditional LiDAR systems from overpriced information collectors into full data-analysis and comprehension systems, and reduce the performance and cost requirements for the vehicle’s domain controller for OEMs. This presentation will discuss the necessity and core technology of the Smart LiDAR Sensor and how it meets automotive-grade mass-production requirements.
Dr. Leilei Shinohara | Vice President of R&D, RoboSense LiDAR
Networking Break
How Anamorphic Freeform Lens Design Enables the Next Generation of Smartphone Video Cinematic Applications
As screen quality improves, aspect ratios widen from 5:4 and 16:9 to 18:9 and 21:9, and lenses grow wider, smartphones with super-wide-angle cameras are surpassing the performance of traditional DSLRs. Following the massive adoption of super-wide-angle cameras for smartphone photography, the new trend is video. Consumers are producing, viewing, and sharing online movies and videos as these smartphones imitate the theater experience.
The next big differentiator for OEMs is a cinematic movie camera in their smartphone.
In this talk, we will explore how cinematic technology replicates a movie camera in a smartphone, and we will address the complete pipeline from the lens design to image processing software.
Patrice Roulet Fontani | Vice President, Technology and Co-Founder, Immervision Inc.
Short-Wave Infrared Breaking the Status Quo – Identifying Hazards on the Road and Solving the Low Visibility Challenge
One of the major challenges for ADAS & AV is the ability to operate in all weather and lighting conditions. ADAS solution architects are realizing that existing sensor fusion fails to detect hazards under the common low-visibility conditions in which most accidents occur, meaning machine vision algorithms are unable to make safe driving decisions. That the ADAS technology on the market can reduce the likelihood and severity of an accident is undisputed, but until now it has not been able to offer a reliable solution in low visibility. Based on advanced research, TriEye is breaking the sensor-fusion status quo with its industry-first CMOS-based SWIR camera, which is finally able to bridge that gap.
Avi Bakal | Co-Founder & CEO, TriEye
Architectural Choices for Near-IR and SWIR 3-D Image Sensors
The architectural choices for the design of 3-D image sensors in the visible and near-infrared (<1.1 µm) differ from those for short-wave infrared imagers (<1.5 µm). These differences are driven by the available detector choices – typically silicon for near-IR, and InGaAs or Ge for short-wave IR – which translate into different choices for readout designs. Additionally, the wavelength region between 900-1100 nm can be well served by either technology, with different specification tradeoffs. Forza’s talk will use several design examples to demonstrate and highlight these differences.
Barmak Mansoorian | President Emeritus, Forza Silicon
A Review of Depth Sensing Solutions for Automotive Applications
Depth sensing techniques are increasingly being employed in a wide range of markets and use cases to enable algorithms to detect and track objects. Depth sensing is particularly important as part of a complete sensor suite for automotive applications, ranging from basic advanced driver-assistance systems to autonomous driving. Each application has a broad range of system parameters, such as minimum and maximum distance, field of view, frame rate, and depth precision, among many others. System designers have a number of techniques to cover this wide range of use cases, and most vehicles already employ some method of depth sensing by way of stereo cameras, ultrasonic, or radar. But LiDAR sensors offer a unique capability to provide a high-resolution, high-precision depth map of the vehicle’s surroundings, which is key for autonomous perception. In this talk we look at depth sensing techniques for a LiDAR sensor, including time-of-flight and frequency-modulated continuous wave, and give an overview of each approach’s system architecture and its implications for performance and cost. This comprehensive review will help system designers make the right architecture and component-selection decisions for automotive depth sensing applications.

Bahman Hadji | Director of Business Development, SensL Division, ON Semiconductor
Networking Break
How Lidar Can Drive Autonomous Solution Innovation
In this new era of autonomy, exciting use cases for lidar technology are emerging every day. These innovations have the potential to improve people’s lives in many ways, including improved efficiency, access to products and services, and safety. Lidar technologies can be used to build solutions spanning advanced driver assistance systems, autonomous vehicles, mapping, industrial, smart city, drone/unmanned aerial vehicle, robotics, and security applications. This session will examine how groundbreaking lidar technology can address a range of application requirements, as well as explore the technical, sales, and distribution channel service and support that can contribute to market success.
Jon Barad | VP of Business Development, Velodyne Lidar
Ultra Fast Imaging for LiDAR Applications
The monolithic combination of CMOS and CCD makes remarkable things possible. A prerequisite is perfect integration of the CCD functionality into a (standard) CMOS process. For example, CMOS is not well suited to achieving very small transistor leakage currents; this does not matter for most CMOS circuits, but leakage currents are fatal in imaging. CMOS imaging also suffers from low quantum efficiency in the near infrared due to thin epi layers.

But CMOS also has advantages. Besides low cost, both the drivers of the CCD gates and the conversion of charge into voltage or digitized values can be placed in close proximity on the same substrate. This enables clock frequencies for the CCD structures up to the gigahertz range, the prerequisite for ultra-fast gated imaging and applications such as ToF, LiDAR, FLIM, etc. In addition, CCDs allow mathematical operations in the charge domain, such as addition, subtraction, multiplication, and binning. Operations in the charge domain do not generate additional noise components, such as the kTC noise generated in the voltage or current domain.

The presentation covers the imaging requirements for these applications and the technological challenges of implementing CCD in a standard CMOS process, and presents several realized examples showing the achieved results.
Beat De Coi | Founder and CEO, ESPROS Photonics AG, Switzerland
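The kTC noise mentioned above follows directly from the standard reset-noise formula. A minimal sketch (the function name and the 10 fF node capacitance are illustrative assumptions) converts the RMS charge fluctuation sqrt(k_B·T·C) on a sampling capacitor into electrons:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def ktc_noise_electrons(capacitance_f: float, temp_k: float = 300.0) -> float:
    """RMS reset (kTC) noise of sampling onto a capacitor, in electrons:
    sigma_Q = sqrt(k_B * T * C), converted to e- by dividing by q."""
    return math.sqrt(K_B * temp_k * capacitance_f) / Q_E

print(ktc_noise_electrons(10e-15))  # ~40 e- for a 10 fF node at 300 K
```

This is the noise penalty incurred every time charge is converted to voltage, which is why charge-domain addition, subtraction, and binning, as described in the abstract, are attractive: they avoid the conversion step entirely.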
End of Day