Ian Riches | VP - Global Automotive Practice of Strategy Analytics
In this presentation, Ian will explain his view of how the market for both ADAS and automated driving is evolving, and what that means for sensor demand. As well as presenting this industry-side view, he will showcase some of the findings of Strategy Analytics' User Experience research and explore some of the high and low points of consumer attitudes and experiences.
Dr. Rainer Denkelmann | Global Advanced Chief Engineer Vehicle Systems & Architecture of Aptiv Services Deutschland GmbH
Prof. Bernd Jähne | Heidelberg Collaboratory for Image Processing (HCI), Heidelberg University, Chair, EMVA 1288, and Member of Board of EMVA
The EMVA 1288 standard of the European Machine Vision Association (EMVA, www.emva.org) is now well established and globally accepted, and has proved to be a very useful and flexible tool for investigating the properties of cameras for industrial machine vision. With the extension to HDR imaging, a more detailed characterization of dark current and its temperature dependency, and further extensions, the standard will also become well suited for automotive image sensors. This talk focuses on:
Richard Schram | Technical Manager of Euro NCAP
Ross Jatou | VP & GM - Automotive Sensing Division of ON Semiconductor
Bharanidhar Duraisamy | Development Engineer – Radar, Sensor Fusion Expert of Daimler
Roland Mattis | VP Partnerships of Fastree3D
Until now, 3D vision, usually based on LiDAR, has been reserved for high-end vehicles and robotaxis due to limited volume availability (a result of product complexity) and price points in the thousands of dollars. The emergence of ultra-fast, high-resolution flash LiDAR sensors allows 3D vision in less than 10 ms at below 50 USD, enabling mass-market application of this technology.
Michael Kiehn | Director Sensor Development of Ibeo Automotive Systems GmbH
Alongside e-mobility, automated and autonomous driving are at the top of the automotive industry's agenda. In the field of automated and autonomous driving, environmental perception plays a key role. For today's Level 2 applications, radar, video, and ultrasonic are established sensor technologies for environmental perception. In addition to these established technologies, most players in automated and autonomous driving see a need for LiDAR. This is because LiDAR can complement the existing sensor portfolio and thus help to overcome flaws of the existing technologies. Another important point is that LiDAR, as a very capable technology, is required for safety reasons as a redundant sensing path. Adding LiDAR to the existing sensing system of vehicles raises questions about its integration into the vehicle's body, into the processing structure, and into the environmental model. These topics are not yet fully understood by automotive OEMs, but the questions need to be answered by the industry very soon.
Dr. Leilei Shinohara | Vice President & CoPartner of RoboSense LiDAR
The safety of autonomous driving is paramount. For ADAS, it is essential to ensure safety while assisting the driver. As the first announced L3 autonomous driving vehicle, the Audi A8 has a complete assistance system that includes a variety of sensors such as cameras, ultrasonics, radar, and LiDAR. Such redundancy is key to ensuring ASIL-D-level perception safety. In particular, cameras and radars currently have physical limitations, making LiDAR the most important sensor for enabling a higher-level-safety autonomous driving system. The pre-mass-production version of our MEMS-based Smart LiDAR system has just been released. In this presentation, we will present our development and results on automotive-grade LiDAR and discuss the challenges encountered during product development. A perception error while driving can have serious consequences, which would also affect the development of autonomous driving and the ADAS industry; LiDAR as complementary redundancy is key to ensuring that each sector is safe. At present, the development of LiDAR is at an early stage, and its progress is closely tied to package size, vehicle integration, mass production, reliability, and cost.
Sebastian Bauer | Marketing Manager of Broadcom Inc
APD detector arrays in CMOS offer high frame rates. An important criterion for the range of scanning LiDAR sensors is the detectors' signal-to-noise ratio. The better the signal quality, the lower the evaluation time per pixel. This reduces the required computing power, improves the frame rate, and thus enables higher travel speeds. CMOS-based APD arrays are setting new standards here.
Dr Pawel Malinowski | Program Manager “User Interfaces & Imagers” of Thin Film Electronics Group, imec
Photodetector pixel stacks based on thin-film absorbers enable facile deposition directly on top of CMOS readout circuits. Polymers and quantum dots with absorption beyond silicon's cut-off wavelength allow for infrared imaging in a range typically accessed with epitaxially grown compound semiconductors. Monolithic integration of thin-film photodiodes makes wafer-scale fabrication possible, which means higher throughput and thus lower-cost image sensors. At the same time, pixel density and resolution are limited only by the readout circuit, resulting in compact-form-factor cameras with high accuracy. In this presentation, we will summarize recent progress on pixel and integration concepts and show our photodetector and image sensor results. Current imec prototypes demonstrate imaging at 940, 1050 and 1450 nm wavelengths, with a roadmap towards arrays with pixel-level multispectral sensitivity. This will lead to cost-efficient cameras for applications in security (low-light), machine vision (sorting), consumer (structured light) and automotive (see-through, ranging).
Rosa Maria Vinella | Project and Technical Account Manager of Xenics NV
Many gated imagers have been demonstrated, but few in the SWIR range with an InGaAs-based imager. The need for extended perception capability with a smart, reliable, and cost-efficient SWIR sensing system has become a requirement to meet the targets of upcoming autonomous driving. To fulfil the increasing demand from the automotive and industrial markets, Xenics has started the design of a novel SWIR gated camera based on a high-speed pulsed laser source. The SWIR gated camera starts integrating the pulses after a variable delay calculated from the range distance to the target. The range depth is divided into slices whose extent defines the depth resolution of the gated image. However, unlike standard laser gated imagers, the Xenics gated imager integrates several reflected pulses per slice, accumulating photons over multiple integrations in order to trade off the required laser illuminator power against SNR performance.
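As a rough illustration of the timing involved in range gating (a generic sketch from the round-trip time of light, not Xenics' actual implementation, and with made-up example numbers), the gate delay and gate width for a range slice follow directly from the slice geometry:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gate_timing(slice_start_m, slice_depth_m):
    """Return (delay_s, gate_width_s) for one range slice.

    The camera opens its gate after the round-trip delay to the
    near edge of the slice and keeps it open long enough for
    light reflected from the far edge to return.
    """
    delay = 2.0 * slice_start_m / C
    width = 2.0 * slice_depth_m / C
    return delay, width

# Example: a 15 m deep slice starting at 50 m
delay, width = gate_timing(50.0, 15.0)
# delay is on the order of 334 ns, width about 100 ns
```

Accumulating several laser pulses per slice simply repeats this delay/gate cycle before reading out, trading acquisition time for SNR at a given illuminator power.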
The 2020 IS Auto Europe drinks reception will be confirmed shortly.
Celine Baron, Staff Automotive Product Marketing Manager, OmniVision
Keita Suzuki | Deputy General Manager of SSS Europe Design Centre of Sony Europe B.V.
Boaz Arad | CTO of Voyage 81
Hyperspectral imaging has well-established benefits for object detection and material segmentation, but this imaging modality has not been applicable to automotive vision due to both the cost of spectral imaging systems and constraints on their operation (such as the need for spatial scanning). In this talk, Voyage 81 CTO Boaz Arad will review methods for recovering hyperspectral information from existing low-cost image sensors, such as RGB cameras. These methods allow the acquisition of video-rate hyperspectral images, which can then be used to solve challenging object detection scenarios for ADAS and autonomous vehicles. The talk will review such methods for pixel-level material-sensing object detection, as well as spectrally motivated methods for low-light imaging.
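To make the idea of spectral recovery concrete: the simplest family of such methods learns a mapping from 3-channel RGB responses back to many-band spectra from paired training data. The toy sketch below uses a plain least-squares linear map on synthetic data; it is purely illustrative (the band count, data, and camera response are invented here, and practical methods such as those in the talk are far more sophisticated, e.g. sparse-coding or CNN-based):

```python
import numpy as np

rng = np.random.default_rng(0)

n_train, n_bands = 500, 31                 # e.g. 31 bands over 400-700 nm
spectra = rng.random((n_train, n_bands))   # synthetic training spectra
ssf = rng.random((n_bands, 3))             # synthetic camera spectral response
rgb = spectra @ ssf                        # simulated RGB observations

# Least-squares fit of the 3 -> 31-band linear recovery operator W,
# so that spectra ~ rgb @ W
W, *_ = np.linalg.lstsq(rgb, spectra, rcond=None)

recovered = rgb @ W                        # recover spectra from RGB alone
err = np.abs(recovered - spectra).mean()   # mean absolute recovery error
```

Because the map is fitted on training pairs, recovery quality hinges on how well real-world spectra are constrained to a low-dimensional set; that prior is what makes 3-to-31-channel recovery feasible at all.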
Speaker to be confirmed
Modar Alaoui | Founder and CEO of Eyeris
This session covers the latest advancements in, and advantages of, fusing a portfolio of vision AI neural networks using multiple cameras with other types of in-cabin sensor data in real time, enabling new real-world use cases for enhanced safety and optimized comfort. It will further cover how the next generation of AI chips will enable efficient inference capable of generating new types of data and monetization models in this third living space.
Upton Bowden | Advanced Technology Strategic Planning Director of Visteon
A comprehensive look at the market for automotive driver monitoring. What solutions are on the market? How are the various solutions implemented? What is the future market growth for Driver Monitoring solutions? How could legislation affect driver monitoring adoption rates? What are the use cases for Driver Monitoring? Beyond Driver Monitoring, technology advances enable facial recognition. What additional features can facial recognition unlock? At some point, Driver Monitoring will evolve into full Cockpit Monitoring. What additional technology is required to achieve this? What incremental technology is required? How will Cockpit Monitoring benefit passengers in future vehicles?