Dr Markus Adameck | General Manager of Panasonic Automotive Systems Europe
Ian Riches | VP - Global Automotive Practice of Strategy Analytics
Richard Schram | Technical Manager of Euro NCAP
Timon Rupp | Founder & CEO of The Drivery GmbH
Steve Vozar | Co-Founder and CTO of May Mobility
Jonah Shaver | Product Development Scientist, Sensor Applications of 3M
Dr Hamma Tadjine | Senior Project Leader of IAV
Prof. Dr. Wolfgang Koch | Head of Department Sensor Data and Information Fusion of Fraunhofer Institute for Communication, Information Processing and Ergonomics (FKIE)
Fusion of data streams from heterogeneous sensors is a backbone functionality of AI: it provides the situation pictures needed to operate in complex environments. In particular, multi-sensor data fusion is fundamental to taking appropriate action, e.g. when controlling autonomously operating vehicles. The presentation draws parallels between applications in the automotive and defence domains, “de-hyping” current AI-based methods and weighing their pros and cons by putting them in their proper place within an overarching fusion architecture.
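As an illustrative aside (not the speaker's architecture), the simplest building block of multi-sensor fusion the abstract alludes to is inverse-variance weighting of two independent measurements of the same quantity; a minimal sketch, with made-up radar and lidar range values:

```python
# Minimum-variance linear fusion of two unbiased estimates of one quantity.
# Illustrative sketch only; sensor values and variances are hypothetical.
def fuse(z1, var1, z2, var2):
    """Weight each measurement by the other's variance; the fused
    variance is always smaller than either input variance."""
    w1 = var2 / (var1 + var2)
    fused = w1 * z1 + (1.0 - w1) * z2
    fused_var = (var1 * var2) / (var1 + var2)
    return fused, fused_var

# Hypothetical radar (noisy) and lidar (precise) ranges to one target:
fused, fused_var = fuse(50.0, 4.0, 48.0, 1.0)
print(f"fused range: {fused:.1f} m, variance: {fused_var:.2f}")
```

This is the scalar special case of the Kalman update step that full fusion architectures build on.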
Dr. Oliver Wasenmüller | Teamleader Machine Vision and Autonomous Vehicle of Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
Tapa Ghosh | CEO of Vathys
This talk will discuss how the need for AI compute is growing at an extremely fast pace even as Moore's Law slows and comes to an end. The collision of these two trends will create many challenges and opportunities. The presentation will discuss how innovation in processors can help overcome these challenges in the face of ever-growing amounts of data.
John Stockton | SVP of Operations and Technology Strategy of Aeye
While sensors are critical to helping autonomous vehicles understand their surroundings, the massive amount of data they produce takes time and significant computing power to process. A new generation of AI-enabled smart sensors solves this problem by pushing processing to the edge of the network. The presentation will explore how smart sensors’ use of software-configurable hardware and edge processing enables real-time, customized data collection and reduced control loop latency for optimal performance.
Thorsten Wilhelm | Image Analysis Group of TU Dortmund University
Autonomous systems typically learn from rich datasets of fully annotated images. Creating such datasets, especially for semantic segmentation of complex traffic scenes, where every image pixel has to be annotated with a ground-truth class, requires tremendous human effort and is therefore costly. Further, if substantial characteristics of the appearance change, e.g. new architecture in street scenes, non-western environments, or new sensors, new datasets have to be created and annotated again at high cost.
In this talk, ways of reducing the amount of labelled data are presented. Starting from methods which learn to segment objects from only image-level labels indicating whether an object is present, future directions using no training labels at all are presented. Further, the importance of generative models is stressed. Uncertainty estimates are a preferable way to communicate classification results to domain experts or to the driver, e.g. when an unknown object appears in front of the car.
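One common way to attach such an uncertainty estimate to a per-pixel classification (illustrative only; not necessarily the specific method of the talk) is the entropy of the predicted class distribution, which is low for confident predictions and peaks for a flat distribution:

```python
# Predictive entropy of a softmax distribution as a simple uncertainty
# measure. Illustrative sketch; logit values below are made up.
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def predictive_entropy(logits):
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

confident = predictive_entropy(np.array([8.0, 0.0, 0.0]))  # peaked -> low
uncertain = predictive_entropy(np.array([1.0, 1.0, 1.0]))  # flat -> log(3)
```

A pixel on an unknown object would typically yield a flat distribution, flagging it for the driver or a domain expert.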
Enjoy a drink and use this opportunity to continue the discussions which were started during the conference and network in a relaxed atmosphere.
John Fenske | DLP® Automotive Systems Engineer of Texas Instruments
Audrey Quessada PhD | Team Leader for Conception Algorithms, Comfort & Driving Assistance Systems Business Group of Valeo
As our society becomes more and more aware of the challenges of autonomous cars and their development, we often forget that car interiors can also bring a new level of experience to drivers and passengers. At Valeo, we are currently developing products that monitor drivers in order to detect their driving behaviour and their physical and mental ability to drive. We are also monitoring passengers in order to increase their safety, for instance to optimize airbag triggering.
The use of gradient boosting methods and LSTMs to detect the driver's drowsiness through the fusion of gaze detection, head-position detection, and vehicle data will be discussed.
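A minimal sketch of the gradient-boosting half of such a pipeline, assuming hypothetical fused features and synthetic labels (this is not Valeo's actual feature set or model):

```python
# Classify drowsiness from a hypothetical fused feature vector with
# gradient boosting (scikit-learn). Data is synthetic; feature names
# and the label rule are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Assumed features per time window: [mean gaze pitch, gaze variance,
# head-nod rate, eyelid-closure fraction, steering reversal rate,
# lane deviation]
X = rng.normal(size=(n, 6))
# Synthetic label: "drowsy" when eyelid closure and head nods dominate
y = ((0.8 * X[:, 3] + 0.6 * X[:, 2] - 0.3 * X[:, 4]) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

In the full system described in the abstract, an LSTM would additionally model the temporal evolution of these signals across windows.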
Henrik Lind | Chief Research Officer of Smart Eye
This presentation covers how driver monitoring is deployed in cars of today and tomorrow, providing enhanced driver safety combined with enhanced comfort at mid and higher levels of autonomy. It will also give an overview of the principles of driver monitoring and the sensor properties of interest in the deployment of DMS and cabin monitoring systems.
Axel Koehler | Principal Solution Architect of Nvidia
This talk discusses how a cloud-based simulation platform is being used to provide an accurate representation of the autonomous vehicle in the real world: bit-accurate, timing-accurate, and capable of driving millions of miles in the cloud in order to validate autonomous vehicles.
Koen De Langhe | Business Line Manager 3D of Siemens PLM Software Simulation & Test Solutions
Radar-based ADAS solutions are growing at a substantial rate in the automotive market, since they play a fundamental role in increasing passenger and pedestrian safety. To support radar designers in reducing design time and cost, meeting stringent vehicle safety norms, and reducing risks in final installed radar performance, more effective and efficient industrial approaches have to be devised, both in terms of simulation (working at 77 GHz poses several numerical challenges) and testing.
This paper presents the IDS / Siemens PLM Software technical approach and CAE tools, based on a synergetic use of high-fidelity simulation models and measurement set-ups to support the whole design, optimization and verification cycle, from stand-alone radar antenna performance up to verification of the radar's on-car installed operational performance.
Daniel Van Blerkom | Chief Technology Officer & Co-Founder of Forza Silicon
SPAD sensors for LIDAR combine high-speed digital logic with extremely sensitive electro-optical devices. Determining SPAD dark count rate and breakdown voltage variation is critical in order to provide process tuning feedback. To separate SPAD yield and performance from the digitally processed output, special considerations must be taken in the sensor design and test system construction. Recent characterization and test experiences with SPAD sensors will be presented.
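A simplified sketch of one characterization step the abstract mentions, estimating per-pixel dark count rate (DCR) from dark-frame event counts; the array size, integration time, and DCR distribution below are assumptions, and this is not Forza Silicon's actual test flow:

```python
# Estimate per-pixel DCR of a SPAD array from synthetic dark-frame
# counts. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
rows, cols = 32, 32
t_int = 0.1  # assumed integration time of one dark frame, seconds

# Assume a log-normal spread of true DCR across the array (Hz)
true_dcr = rng.lognormal(mean=np.log(500), sigma=0.4, size=(rows, cols))
# Dark counts per pixel are Poisson-distributed around DCR * t_int
dark_counts = rng.poisson(true_dcr * t_int)

dcr_est = dark_counts / t_int                   # per-pixel DCR estimate, Hz
median_dcr = np.median(dcr_est)
hot_pixels = int(np.sum(dcr_est > 10 * median_dcr))  # outlier screen for yield
print(f"median DCR: {median_dcr:.0f} Hz, hot pixels: {hot_pixels}")
```

Longer integration (or averaging many frames) tightens the Poisson error bars on each pixel's estimate, which is why DCR screening dominates dark-frame test time.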
Dr. Tomoko Ohtsuki | Senior Product Line Manager of Lumentum
This presentation will examine different types of lasers used for 3D depth-sensing applications in both the automotive and consumer electronics markets, with a focus on recent developments in high-power VCSEL technology to enable reliable operation under the environmental stresses required for automotive-qualified products. We will also introduce a new 1.5 µm Distributed Bragg Reflector (DBR) laser prototype which will enable frequency-modulated continuous-wave (FMCW) coherent LiDAR at >150 m range.
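For context, the basic FMCW range relation that such a coherent lidar relies on can be sketched as follows; the chirp parameters are illustrative, not Lumentum specifications:

```python
# Range from the beat frequency of a linear FMCW chirp:
#   R = c * f_beat * T_chirp / (2 * B)
# Chirp bandwidth and duration below are assumed example values.
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat_hz, chirp_bandwidth_hz, chirp_time_s):
    """Round-trip delay shows up as a beat between the outgoing and
    returned chirp; scale it back to one-way range."""
    return C * f_beat_hz * chirp_time_s / (2.0 * chirp_bandwidth_hz)

# Example: a 1 GHz chirp over 10 us; a 10 MHz beat maps to ~15 m
r = fmcw_range(10e6, 1e9, 10e-6)
print(f"range: {r:.1f} m")
```

The same relation shows why laser coherence matters at >150 m: the beat is only well defined while the return still interferes cleanly with the local chirp.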