Willard Tu | Associate Editor of Sensor Magazine
Romain Clement | Head of Hardware of Lyft Level 5 Engineering Center
Dr Bakhtiar Litkouhi | Technical Fellow and Manager of Automated Driving and Vehicle Control Systems of General Motors
Frank Gruson | Head of Systems Engineering, Driver Assistance Systems of Continental
Ian Riches | Global Automotive Practice of Strategy Analytics
The automotive industry is currently experiencing an unprecedented rate of change. The development of automated driving is challenging the core value proposition of many car makers: the driving experience itself. Large players from the consumer and IT sectors are seeking to leverage expertise in technologies such as artificial intelligence in the automotive arena. Traditional Tier Ones are seeing their automaker customers seek to develop more of their own IP, and are starting to talk of their need for “manufacturing partners”. Everybody is looking to monetize “big data”. In his presentation, Ian will draw on his more than 20 years’ experience as an automotive analyst to paint his vision of what a successful automotive company of the 2020s will look like.
Discussion including Toyota, University of Warwick, Lyft, Strategy Analytics and more.
Rémi Delefosse, Senior Engineer, Toyota Motor Europe
Valentina Donzella, SCAV MSc Leader, University of Warwick
Romain Clement, Head of Hardware, Lyft Level 5 Engineering Center
Ian Riches, Director - Global Automotive Practice, Strategy Analytics
Dr Markus Adameck | General Manager Development of Panasonic Industrial Europe GmbH
Requirements for indirect camera vision and passive camera sensing.
Florian Baumann | Technical Director of ADASENS Automotive GmbH
Alexander Braun | Professor of Physics of University of Applied Sciences Düsseldorf
Current ADAS and AD camera systems are tested and validated using (among other methods) a mix of Software-in-the-Loop (SiL) and Hardware-in-the-Loop (HiL) setups. In both cases, simulated (i.e. artificial) test data helps verify the limits of the camera system and its algorithms, and allows for parameter studies that are not feasible with a real prototype on the road. A central question for these simulation environments is how realistically they represent the real-world test drives they replace, and ultimately: How realistic do simulations have to be?
Anand Gopalan | CTO of Velodyne LiDAR, Inc.
Carl Jackson | CTO and Founder of SensL Technologies
Celine Canal | Senior Project Leader & ADAS Application Engineer of Quantel Laser
LiDAR is a key technology for bringing autonomous vehicles onto the roads. Current technologies designed to achieve a long detection range, a high resolution and a large field of view are based on scanning methods. However, most of them incorporate moving optical parts, which tends to reduce system reliability. This presentation will discuss cutting-edge LiDAR architectures without any scanning elements, based on innovative semiconductor illumination solutions developed by Quantel Laser.
Ali Osman Ors | Director, AI and Machine Learning, Automotive Microcontrollers and Processors of NXP Semiconductors
Currently, most autonomy features in production vehicles are limited to driver assistance systems classified as L1-L2 autonomy; few vehicles are equipped with L3 systems, and L4-L5 capability is mostly limited to demonstration platforms and test vehicles. This presentation will provide a summary of typical L1-L2 systems and of how automotive manufacturers and Tier 1s are looking to move higher up the autonomy capability scale. We will also focus on the transformative impact that deep learning and autonomous systems are having on automotive sensor data processor architectures.
Vjekoslav Matic | Director Application and Systems Engineering, Automated Driving Sensors of Infineon Technologies
Advanced driver assistance systems in modern cars and future fully autonomous vehicles require highly accurate and reliable 3D information about the environment and the passenger cabin interior, acquired in real time. An essential part of the sensor cluster are ranging sensors, which provide distance and depth information, nowadays not just as simple obstacle detections but as high-resolution 3D image streams. Such high functional complexity and performance targets, paired with size, cost and safety constraints, are achievable only with highly integrated semiconductor SoC (system-on-chip) components. Due to this high level of integration, such SoCs largely define the most important sensor module benchmarks, which will be demonstrated on examples of radar, LiDAR and 3D imaging time-of-flight (ToF) systems.
Eric Dujardin | Manager of Imaging Architecture of NVIDIA
Camera sensors are ubiquitous in autonomous driving and are mounted at different angles on the car, facing either outward or into the cabin. Cameras have multiple uses, such as detecting obstacles, replacing mirrors, providing additional views for the driver, and alerting the driver to otherwise overlooked obstacles.
This presentation introduces the new image processing capabilities of the NVIDIA Tegra chip. It highlights how the Image Signal Processor (ISP) integrates features that address specific challenges of High Dynamic Range (HDR) image sensors, such as exposure fusion, local tone mapping, and handling of the different color filter arrays used to increase light sensitivity. In addition, image processing must adapt to both human vision, with pleasing image enhancement, and machine vision, with minimal, targeted processing to avoid false detections.
Reinhold Greifenstein | Product Manager Automotive Tools of b-plus GmbH
The presentation will cover technologies and methods for the validation of upcoming multi-sensor cognitive systems in vehicles and will offer the audience an overview of the challenges and learnings from many years of validating Tier 1 ECUs.
The technologies shown will cover use cases ranging from benchmarking image sensors to test drive data acquisition, integrity monitoring of data streams and, last but not least, the reinjection of sensor data for virtual validation.
You will benefit from a focus on imager technologies and demonstrations of best practice.
Mayank Mangla | ADAS Imaging Architect of Texas Instruments, USA
Giri Venkat | Technical Marketing, Automotive and Vision of ON Semiconductor
This presentation will discuss the range of cyber attacks that can be mounted against an autonomous vehicle’s sensor network. Following a brief overview of the main categories of threats, a few selected threats will be analyzed in more detail, including their impact, ease of attack and severity.