Roya Mirhosseini | Camera Lead of Waymo
Moritz Bücker | Business Unit Manager of FICOSA / ADASENS Automotive GmbH
Two fundamental algorithms for making cameras aware of their own status are proposed: Online Targetless Calibration based on Optical Flow, and Blockage Detection based on image quality metrics (e.g. sharpness, saturation). Online Calibration builds on vanishing-point theory, while soiling/blockage detection extracts image quality metrics and identifies discriminative feature vectors with a Support Vector Machine. Real-world examples will be shown with the algorithms running in real time.
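The blockage-detection idea above can be sketched as follows: extract a small feature vector of quality metrics per frame, then apply a decision function. The specific metrics (Laplacian-variance sharpness, saturated-pixel fraction) and the linear weights are illustrative assumptions standing in for the trained SVM, not the presenters' implementation.

```python
import numpy as np

def quality_features(img):
    """Extract two simple image-quality metrics from a grayscale frame:
    sharpness (variance of a Laplacian response) and the fraction of
    saturated pixels. The feature choice follows the abstract; the exact
    metrics are an assumption."""
    lap = (4.0 * img[1:-1, 1:-1] - img[:-2, 1:-1] - img[2:, 1:-1]
           - img[1:-1, :-2] - img[1:-1, 2:])
    sharpness = lap.var()
    saturated = np.mean((img <= 2) | (img >= 253))
    return np.array([sharpness, saturated])

def is_blocked(features, w=np.array([-0.01, 8.0]), b=0.5):
    """Linear decision function standing in for the trained SVM;
    the weights here are illustrative, not learned."""
    return float(w @ features + b) > 0.0
```

A sharp, well-exposed frame yields high sharpness and few saturated pixels and is classified as clear; a soiled or covered lens produces a flat, low-contrast image that lands on the blocked side of the decision boundary.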
Dr Christian Laugier | Research Director of Inria, the French National Institute for Research in Computer Science and Control
A novel Embedded Perception System based on a robust and efficient Bayesian Sensor Fusion approach will be presented. The system provides in real time (1) the state of the vehicle's dynamic environment (free space, static obstacles, dynamic obstacles along with their respective motion fields, and unknown areas), (2) the predicted upcoming changes of the dynamic environment and (3) the estimated short-term collision risks (about 3 s ahead). Licenses have been sold to Toyota and to EasyMile.
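Bayesian grid-based fusion of the kind this talk belongs to is often implemented as a log-odds occupancy grid. The sketch below shows one generic fusion step (it is not the presented system): each sensor observation adds or subtracts evidence per cell, and the log-odds map converts back to an occupancy probability. The evidence weights `l_occ` and `l_free` are illustrative.

```python
import numpy as np

def update_cell_logodds(grid, meas, l_occ=0.85, l_free=-0.4):
    """One Bayesian fusion step on a 2-D occupancy grid.
    meas: 1 = observed occupied, 0 = observed free,
    -1 = not observed this frame (cell left unchanged)."""
    grid = grid + np.where(meas == 1, l_occ, 0.0)
    grid = grid + np.where(meas == 0, l_free, 0.0)
    return np.clip(grid, -10.0, 10.0)  # keep log-odds bounded

def occupancy_prob(grid):
    """Convert log-odds back to occupancy probability."""
    return 1.0 / (1.0 + np.exp(-grid))
```

Repeated consistent observations drive a cell's probability toward 0 or 1, while unobserved cells stay at 0.5, which is exactly the "unknown areas" category the abstract mentions.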
Doron Cohadier | VP of Business Development of Foresight Automotive
One of the biggest challenges standing in the way of autonomous vehicles is the ability to drive in any weather and lighting conditions. A unique multispectral vision system will be presented, based on the seamless fusion of two pairs of stereoscopic cameras, one long-wave infrared and one visible-light, enabling highly accurate and reliable obstacle detection. The goal of this presentation is to assess the detection accuracy of the vision system using data recorded in severe weather conditions.
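Each stereoscopic pair in such a system can range obstacles via the standard pinhole stereo relation Z = f·B/d. The helper below is a textbook sketch of that relation, not Foresight's implementation, and the parameter values in the usage note are made up for illustration.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo ranging: depth Z = f * B / d, where f is the
    focal length in pixels, B the baseline between the two cameras
    in metres, and d the measured disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with a 1000 px focal length and a 0.3 m baseline, a 10 px disparity corresponds to a depth of 30 m; the same relation holds for the LWIR pair, which is what keeps ranging available when the visible pair is blinded by glare or darkness.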
Filip Geuens | CEO of XenomatiX
Pressure for a suitable automotive LIDAR is high. Yet such a LIDAR will not replace cameras. The solution lies in the perfect marriage between camera and LIDAR: cost-effective redundancy is the sensing solution for self-driving cars, and a parallax-error-free overlay between 2D and 3D data is the ultimate type of sensor fusion. A concept to achieve this will be explained and illustrated with example data. The integration of LIDAR and cameras in cars will also be discussed.
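The geometric core of a 2D/3D overlay is projecting each LIDAR point into the camera image with the camera intrinsics; when the two sensors effectively share one optical centre, no extrinsic offset remains and the overlay is free of parallax error. This is a generic sketch of that projection, not XenomatiX's method.

```python
import numpy as np

def project_lidar_to_image(points_xyz, K):
    """Project LIDAR points already expressed in the camera frame
    (Z forward, metres) onto the image plane using the 3x3 intrinsic
    matrix K. Points behind the camera are discarded."""
    pts = points_xyz[points_xyz[:, 2] > 0]  # keep points in front of camera
    uvw = (K @ pts.T).T                     # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]         # perspective divide -> pixels
```

A point on the optical axis lands on the principal point, and lateral offsets scale with 1/Z, so each projected LIDAR return can be stamped directly onto the camera pixel it measures.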
Tom Driscoll | Co-founder and CTO of Echodyne
Beyond the well-known benefits that current-generation radar offers over other sensing modalities, next-generation radar promises vast improvements over current automotive radar: longer range, higher resolution and greater accuracy. What is not well known is the importance of coupling this improved radar performance with equally improved methods for consuming, fusing and processing the data it generates. This presentation will focus on the attributes of next-generation radar for autonomous vehicles.
Emmanuel Bercier | Strategy and Automotive Market Manager of ULIS-SOFRADIR
Complementing a set of ADAS sensing technologies, a far-infrared thermal camera provides the key sense needed to increase system detection robustness. This enhanced capability relies on detecting the unique thermal signature of pedestrians, animals and other obstacles in any weather and light conditions, day and night, without suffering glare from the sun or other light sources. Implementing far-infrared-based ADAS in autonomous vehicles of level 3 and above reduces the false positive rate while increasing the true positive rate. As an imaging technology, far-infrared cameras can be implemented easily into standard ADAS platforms, and thermal-signature detection minimizes the computational resources required by detection algorithms. Car maker requirements in terms of supply chain, cost of ownership and system integration will also be addressed.
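The computational advantage of thermal-signature detection can be illustrated with a minimal sketch: candidate regions fall out of a simple temperature-band mask on the radiometric image, before any heavier classifier runs. The band limits below are illustrative assumptions, not values from the talk.

```python
import numpy as np

def thermal_candidates(fir_deg_c, t_lo=28.0, t_hi=40.0):
    """Mask pixels whose radiometric temperature (degrees Celsius)
    falls inside an assumed human thermal-signature band. A real
    system would calibrate the band per target class and conditions."""
    return (fir_deg_c >= t_lo) & (fir_deg_c <= t_hi)
```

Because warm bodies contrast with the background regardless of illumination, this single vectorized comparison already prunes most of the image, which is why FIR pipelines can stay cheap relative to appearance-based visible-light detectors.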
John Fenske | DLP® Automotive Systems Engineer of Texas Instruments
Dr Florian Baumann | CTO EMEA (Specializing in Automotive & AI), of Dell EMC
Dr Youval Nehmadi | CTO & Founder of VAYAVISION
The sensor set for AD is expected to contain a combination of low- and high-resolution image and distance sensors, including cameras, radars and lidars. Low-level sensor fusion uses all sensors to generate a high-density, pixel-level joint image-distance (HD-RGBd) model through upsampling. The presentation describes the development of built-in redundancy mechanisms that compensate for the loss of one sensor through alternative low-level fusion and detection using all remaining sensors, with a known loss of accuracy that the driving system can use to continue driving, though at lower speed.
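The HD-RGBd idea can be sketched by upsampling a sparse distance map to camera resolution and stacking it onto the colour channels. The nearest-valid-sample-per-row fill below is a crude, hypothetical stand-in for the upsampling the talk describes; real low-level fusion would use far more sophisticated interpolation guided by the image.

```python
import numpy as np

def fuse_rgbd(rgb, sparse_depth):
    """Build a pixel-level RGBd tensor: fill each pixel's depth from
    the nearest valid sample in its row (0 = no measurement), then
    stack the dense depth as a fourth channel onto the RGB image."""
    h, w, _ = rgb.shape
    dense = np.zeros((h, w))
    cols = np.arange(w)
    for y in range(h):
        valid = np.flatnonzero(sparse_depth[y] > 0)
        if valid.size:
            nearest = valid[np.argmin(np.abs(cols[:, None] - valid[None, :]), axis=1)]
            dense[y] = sparse_depth[y, nearest]
    return np.dstack([rgb, dense[:, :, None]])
```

Dropping one sensor simply means rebuilding the dense channel from whichever distance samples remain, which is the redundancy-with-known-accuracy-loss behaviour the abstract describes.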