Conference Day 2 - 10 April 2019

Don't miss the pre-conference workshop: 'Autonomous vehicles and perception sensors: what do we test and how do we make sure we are testing at the right level?' Find out more about this interactive learning opportunity here.

Wednesday 10 April - Conference Day 2

  1. Morning Refreshments and Delegate Sign-in

  2. Chair's opening remarks

    Roya Mirhosseini | Camera Lead of Waymo

ADVANCEMENTS IN CAMERAS, LIDARS AND RADARS

  1. Making cameras self-aware for autonomous driving

    Moritz Bücker | Business Unit Manager of FICOSA / ADASENS Automotive GmbH

    Two fundamental algorithms for making cameras self-aware of their own status are proposed: online targetless calibration based on optical flow, and blockage detection based on image quality metrics (e.g. sharpness, saturation). Online calibration builds on vanishing-point theory, while soil/blockage detection extracts image quality metrics and identifies discriminative feature vectors with a support vector machine (SVM). Real-world examples will be shown with the algorithms running in real time.
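
    As an illustration of the blockage-detection idea (not ADASENS's implementation), a minimal Python sketch: extract simple image quality features such as sharpness and saturation, then classify frames with an SVM. The feature definitions, synthetic frames and thresholds are illustrative assumptions.

      import numpy as np
      from sklearn.svm import SVC

      def quality_features(gray):
          """Image quality feature vector: sharpness and saturation proxies."""
          gray = gray.astype(np.float64)
          # Sharpness: variance of a discrete Laplacian (a soiled or blocked
          # lens loses high-frequency energy).
          lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
                 np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4.0 * gray)
          # Saturation: fraction of pixels near the black/white clipping points.
          saturation = float(np.mean((gray < 10) | (gray > 245)))
          return np.array([lap.var(), saturation])

      # Synthetic stand-ins for labelled clear vs. blocked training frames.
      rng = np.random.default_rng(0)
      clear = [rng.integers(0, 256, (64, 64)) for _ in range(20)]    # textured
      blocked = [128.0 + rng.normal(0.0, 2.0, (64, 64))              # flat, soiled
                 for _ in range(20)]
      X = np.array([quality_features(f) for f in clear + blocked])
      y = np.array([0] * 20 + [1] * 20)        # 0 = clear, 1 = blocked

      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.predict([quality_features(clear[0]),
                         quality_features(blocked[0])]))   # expected: [0 1]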

  2. Embedded sensor fusion and perception for autonomous vehicles

    Dr Christian Laugier | Research Director of Inria, the French National Institute for Research in Computer Science and Control

    A novel embedded perception system based on a robust and efficient Bayesian sensor fusion approach will be presented. The system provides in real time (1) the state of the vehicle's dynamic environment (free space, static obstacles, dynamic obstacles along with their respective motion fields, and unknown areas), (2) the predicted upcoming changes of the dynamic environment, and (3) the estimated short-term collision risks (about 3 s ahead). A license has been sold to Toyota and to EasyMile.
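
    As a toy illustration of the Bayesian fusion principle behind such perception grids (a generic log-odds occupancy update, not Inria's system), the sketch below accumulates each sensor's evidence for a cell additively in log-odds form; the inverse sensor model probabilities are assumptions.

      import numpy as np

      def logit(p):
          return np.log(p / (1.0 - p))

      # 10 x 10 occupancy grid initialised to "unknown" (p = 0.5, log-odds 0).
      log_odds = np.zeros((10, 10))

      def fuse(cells, p_occ):
          """Bayesian update: add the log-odds of an inverse sensor model."""
          for (i, j) in cells:
              log_odds[i, j] += logit(p_occ)

      # Hypothetical measurements: lidar says cell (4, 5) is occupied with
      # p = 0.8, radar agrees with p = 0.7, and cell (4, 6) is observed free.
      fuse([(4, 5)], 0.8)
      fuse([(4, 5)], 0.7)
      fuse([(4, 6)], 0.2)

      prob = 1.0 / (1.0 + np.exp(-log_odds))   # back to probabilities
      print(prob[4, 5], prob[4, 6])            # ~0.90 occupied, 0.20 free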

  3. Future sensors for autonomous driving: robust obstacle detection under any weather and lighting conditions

    Doron Cohadier | VP of Business Development of Foresight Automotive

    One of the biggest challenges standing in the way of autonomous vehicles is the ability to drive under any weather and lighting conditions. A unique multispectral vision system will be presented, based on the seamless fusion of two stereoscopic camera pairs, one long-wave infrared and one visible-light, enabling highly accurate and reliable obstacle detection. The presentation will assess the detection accuracy of the vision system using data recorded in severe weather conditions.
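
    The depth-from-stereo geometry underlying such a system can be shown in a few lines; the focal length and baseline below are illustrative placeholders, not Foresight's parameters.

      # Depth from stereo disparity: Z = f * B / d, with focal length f in
      # pixels, baseline B between the two cameras, and disparity d in pixels.
      focal_px = 1000.0     # assumed focal length (pixels)
      baseline_m = 0.30     # assumed camera separation (metres)

      def depth_from_disparity(disparity_px):
          return focal_px * baseline_m / disparity_px

      # An obstacle shifted by 15 px between the left and right images lies
      # about 20 m ahead; the same formula applies to the LWIR pair.
      print(depth_from_disparity(15.0))   # 20.0 (metres)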

  4. Networking break

  5. The marriage between LIDAR and camera

    Filip Geuens | CEO of XenomatiX

    Pressure for a suitable automotive LIDAR is high, yet such a LIDAR will not replace cameras; the solution lies in the perfect marriage between camera and LIDAR. Cost-effective redundancy is the sensing solution for self-driving cars, and a parallax-error-free overlay between 2D and 3D data is the ultimate form of sensor fusion. A concept to achieve this will be explained and illustrated with example data, and the integration of LIDAR and cameras into cars will also be discussed.
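
    One common way to overlay 3D and 2D data is to project lidar points into the camera image with a pinhole model; the sketch below uses placeholder intrinsics and extrinsics and, unlike the co-designed concept described above, cannot by itself remove parallax error.

      import numpy as np

      # Placeholder camera intrinsics and lidar-to-camera extrinsics.
      K = np.array([[800.0,   0.0, 640.0],
                    [  0.0, 800.0, 360.0],
                    [  0.0,   0.0,   1.0]])
      R = np.eye(3)                      # rotation lidar -> camera
      t = np.array([0.0, -0.1, 0.0])     # translation (metres)

      def project(points_lidar):
          """Project N x 3 lidar points (metres) to pixel coordinates."""
          cam = points_lidar @ R.T + t        # into the camera frame
          cam = cam[cam[:, 2] > 0]            # keep points in front of camera
          uv = cam @ K.T                      # apply intrinsics
          return uv[:, :2] / uv[:, 2:3]       # perspective divide

      pts = np.array([[0.5, 0.0, 10.0], [-1.0, 0.2, 20.0]])
      print(project(pts))   # pixels where lidar depth overlays the RGB image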

  6. The importance of coupling next generation radar with intelligent software and its effect on highly autonomous vehicles

    Tom Driscoll | Co-founder and CTO of Echodyne

    Beyond the well-known benefits that current-generation radar offers over other sensing modalities, next-generation radar promises vast improvements over current automotive radar – longer range, higher resolution and greater accuracy. What is less well known is the importance of coupling this improved radar performance with equally improved methods for consuming, fusing and processing the data it generates. This presentation will focus on the attributes of next-generation radar for autonomous vehicles.

    Key topics to be covered:

    • Essentials of radar as a sensor
    • Attributes of next-generation radar
    • Comparison to human sensing
    • The benefits of intelligent software with sensors
    • Sensors, fusion, planning & action

  7. Far-infrared thermal camera: an effortless solution to improve ADAS detection robustness

    Emmanuel Bercier | Strategy and Automotive Market Manager of ULIS-SOFRADIR

    Complementing a set of ADAS sensing technologies, the far-infrared thermal camera provides a key sense for increasing system detection robustness. This capability relies on detecting the unique thermal signature of pedestrians, animals and other obstacles in all weather and lighting conditions, day and night, without suffering glare from the sun or other light sources. Implementing far-infrared-based ADAS in autonomous vehicles of Level 3 and above reduces the false positive detection rate while increasing the true positive rate. As an imaging technology, the far-infrared camera can be easily integrated into standard ADAS platforms, and thermal-signature detection minimizes the computational resources required by detection algorithms. Car maker requirements in terms of supply chain, cost of ownership and system integration will also be addressed.
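
    A toy sketch of why thermal-signature gating is computationally cheap (temperatures and thresholds are illustrative assumptions, not ULIS-SOFRADIR's algorithm): a fixed temperature band isolates warm candidate regions before any heavy detector runs.

      import numpy as np

      # Toy far-infrared frame: scene temperatures in degrees Celsius.
      frame = np.full((120, 160), 12.0)    # ambient background
      frame[60:90, 70:80] = 31.0           # warm region, e.g. a pedestrian

      # Pedestrians and animals occupy a narrow surface-temperature band, so
      # a simple threshold already yields candidate regions at low cost.
      candidates = (frame > 25.0) & (frame < 38.0)
      ys, xs = np.nonzero(candidates)
      print(ys.min(), ys.max(), xs.min(), xs.max())   # warm blob bounding box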

  8. Networking Lunch

DATA PROCESSING & MANAGEMENT

  1. Adapting Texas Instruments DLP® technology to demonstrate a phase spatial light modulator for LIDAR applications

    John Fenske | DLP® Automotive Systems Engineer of Texas Instruments

  2. Overview of sensor data and meta information management challenges

    Dr Florian Baumann | CTO EMEA (Specializing in Automotive & AI) of Dell EMC

    • How to deal with huge amounts of data collected in the data acquisition phase?
    • How to store and structure sensor data?
    • How to store and structure meta information?
    • Managing data streaming and compression
    • HiL, SiL and PiL needs with regard to data management processes

  3. The inherent redundancy advantage of low-level sensor fusion for autonomous vehicles

    Dr Youval Nehmadi | CTO & Founder of VAYAVISION

    The sensor set for autonomous driving is expected to combine low- and high-resolution image and distance sensors, including cameras, radars and lidars. Low-level sensor fusion uses all sensors to generate a high-density, pixel-level joint image-distance (HD-RGBd) model through upsampling. The presentation describes the development of built-in redundancy mechanisms that compensate for the loss of one sensor through alternative low-level fusion and detection across all remaining sensors, with a known loss of accuracy that the driving system can use to continue driving, though at lower speed.
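
    As a minimal sketch of pixel-level fusion (nearest-neighbour upsampling of sparse lidar depth onto the camera grid; illustrative only, since VAYAVISION's upsampling method is not described here), every pixel inherits the depth of its closest lidar sample to form a dense per-pixel depth channel.

      import numpy as np

      # Camera image grid (here 8 x 8) with a few sparse lidar depth samples.
      H, W = 8, 8
      depth = np.full((H, W), np.nan)
      depth[1, 2], depth[5, 6], depth[6, 1] = 12.0, 7.5, 20.0   # lidar hits (m)

      # Nearest-neighbour upsampling: assign every pixel the depth of the
      # closest lidar sample, yielding a dense depth channel for RGBd fusion.
      ys, xs = np.nonzero(~np.isnan(depth))
      grid_y, grid_x = np.mgrid[0:H, 0:W]
      d2 = (grid_y[..., None] - ys) ** 2 + (grid_x[..., None] - xs) ** 2
      dense = depth[ys, xs][np.argmin(d2, axis=-1)]

      print(dense)   # dense depth image aligned with the RGB pixels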

  4. Chair's closing remarks and close of IS Auto Europe 2019