Wednesday 22 April 2020 - Conference Day 2

  1. Morning Refreshments and Delegate Sign-in

  2. Chair's opening remarks

ENABLING ADAS FEATURES

  1. Automated parking: driving a passive camera sensor to its limits to perform like an active sensor

    Dr Markus Adameck | General Manager of Panasonic Automotive Systems Europe

    In the field of automated vehicles, more and more sensors are installed to monitor the environment and the behaviour of the ego vehicle itself: laser scanners, radar systems, sonar, cameras and various kinds of time-of-flight systems, all operating in different spectral ranges and at different frequencies. For safety, redundancy in perception is required.

    Optical flow and stereo camera processing have been well understood for a long time, and many functions run in embedded environments. Functions such as structure from motion (SFM), ego-motion estimation, navigation and pedestrian/object recognition depend on the displacement and disparity vectors produced by the camera processing block. All of these have been promising for years, yet far fewer applications run in series vehicles than one would expect. Likely reasons include difficult integration, robustness and lifetime performance.
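
    As a concrete illustration of the disparity output these functions build on, here is a minimal stereo-depth sketch using OpenCV's semi-global matching; the focal length and baseline are hypothetical placeholders, not parameters of any Panasonic system.

      # Minimal stereo-depth sketch (hypothetical camera parameters).
      import cv2
      import numpy as np

      FOCAL_PX = 800.0    # focal length in pixels (assumed)
      BASELINE_M = 0.12   # stereo baseline in metres (assumed)

      left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
      right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

      # Semi-global block matching returns fixed-point disparities
      # (16x the true value), converted here to float pixels.
      matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
      disparity = matcher.compute(left, right).astype(np.float32) / 16.0

      # Depth follows from the pinhole relation Z = f * B / d.
      valid = disparity > 0
      depth_m = np.zeros_like(disparity)
      depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]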

    This presentation starts with a 2018 benchmark of automated parking capabilities, and their limitations, in series vehicles from Tesla, Daimler and Audi. It then introduces a camera-based 3D sensing technology, driven to today's limits, which performs close to an active sensing system and enables far more applications than are considered today.

  2. Detecting pedestrians and keeping the road safe with next-generation radar

    Dr. Noam Arkind | CTO of Arbe

    Next-generation radar will be important for developing safer ADAS systems and crucial for developing fully autonomous vehicles. Next-generation radar solutions can enable high-resolution sensing for ADAS and autonomous vehicles that not only detects pedestrians up to a range of 150 meters but also produces detailed images that identify, track and separate objects, so the sensor can differentiate between a pedestrian and stationary objects in the environment. The wide field of view covers sidewalks in addition to the lanes on the road and can detect pedestrians concealed by objects, for maximum safety. Noam will discuss how next-generation radar will keep pedestrians safe on a fully autonomous road.
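
    For context, two standard radar relations (general radar theory, not Arbe-specific figures) show why sweep bandwidth and aperture drive the resolution claims above:

      \[ \Delta R = \frac{c}{2B}, \qquad \theta_{\text{res}} \approx \frac{\lambda}{D} \]

    For example, a sweep bandwidth of B = 1 GHz gives a range resolution of (3 x 10^8) / (2 x 10^9) = 0.15 m, fine enough in range to separate a pedestrian from nearby stationary objects.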

  3. Enabling road-aware ADAS with a hyperspectral 3D semantic camera

    Raul Bravo | President, Co-founder of Outsight

    Adverse road conditions such as snow, water or ice have a direct impact on the coefficient of friction between the tires and the road surface, and thus on the effectiveness of key ADAS features such as ACC and AEB. Hyperspectral remote sensing refers to the remote spectral detection of light reflected or scattered from a target; snow, water and ice can be detected thanks to their specific spectral signatures. These capabilities are well known in satellite observation, but several challenges must be overcome to use hyperspectral imaging to enhance car safety, among them the passive nature of current hyperspectral cameras with respect to lighting conditions, and the cost of detecting and processing multiple spectral bands. Active hyperspectral sensing (AHS) refers to a method in which the investigated target is illuminated by a broadband light source.

    We will introduce and explain the technical working principles of a new active mono-detector SWIR technique that actively measures the hyperspectral response of objects, and its application to assessing road conditions.

    The explanations will be complemented with real field data showing the hyperspectral signatures of materials of interest in the context of ADAS.

    Finally, the optimisation of a processing method that enables real-time, tunable multi-spectral detection (as opposed to full hyperspectral detection) will be detailed.
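
    As a rough sketch of the band-ratio idea behind detecting such spectral signatures (band choices and thresholds below are illustrative assumptions, not Outsight's method):

      # Illustrative band-ratio road-surface classifier.
      # Bands and thresholds are hypothetical, chosen only to show the idea
      # that water and ice absorb strongly in parts of the SWIR region.
      import numpy as np

      def surface_index(r_ref, r_absorb):
          """Normalized difference of a reference and an absorbing SWIR band."""
          return (r_ref - r_absorb) / (r_ref + r_absorb + 1e-9)

      def classify(idx):
          labels = np.full(idx.shape, "dry", dtype=object)
          labels[idx > 0.3] = "water/ice"                # strong absorption
          labels[(idx > 0.1) & (idx <= 0.3)] = "wet"     # moderate absorption
          return labels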

  4. Networking break

  5. 4D object detection based on solid-state LiDAR, in all weather conditions

    Filip Geuens | CEO of XenomatiX

    LiDARs will only earn a place in vehicles if they prove to deliver precise and reliable object detection; hence the strong demand for better resolution and sensor fusion. Yet there are other ways to improve detection reliability: not just 3D geometrical information but also reflected laser power can be used to differentiate between driveable and non-driveable areas, and combining data from consecutive frames further increases the robustness of recognising real dangers with high certainty. This presentation will also address how LiDAR can detect adverse weather conditions; it is the responsibility of the sensor to indicate when its performance degrades. The presentation will illustrate how a CMOS-based LiDAR system handles this.
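
    A minimal sketch of combining geometry, reflected power and frame history for this kind of classification (the point format and thresholds are assumptions, not the XenomatiX algorithm):

      # Driveable-area sketch: height + reflectance per point, confirmed
      # over consecutive frames. All thresholds are illustrative assumptions.
      import numpy as np

      GROUND_MAX_M = 0.15   # points below this height count as ground
      REFLECT_MIN = 0.05    # very low returns suggest absorbing surfaces
      HISTORY = 5           # frames a hazard must persist to be trusted

      def driveable_mask(points):
          """points: (N, 4) array of x, y, z, reflectance per return."""
          z, refl = points[:, 2], points[:, 3]
          return (z < GROUND_MAX_M) & (refl > REFLECT_MIN)

      class TemporalFilter:
          """Flag an obstacle cell only after it persists across frames."""
          def __init__(self):
              self.hits = {}

          def confirm(self, cell_ids):
              confirmed = set()
              for c in cell_ids:
                  self.hits[c] = self.hits.get(c, 0) + 1
                  if self.hits[c] >= HISTORY:
                      confirmed.add(c)
              return confirmed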

TESTING, VALIDATION AND SIMULATION

  1. System requirements and test procedures for camera monitor systems

    Prof. Dr. Anestis Terzis | Head of the Institute of Communication Technology of Ulm University of Applied Sciences

    Based on ISO 16505 and UN-ECE R.46, the system design of advanced digital architectures is covered. The contribution discusses the requirements, current status and future developments of the international regulation of mirror-replacement technology.

    • System design and test based on ISO 16505 and UN-ECE R.46.
    • Key imager and display parameters.
    • Advanced digital architectures.
    • Status and future developments of the international regulation.

  2. Validation of a physics-based camera simulation

    Rogier van Aken | Advanced Research Engineer of Siemens Digital Industries Software

    Sensor simulations are playing an increasingly important role in the development, testing and validation of autonomous vehicles. Many of the use cases involved require simulated data to be as close as possible to real-world sensor data. This requires a physics-based approach, as well as a statement about the correctness of the simulation. A physics-based camera simulation requires knowledge of all relevant camera parameters, which are often only partly available to the user. We present two methods that are new in this field: a method to determine the camera parameters and a method to evaluate and rate the results of the simulation against a real-world camera. Both methods make use of measurements of the camera properties; an example will be presented. The methods are generic and are now available to customers to configure and evaluate a simulation of an automotive camera with Simcenter Prescan, a simulation package for the development of automated vehicles.

    • Sensor simulations are a key element for the development, testing and validation of autonomous vehicles.
    • Tasks such as algorithm development, sensor development, training of neural networks and virtual validation require a high degree of physical accuracy, hence:
      • realistic, physically correct sensor simulations
      • a statement of the correctness of the simulation.
    • Configuration of a simulated camera is complicated by a lack of knowledge of the relevant camera parameters.
    • We have developed methods to:
      • obtain a camera configuration based on measurements
      • evaluate and rate the results against measurements of a real camera.
    • An example will be presented.
    • Our methods are generic and available to customers for other cameras with different parameters. Simcenter Prescan is the first tool in this market that can quantify how well the simulation result matches actual camera performance.
    • We have similar validation projects for LiDAR and radar simulations.
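
    As a toy illustration of what rating a simulation against a real camera can mean (the metric and data below are assumptions, not the Simcenter Prescan method), one can compare a measured property such as a gray-level response curve:

      # Toy simulation-vs-real rating; not the Simcenter Prescan method.
      import numpy as np

      def rate_simulation(real_response, sim_response):
          """Score in [0, 1] from relative RMS error between response curves."""
          rmse = np.sqrt(np.mean((real_response - sim_response) ** 2))
          scale = np.sqrt(np.mean(real_response ** 2)) + 1e-12
          return max(0.0, 1.0 - rmse / scale)

      # Hypothetical gray-level responses sampled at the same exposures.
      exposures = np.linspace(0.0, 1.0, 32)
      real = 255 * exposures ** 0.90
      sim = 255 * exposures ** 0.95
      print(f"similarity score: {rate_simulation(real, sim):.3f}")
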
  3. Networking lunch

INTEGRATING SENSOR DATA INTO THE VEHICLE

  1. Cyber security for image sensors in automotive applications

    Mahabir Gupta | Solutions & Products Consultant - IoT, Mobility & Data Security of Volvo

    We are demanding ever more connectivity and thereby increasing the potential attack surface. As more people connect to the internet, security threats capable of causing massive harm increase as well. Whenever we think about cyber security, the first thing that comes to mind is cyber crime, which is growing immensely day by day. Governments and companies are taking many measures to prevent it; despite these measures, cyber security remains a very big concern.

    The first section discusses the evolution of automotive DNA. The second section outlines the automotive security problem and what people think about it. The third section describes the types of hackers. The fourth section outlines what makes future cars and sensors more vulnerable to security threats. The fifth section presents hardware and software recommendations (sensors, radar, LiDAR) with respect to security.

  2. A deep dive into accelerating stereo and optical flow

    Dr. Zoran Nikolić | Principal Embedded Computer Vision Architect of NVIDIA

    Optical flow and stereo camera processing are some of the basic building blocks for many functions that run on top of video streams captured by camera inputs.  Optical flow and stereo pre-processing stages are typically located right after the ISP in the processing pipeline.  Many functions such as Structure From Motion (SFM), ego-motion estimation, navigation, pedestrian/object recognition and others depend on displacement vector output from the optical flow and/or disparity information that comes out of the stereo processing block.
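
    To make the displacement-vector output concrete, here is a minimal dense optical flow example using OpenCV on the CPU; it illustrates the kind of per-pixel motion field such a processing block produces, not the DriveWorks API itself.

      # Dense optical flow on the CPU, purely to illustrate the displacement
      # field an accelerator produces (not the DriveWorks/SOFE API).
      import cv2
      import numpy as np

      prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
      curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

      # Farneback returns an (H, W, 2) array of per-pixel (dx, dy) vectors.
      flow = cv2.calcOpticalFlowFarneback(prev, curr, None, pyr_scale=0.5,
                                          levels=3, winsize=15, iterations=3,
                                          poly_n=5, poly_sigma=1.2, flags=0)

      magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
      print("median displacement (px):", float(np.median(magnitude)))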

    This presentation will delve into NVIDIA's flagship Xavier™ system-on-a-chip (SoC), which is designed from the ground up for the automotive and autonomous machines markets. Xavier can connect and support multiple instances of various sensor modalities, including imaging sensors, radars, LiDARs and others, setting a new performance bar for compute density and AI inference capabilities in the automotive embedded processing space.

    The Xavier SoC contains stereo and optical flow hardware accelerators (SOFE) that can be used for optical flow and stereo matching. The mundane processing required for stereo and optical flow can be offloaded from the GPU on Xavier to the SOFE hardware accelerator. This technical presentation will go into the details of the stereo and optical flow hardware accelerators on the chip, and will also cover DriveWorks software support for the stereo vision and optical flow engine. Attendees will walk away with a deeper understanding of the dedicated engines for stereo and optical flow on Xavier.

  3. Chair's closing remarks and close of 2020 conference