Conference Day 2 - Wednesday 25 April 2018

  1. Morning Refreshments and Delegate Sign-in

  2. Chair's opening remarks

    Ronald Mueller | CEO of Vision Markets and Associate Consultant of Smithers Apex

GETTING THE MOST OUT OF CAMERAS, LIDARS AND RADARS

  1. OEM keynote: Sensor fusion and radars

    Bharanidhar Duraisamy | Development Engineer – Radar, Sensor Fusion Expert of Daimler

  2. Achieving mass commercialisation: what will it take for autonomous vehicles to go mainstream?

    Omer David Keilaf | CEO and Co-founder of Innoviz Technologies

    Fully autonomous vehicles are coming in the not-so-distant future, and it won’t be long before self-driving cars are available to everyone. But what will it take for autonomous driving to go mainstream? What technology is necessary to make it happen? How can the industry make prices reasonable for the masses? What obstacles stand in our way, and how can we overcome them? In this session, Innoviz's CEO and co-founder Omer Keilaf will answer these questions and others as he presents a roadmap for achieving mass commercialisation of autonomous vehicles.

  3. Networking refreshment break

AI, COMPUTER VISION, DEEP LEARNING

  1. Deep Learning concepts for sparse LiDAR point cloud processing

    Stefan Milz | Technical Lead & Valeo Expert (Machine Learning) of Valeo Schalter und Sensoren GmbH

    This talk introduces concepts for exploiting the sparseness of LiDAR point clouds to build effective deep learning models for automated-driving perception. On the one hand, we describe effective training based on a case study; on the other, we show how to take advantage of the sparse information for greater efficiency. A minimal code sketch of the sparsity idea follows the bullet list below.

    • LiDAR
    • Sparse Annotation
    • Depth Estimation
    • Efficient Deep Learning
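
    A rough sketch of the efficiency idea, assuming a simple voxel-grid input representation (the function, grid size, and centroid feature are illustrative assumptions, not Valeo's method): only occupied voxels are stored and handed to the network, so empty space costs neither memory nor compute.

    ```python
    # Hypothetical sketch: exploit LiDAR sparsity by keeping only
    # occupied voxels instead of densifying into a full 3D grid.
    import numpy as np

    def voxelize_sparse(points, voxel_size=0.2):
        """Map an (N, 3) point cloud to {voxel index: centroid}.

        Only occupied voxels appear in the output, which is the core
        efficiency win for sparse deep-learning inputs.
        """
        idx = np.floor(points / voxel_size).astype(np.int64)
        voxels = {}
        for key, p in zip(map(tuple, idx), points):
            voxels.setdefault(key, []).append(p)
        # One feature vector (here simply the centroid) per occupied voxel.
        return {k: np.mean(v, axis=0) for k, v in voxels.items()}

    # A synthetic 120k-point sweep occupies well under 0.1% of the
    # equivalent dense grid, so a sparse model has far fewer sites to visit.
    cloud = np.random.uniform(-50, 50, size=(120_000, 3)).astype(np.float32)
    occupied = voxelize_sparse(cloud)
    dense_cells = int((100 / 0.2) ** 3)
    print(f"occupied voxels: {len(occupied):,} of {dense_cells:,} dense cells")
    ```
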
  2. Using LiDAR as the amygdala of the autonomous car

    Raul Bravo | CEO of Dibotics

    Making an autonomous car close to 100% safe requires a combination of split-second reactions and complex judgments about the surrounding environment, both of which humans make instinctively.

    The human Amygdala allows for fast, effortless (low-power) reaction, while the Neocortex allows for complex thinking. Current AI/machine-learning efforts to deal with raw LiDAR data are a good analogue of the Neocortex.

    But what about the Amygdala? We'll introduce a method that allows an Artificial Amygdala to exist (a minimal sketch follows the bullet points below).

    • Finding the right trade-off between "raw data only" (centralised ECU) and "objects only" (edge computing) is not the only way to optimise sensor data processing
    • An Artificial Amygdala offers a third approach: safer (deterministic), low-power, fast sensor processing
    • Illustrations of the concept with actual results from real-life driving conditions using LiDAR, showing advanced features such as point-wise classification and real-time object detection and tracking
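
    A hedged sketch of how such a reflex layer could look (the corridor geometry, thresholds, and function name are illustrative assumptions, not Dibotics' implementation): a deterministic, fixed-cost check on raw LiDAR points that can trigger braking with no model inference, while the heavier "neocortex" perception stack refines the scene in parallel.

    ```python
    # Hypothetical "artificial amygdala": a deterministic O(N) reflex
    # check on raw LiDAR points, independent of any learned model.
    import numpy as np

    def reflex_brake(points, corridor_half_width=1.2, horizon_m=15.0,
                     min_hits=5):
        """Return True if enough raw returns fall inside the braking
        corridor directly ahead; no learning, fixed cost, auditable."""
        ahead = (points[:, 0] > 0.5) & (points[:, 0] < horizon_m)
        in_corridor = np.abs(points[:, 1]) < corridor_half_width
        above_ground = points[:, 2] > 0.3  # crude ground rejection
        hits = np.count_nonzero(ahead & in_corridor & above_ground)
        return hits >= min_hits  # min_hits filters isolated noise returns

    sweep = np.random.uniform(-30, 30, size=(50_000, 3)).astype(np.float32)
    print("emergency brake" if reflex_brake(sweep) else "clear")
    ```
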
  3. The importance of perception in reaching L5 autonomy

    Zain Khawaja | Founder & Lead Technologist of Propelmee Ltd

    Propelmee has developed a world-first, patent-pending technology that gives AVs rich scene understanding of their environment. The technology is sensor- and vehicle-agnostic: it segments any obstacle irrespective of type, size, shape, position, or appearance, and finds the drivable free space on highways, urban roads, off-road, and on roads without lane markings, enabling AVs to drive “mapping-free” on roads they haven’t been on before, just as people do.

    Scene understanding derived from perception plays a key role in the autonomy stack and is the most challenging task in enabling L5 autonomy. Propelmee aims to enable full autonomy for a range of AVs and is demonstrating its perception and autonomous mobility technology on its last-mile delivery pod, which will operate in complex urban environments with pedestrians, cyclists, and other road users without any “pre-mapping” of the environment.
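
    As a purely illustrative sketch (not Propelmee's patent-pending method), class-agnostic free-space finding can be phrased as a geometric test on raw returns: a bird's-eye-view cell counts as drivable when no return in it rises above a height threshold, regardless of what kind of obstacle would otherwise occupy it. The grid parameters below are assumptions.

    ```python
    # Hypothetical map-free free-space estimate from one LiDAR sweep:
    # geometry only, so any obstacle type blocks a cell equally.
    import numpy as np

    def free_space_grid(points, cell=0.5, extent=30.0, max_height=0.2):
        """Return a boolean BEV grid: True where the cell looks drivable."""
        n = int(2 * extent / cell)
        highest = np.full((n, n), -np.inf)  # tallest return per cell
        ij = np.floor((points[:, :2] + extent) / cell).astype(int)
        ok = (ij >= 0).all(axis=1) & (ij < n).all(axis=1)
        for (i, j), z in zip(ij[ok], points[ok, 2]):
            highest[i, j] = max(highest[i, j], z)
        observed = np.isfinite(highest)
        # Drivable: the cell was observed and nothing in it sticks up
        # past max_height (measured relative to the sensor's ground).
        return observed & (highest < max_height)

    sweep = np.random.uniform(-30, 30, size=(80_000, 3)).astype(np.float32)
    grid = free_space_grid(sweep)
    print(f"drivable cells: {int(grid.sum())} / {grid.size}")
    ```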

  4. Networking lunch

SENSOR FUSION

  1. Object Fusion and Raw Data Fusion: which can bring autonomy to vehicles?

    Shmoolik Mangan | Algorithms Development Manager of VAYAVISION

    The current paradigm of “object fusion”, in which perception algorithms operate separately for each sensor, lacks the full spectrum of data needed to get an accurate understanding of the environment. VAYAVISION presents a different approach, raw data fusion, which fuses raw samples from the sensors to construct a high-resolution 3D model of the environment based on cameras, LiDARs, and RADARs. Both approaches will be presented, including videos of cognition from real-life driving scenarios; a minimal upsampling sketch follows the bullet list below.

    • Object Fusion vs Raw-data Fusion
    • Using upsampling to increase sensor resolution
    • Examples of free-space estimation and object detection
    • Roadmap to L4/L5 autonomous vehicles
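
    A minimal sketch of the upsampling bullet above (not VAYAVISION's algorithm; the resolutions are assumptions, and scipy's generic griddata interpolator stands in for whatever the production system uses): sparse LiDAR returns projected into the image plane are interpolated up to camera resolution, so every pixel carries a depth estimate.

    ```python
    # Hypothetical raw-data-fusion step: interpolate sparse projected
    # LiDAR depths to a dense, camera-resolution depth map.
    import numpy as np
    from scipy.interpolate import griddata

    def upsample_depth(sparse_uv, sparse_depth, width, height):
        """Interpolate sparse (u, v) -> depth samples to every pixel."""
        uu, vv = np.meshgrid(np.arange(width), np.arange(height))
        dense = griddata(sparse_uv, sparse_depth, (uu, vv), method="linear")
        # Pixels outside the convex hull of the returns stay NaN; a real
        # system would fall back to nearest-neighbour or a learned prior.
        return dense

    # ~2,000 returns upsampled onto a 320x240 camera grid.
    uv = np.column_stack([np.random.uniform(0, 320, 2000),
                          np.random.uniform(0, 240, 2000)])
    depth = np.random.uniform(2.0, 60.0, 2000)
    dense = upsample_depth(uv, depth, 320, 240)
    print(f"dense shape: {dense.shape}, "
          f"valid pixels: {np.count_nonzero(~np.isnan(dense))}")
    ```
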
  2. Sensor fusion and new E/E architectures for autonomous driving

    Antonio Garzón | Senior Analyst, Automotive Electronics & Semiconductor of IHS Markit Technology

    • Automotive electronics and semiconductor market trends
    • ADAS architectures towards automated driving
      • Current state of ADAS sensor architectures
      • Key technologies enabling next-gen ADAS architectures
      • Cost analysis of various architectures today and in future
    • Solid-state LiDAR: review of various technologies (specs, costs) and forecast by technology
  3. Chair's closing remarks and close of IS Auto 2018