Conference Day 1 – Tuesday 24 April 2018

  1. Registration and Refreshments

  2. Chair's opening remarks

    Willard Tu | Associate Editor of Sensor Magazine

BUILDING THE ECOSYSTEM FOR NEXT-GENERATION SENSORS

  1. OEM keynote: The mission of Lyft’s L5 Engineering Center

    Romain Clement | Head of Hardware of Lyft Level 5 Engineering Center

    • What is Lyft, and what is its mission?
    • What is the mission of the Level 5 Engineering Center (the division of Lyft focused entirely on developing self-driving car technology)?
    • The sensors Lyft uses for self-driving car technology
    • Sensor evolution aspirations and interesting trends
  2. OEM perspective keynote: Automated and connected vehicles

    Dr Bakhtiar Litkouhi | Technical Fellow and the Manager of Automated Driving and Vehicle Control System of General Motors

  3. Networking break

  4. Tier 1 Keynote

    Frank Gruson | Head of Systems Engineering, Driver Assistance Systems of Continental

  5. Moving from a value chain to a value network: finding your place in the new automotive world order

    Ian Riches | Director, Global Automotive Practice of Strategy Analytics

    The automotive industry is currently experiencing an unprecedented rate of change. The development of automated driving is challenging the core value proposition of many car makers: the driving experience itself. Large players from the consumer and IT sectors are seeking to leverage expertise in technologies such as artificial intelligence into the automotive arena. Traditional Tier Ones are seeing their automaker customers develop more of their own IP and start to talk of their need for “manufacturing partners”. Everybody is looking to monetize “big data”. In his presentation, Ian will draw on his more than 20 years’ experience as an automotive analyst to paint his vision of what a successful automotive company of the 2020s will look like.

  6. Panel: What is needed to reach the next stages of development for vision systems?

    Discussion including Toyota, University of Warwick, Lyft, Strategy Analytics and more.

    Panellists include:

    Rémi Delefosse, Senior Engineer, Toyota Motor Europe

    Valentina Donzella, SCAV MSc Leader, University of Warwick

    Romain Clement, Head of Hardware, Lyft Level 5 Engineering Center

    Ian Riches, Director - Global Automotive Practice, Strategy Analytics

  7. Networking lunch

TECH STREAM

IMAGING AND SENSOR TECHNOLOGY

  1. Mirror-replacement Camera Monitor Systems: requirements from human perception

    Dr Markus Adameck | General Manager Development of Panasonic Industrial Europe GmbH

    Requirements for indirect camera vision and passive camera sensing.

  2. Lane change assist system including blind spot, lane recognition, vehicle detection

    Florian Baumann | Technical Director of ADASENS Automotive GmbH

    • Details of a lane change assist in a camera mirroring system
    • Advantages and disadvantages of our recent development
    • Challenges and future work / system limits
    • Vehicle detection, inverse perspective mapping and lane recognition techniques
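    Inverse perspective mapping, one of the techniques named above, can be sketched minimally: assuming a pinhole camera at a known height and pitch above a flat road (all parameter values here are illustrative, not ADASENS's actual configuration), each pixel is back-projected onto the ground plane. A rough Python sketch:

```python
import math

def pixel_to_ground(u, v, fx, fy, cx, cy, cam_height, pitch):
    """Project an image pixel onto a flat road plane (inverse perspective mapping).

    Assumes a pinhole camera `cam_height` metres above a flat road, tilted
    down by `pitch` radians. Returns (lateral, forward) road coordinates in
    metres, or None if the pixel lies at or above the horizon.
    """
    # Ray direction in the camera frame (x right, y down, z forward).
    dx = (u - cx) / fx
    dy = (v - cy) / fy
    dz = 1.0
    # Rotate the ray into the world frame (camera tilted down by `pitch`).
    dy_w = dy * math.cos(pitch) + dz * math.sin(pitch)
    dz_w = -dy * math.sin(pitch) + dz * math.cos(pitch)
    if dy_w <= 0:
        return None  # ray never hits the road: pixel is at/above the horizon
    t = cam_height / dy_w  # scale factor at which the ray meets the road plane
    return (t * dx, t * dz_w)
```

    Applying this to every pixel of a detected lane marking yields metric road coordinates, which is what makes lane recognition and distance estimation to detected vehicles possible from a single camera.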
  3. How realistic do simulations have to be?

    Alexander Braun | Professor of Physics of University of Applied Sciences Düsseldorf

    Current ADAS and AD camera systems are tested and validated using (amongst others) a mix of Software-in-the-Loop (SiL) and Hardware-in-the-Loop (HiL) setups. In both cases simulated (i.e. artificial) test data helps verify the limits of the camera system and its algorithms, and allows for parameter studies that are not feasible with a real prototype on the road. A central question for these simulation environments is how realistically they represent the real-world test drives they replace, and ultimately: How realistic do simulations have to be?

    • Photo-realistic is not enough!
    • Validating automotive camera systems with simulations
    • A Universal Optical Model for realistic lens simulations
  4. Networking refreshment break

  5. Title to be announced

    Anand Gopalan | CTO of Velodyne LiDAR, Inc.

  6. 3D ToF SPAD imaging – performance and system designs; a look under the hood

    Carl Jackson | CTO and Founder of SensL Technologies

  7. How to improve robustness of automotive LiDARs

    Celine Canal | Senior Project Leader & ADAS Application Engineer of Quantel Laser

    LiDAR is a hot topic in the push to bring autonomous vehicles onto the roads. Current technologies designed to achieve a long detection range, high resolution and a large field of view are based on scanning methods. However, most of them integrate moving optical parts, which tends to reduce system reliability. This presentation will discuss cutting-edge LiDAR architectures without any scanning elements, based on innovative semiconductor illumination solutions developed by Quantel Laser.

    • Trends in solid-state LiDARs
    • Challenges and design considerations of novel beam steering approaches and flash LiDARs
    • Short-pulse edge-emitter diode arrays
  8. Chair's closing remarks

  9. Networking drinks reception

PROCESSING STREAM

NEXT-GENERATION PROCESSING

  1. Transitioning from ADAS to AV: the evolution of sensor processing systems from assistance to full autonomy

    Ali Osman Ors | Director, AI and Machine Learning, Automotive Microcontrollers and Processors of NXP Semiconductors

    Currently, most autonomy features in production vehicles are limited to driver assistance systems classified as L1–L2 autonomy, with few vehicles equipped with L3 systems and most L4–L5 systems limited to demonstration platforms and test vehicles. This presentation will provide a summary of typical L1–L2 systems and how automotive manufacturers and Tier 1s are looking to move higher up the autonomy capability scale. We will also focus on the transformative impact that deep learning and autonomous systems are having on automotive sensor data processor architectures.

  2. High-resolution ranging sensors for automotive - a semiconductor perspective

    Vjekoslav Matic | Director Application and Systems Engineering, Automated Driving Sensors of Infineon Technologies

    Advanced driver assistance systems in modern cars and future fully autonomous vehicles require highly accurate and reliable 3D information about the environment and the passenger cabin interior, acquired in real time. An essential part of the sensor cluster is ranging sensors, which provide distance and depth information, nowadays not just as simple obstacle detections but as high-resolution three-dimensional image streams. Such high functional complexity and performance targets, paired with size, cost and safety constraints, are achievable only with highly integrated semiconductor system-on-chip (SoC) components. Due to this high integration, such SoCs largely define the most important sensor module benchmarks, which will be demonstrated with examples of radar, LiDAR and 3D imaging time-of-flight (ToF) systems.

  3. High Dynamic Range imaging pipeline for autonomous driving

    Eric Dujardin | Manager of Imaging Architecture of NVIDIA

    Camera sensors are ubiquitous in autonomous driving and are mounted at different angles on the car, oriented either outwards or inwards. Cameras serve multiple uses: detecting obstacles, replacing mirrors, providing additional views for the driver, alerting the driver to overlooked obstacles, and more.

    This presentation introduces the new imaging-processing capabilities of the NVIDIA Tegra chip. It highlights how the Image Signal Processor (ISP) integrates features for the specific challenges of High Dynamic Range (HDR) image sensors, such as exposure fusion, local tone mapping, and handling the different colour filter arrays sensors use to be more light sensitive. In addition, the image processing needs to adapt both to human vision, with pleasing image enhancement, and to machine vision, with minimal and specific processing to avoid false detections.
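    The exposure-fusion step mentioned in the abstract can be illustrated with a toy two-exposure blend (this is a generic sketch, not NVIDIA's actual ISP pipeline; the knee threshold and exposure ratio below are illustrative assumptions): where the long exposure nears saturation, the short exposure, rescaled to the same radiometric range, takes over.

```python
import numpy as np

def fuse_exposures(long_exp, short_exp, exposure_ratio, knee=0.85):
    """Blend a long and a short exposure of the same scene into one HDR frame.

    `long_exp` and `short_exp` are float arrays normalised to [0, 1];
    `exposure_ratio` is how much longer the long exposure is. Pixels where
    the long exposure approaches saturation (above `knee`) are progressively
    replaced by the short exposure scaled up to the long exposure's range.
    """
    # Weight: 1 where the long exposure is well exposed, falling to 0 at saturation.
    w = np.clip((1.0 - long_exp) / (1.0 - knee), 0.0, 1.0)
    # Scale the short exposure into the long exposure's radiometric range.
    short_scaled = short_exp * exposure_ratio
    return w * long_exp + (1.0 - w) * short_scaled
```

    The fused result can exceed 1.0, which is the point: the extended range then feeds local tone mapping for human viewing, or is consumed directly by machine-vision algorithms.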

  4. Networking refreshment break

  5. How to manage and validate multi-gigabit sensor data in an automotive workflow

    Reinhold Greifenstein | Product Manager Automotive Tools of b-plus GmbH

    The presentation will cover technologies and methods for the validation of upcoming multi-sensor cognitive systems in vehicles and offer the audience an overview of challenges and lessons learned from many years of validating Tier 1 ECUs.

    Technologies shown will cover use cases from image sensor benchmarking through test-drive data acquisition and integrity monitoring of data streams to, last but not least, the re-injection of sensor data for virtual validation.

    You will benefit from a focus on imager technologies and demonstrations of best practice.

  6. Functional safety in automotive image processing applications

    Mayank Mangla | ADAS Imaging Architect of Texas Instruments, USA

    • Possible camera faults affecting system safety in an ADAS system
      • Faults inside the image processor
      • Faults outside the image processor
    • Detection and mitigation techniques
      • Through enhanced image processing
      • Leveraging error checking mechanisms offered by image sensor
    • Example implementation on TI’s Driver Assistance Processor
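    As one illustrative detection technique in the spirit of the bullets above (a generic sketch, not TI's actual implementation), a "frozen frame" fault on the camera link, where stale image data keeps being delivered and can silently blind an ADAS function, can be caught by checksumming consecutive frames; the `max_repeats` threshold is an illustrative assumption:

```python
import zlib

class FrameFreezeDetector:
    """Flag a frozen video feed by comparing a CRC32 of each incoming frame.

    A real scene essentially never yields byte-identical consecutive frames,
    so a run of repeats longer than `max_repeats` is reported as a fault.
    """

    def __init__(self, max_repeats=3):
        self.max_repeats = max_repeats
        self._last_crc = None
        self._repeat_count = 0

    def check(self, frame_bytes):
        """Feed one frame; return True if the feed now looks frozen."""
        crc = zlib.crc32(frame_bytes)
        if crc == self._last_crc:
            self._repeat_count += 1
        else:
            self._repeat_count = 0
            self._last_crc = crc
        return self._repeat_count >= self.max_repeats
```

    In a safety concept, the fault flag would feed the system's safety mechanism (e.g. degrading or disabling the dependent ADAS function) rather than being handled inside the imaging path itself.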
  7. An analysis of cyber attack vectors on automotive sensors

    Giri Venkat | Technical Marketing, Automotive and Vision of ON Semiconductor

    This presentation will discuss the range of cyber attacks that can be mounted against an autonomous vehicle’s sensor network. Following a brief overview of the main categories of threats, a few selected threats will receive a more detailed analysis, including their impact, ease of attack and severity.

  8. Chair's closing remarks

  9. Networking drinks reception