Ronald Mueller | CEO of Vision Markets and Associate Consultant of Smithers Apex
Bharanidhar Duraisamy | Development Engineer – Radar, Sensor Fusion Expert of Daimler
Omer David Keilaf | CEO and Co-founder of Innoviz Technologies
Fully autonomous vehicles are coming in the not-so-distant future, and it won’t be long before self-driving cars are available to everyone. But what will it take for autonomous driving to go mainstream? What technology is necessary to make it happen? How can the industry make prices reasonable for the masses? What obstacles stand in our way, and how can we overcome them? In this session, Innoviz's CEO & Co-Founder Omer Keilaf will answer these questions and others as he presents a roadmap for achieving mass commercialisation of autonomous vehicles.
Stefan Milz | Technical Lead & Valeo Expert (Machine Learning) of Valeo Schalter und Sensoren GmbH
This talk introduces concepts for exploiting the sparseness of lidar point clouds to build effective deep learning models for automated-driving perception. On the one hand, we describe effective training based on a case study; on the other, we show how to take advantage of the sparse information for greater efficiency.
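The abstract above hinges on the observation that a lidar scan occupies only a tiny fraction of the 3D volume around the vehicle. As a minimal illustration (not Valeo's method), the sketch below voxelizes a toy point cloud and stores only the occupied voxels, so memory scales with the number of returns rather than the full dense grid:

```python
import numpy as np

def sparse_voxelize(points, voxel_size=0.2):
    """Map a lidar point cloud (N x 3) to its set of occupied voxels.

    Only occupied voxels are kept, so storage grows with the number of
    lidar returns, not with the volume of a dense 3D grid.
    """
    coords = np.floor(points / voxel_size).astype(np.int64)
    # Deduplicate: each occupied voxel appears once.
    return np.unique(coords, axis=0)

# Toy cloud: 10,000 points clustered in a small region of a 100 m cube.
rng = np.random.default_rng(0)
points = rng.normal(loc=50.0, scale=2.0, size=(10_000, 3))

occupied = sparse_voxelize(points, voxel_size=0.2)
dense_cells = int((100 / 0.2) ** 3)  # cell count of the equivalent dense grid
print(len(occupied), "occupied voxels vs", dense_cells, "dense cells")
```

Sparse convolution libraries apply the same idea inside the network, computing features only at occupied locations.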
Raul Bravo | CEO of Dibotics
Making an autonomous car almost 100% safe requires a combination of split-second reactions and complex judgements about the surrounding environment that humans make instinctively.
The human Amygdala allows for fast, effortless (low-power) reaction, while the Neocortex allows for complex thinking. Current AI/machine-learning efforts to deal with raw LiDAR data are a good analogue of the Neocortex.
But what about the Amygdala? We'll introduce a method that allows an Artificial Amygdala to exist.
Zain Khawaja | Founder & Lead Technologist of Propelmee Ltd
Propelmee has developed a world-first, patent-pending technology that provides AVs with rich scene understanding of their environment. Our technology is sensor- and vehicle-agnostic: it segments any obstacle irrespective of its type, size, shape, position, or appearance, and finds the drivable free space on highways, urban roads, off-road, and on roads without lane markings, enabling AVs to drive “mapping-free” on roads they haven’t been on before, just like people.
Scene understanding derived from perception plays a key role in the autonomy stack and is the most challenging task in enabling L5 autonomy. Propelmee aims to enable full autonomy for a range of AVs and is demonstrating its perception and autonomous mobility technology on its last-mile delivery pod, which will operate in complex urban environments with pedestrians, cyclists, and other road users without any “pre-mapping” of the environment.
Shmoolik Mangan | Algorithms Development Manager of VAYAVISION
The current paradigm of “object fusion”, in which perception algorithms operate separately on each sensor, lacks the full spectrum of data needed for an accurate understanding of the environment. VAYAVISION presents a different approach: raw data fusion, which fuses raw samples from the sensors to construct a high-resolution 3D model of the environment based on cameras, LiDARs, and RADARs. Both approaches will be presented, including videos of cognition from real-life driving scenarios.
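To make the raw-data-fusion idea concrete, here is a minimal sketch (our illustration, not VAYAVISION's implementation, with an assumed pinhole camera model): lidar points are projected onto the camera image so that geometry is attached to pixels before any object detection runs, in contrast to object-level fusion, which merges per-sensor detections afterwards.

```python
import numpy as np

def project_lidar_to_image(points_cam, fx, fy, cx, cy, width, height):
    """Project lidar points (already in the camera frame, N x 3) onto the
    image plane with a pinhole model, producing a sparse per-pixel depth map.

    This is the raw-data-level step: lidar geometry is fused with camera
    pixels up front, rather than merging per-sensor object lists later.
    """
    depth = np.full((height, width), np.inf)
    z = points_cam[:, 2]
    valid = z > 0.1                       # keep points in front of the camera
    u = (fx * points_cam[valid, 0] / z[valid] + cx).astype(int)
    v = (fy * points_cam[valid, 1] / z[valid] + cy).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[inside], v[inside], z[valid][inside]):
        depth[vi, ui] = min(depth[vi, ui], zi)   # keep the nearest return
    return depth

# Toy example: a flat wall of lidar returns 10 m ahead of the camera.
pts = np.array([[x, y, 10.0] for x in np.linspace(-2, 2, 50)
                             for y in np.linspace(-1, 1, 25)])
depth = project_lidar_to_image(pts, fx=500, fy=500, cx=320, cy=240,
                               width=640, height=480)
print(np.isfinite(depth).sum(), "pixels received a lidar depth")
```

A downstream perception model can then consume the image and this aligned depth channel jointly, instead of reconciling separate camera and lidar detections.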
Antonio Garzón | Senior Analyst, Automotive Electronics & Semiconductor of IHS Markit Technology