Conference day 1 - Wednesday 11 March 2020

  1. Event registration and morning refreshments

  2. Chair's opening remarks

FUTURE MARKET AND TECHNOLOGY TRENDS KEYNOTES

  1. Market Trends

    Ron Mueller | CEO of Vision Markets and Associate Consultant of Smithers

  2. Sensing technologies and applications

    Tetsuo Nomoto | Head of Sony Semiconductor Solutions Europe, Senior Vice President of Sony Europe B.V.

    This talk introduces advanced sensing technologies that enhance imaging solutions. HDR imaging technology with temperature robustness improves recognition performance in automotive applications. The combination of time-of-flight technology and machine learning improves range accuracy in mobile applications. In applications such as factory automation, vision sensors are expected to capture the high-speed movement of target objects. Furthermore, non-Si technology that extends the detectable wavelengths to SWIR clearly captures information that has never been accessible to human perception, and will provide predictive solutions based on machine perception. These imaging technologies promise to accelerate progress in the sensing world by continuously improving image quality, extending detectable wavelengths, and further improving depth and temporal resolution.

  3. From image sensors to sensor fusion: technologies and applications

    Benedetto Vigna | President, Analog, MEMS and Sensors Group of STMicroelectronics

    With the pervasiveness of mobile consumer products requiring an ever-increasing number of components and capabilities, and with advances in Artificial Intelligence, there is a growing need for image sensors with more features, higher performance and lower power consumption. Furthermore, these imagers have to be integrated along with other sensors via complex system integration.

    STMicroelectronics' photonics sensors are addressing these challenges with dedicated technology developments and continuous R&D activities. With a common CMOS manufacturing platform across a wide range of sensors, including MEMS-based motion and environmental sensors and ranging Time-of-Flight devices, and the addition of a strong microprocessor IP portfolio, we have created a full ecosystem that can improve human-machine interaction with optimised sensor fusion.

    The talk will cover current technology development and advances in the manufacturing and design of image sensors, expanding to devices for a wide range of sensing applications.

  4. Networking refreshment break

  5. Automotive LiDAR challenges and solutions

    Vladimir Koifman | Chief Technology Officer of Analog Value

    Despite significant investment over the last five years, most LiDAR companies have failed to come up with competitive products. The presentation analyses the difficulties in creating an automotive-grade LiDAR and the typical mistakes that LiDAR companies make. Dead ends and potentially workable solutions are discussed.

  6. Deep Trench Isolation is here to stay!

    Albert Theuwissen | Founder of Harvest Imaging

    For several years, deep trench isolation (DTI) has been used as a technique to isolate the pixels of CMOS imagers, mainly to decrease optical and electrical crosstalk. But very recently, the applications and advantages of DTI have become much broader than just the reduction of crosstalk.

    By clever use of biasing and/or clocking of the gates in the DTIs:

    • Complete pixels can be created in the third dimension (into the silicon);
    • In the case of high-speed burst-mode sensors and high-dynamic-range sensors, the in-pixel capacitances can be drastically increased;
    • Light sensitivity can be increased, especially in the near-IR part of the spectrum;
    • The photodiode can be fully electrically isolated and used in different electronic configurations.

    The talk will give an overview of the status of the DTI technology and the wide range of architectures in which the DTIs play a crucial role.

  7. Protection of humanity from "Deepfake" by selective sensor signature coupled with an authentication engine

    Yoel Yaffe | R&D Director of Samsung Semiconductor Israel R&D Center Ltd.

    AI GANs can artificially generate realistic images and videos, so a method to reliably identify authentic imaging content on the internet is needed. Conventional digital signatures are not effective, since any change to the image invalidates them. We will present a practical yet effective solution to this problem which may be adopted globally with minor impact on the existing ecosystems and the image sensor architecture; a minimal illustrative sketch follows the summary points below. In addition, the proposed solution can be extended to video clips.

    • Fake images are here to stay and are becoming a new form of professional content creation.
    • Digital signatures
    • Strict vs selective image authentication
    • Practical methods to compute contextual representation of the image on sensor
    • Contextual representation signature
    • "Fight fire with fire": AI to verify image authenticity
    • The full solution - the image authentication ecosystem
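
    To make the selective-signature idea concrete, here is a minimal, hedged sketch (not Samsung's actual design): the sensor signs a compact contextual representation of the image, here a simple 8x8 average-hash standing in for the richer representation the talk proposes, so benign processing that preserves content also preserves the signature, while a content edit breaks it. All names and parameters are illustrative assumptions.

      # Minimal sketch of selective (content-based) image authentication.
      # The 8x8 average-hash is an assumed stand-in for the contextual
      # representation; a real sensor would use a richer descriptor.
      import numpy as np
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      def contextual_representation(image, grid=8):
          # Block-average down to grid x grid, binarise against the mean:
          # stable under mild processing, sensitive to content edits.
          h, w = image.shape
          blocks = image[:h - h % grid, :w - w % grid].astype(float)
          blocks = blocks.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
          return np.packbits((blocks > blocks.mean()).astype(np.uint8)).tobytes()

      # On-sensor side: sign the contextual representation, not the raw bytes.
      key = Ed25519PrivateKey.generate()
      image = np.tile(np.linspace(0, 255, 640), (480, 1)).astype(np.uint8)
      signature = key.sign(contextual_representation(image))

      # Verifier side: a mild brightness shift leaves the representation, and
      # hence the original signature, valid; pasting new content would not.
      shifted = np.clip(image.astype(np.int16) + 10, 0, 255).astype(np.uint8)
      key.public_key().verify(signature, contextual_representation(shifted))
      print("processed image still authenticates")
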
  8. Networking Lunch

TRACK A - LiDAR & ADVANCES IN IR IMAGING

TRACK A - LiDAR

  1. Equal to the AV task: next-generation LiDAR, next-generation metrics

    Dr. Barry Behnken | Co-Founder & SVP of Engineering of AEye Inc.

    As LiDAR systems adapt to increasingly challenging autonomous vehicle applications, both the technology itself and the metrics by which it is assessed must keep pace with the new perception demands of the AV ecosystem. Within the context of “Agile LiDAR,” AEye will discuss the importance of shifting from the traditional metrics of detection range, frame rate, and resolution to the more operationally relevant metrics of classification range, object revisit rate, and instantaneous resolution; a back-of-the-envelope illustration follows the questions below.

    • How do you measure the operational effectiveness of a LiDAR-based perception system for autonomous vehicles?
    • Which metrics best align with the real-world problems facing autonomous driving? 
    • What architectural changes can be made to LiDAR technologies to better address the problems facing autonomous vehicles?
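
    Why revisit rate rather than frame rate? A quick worked example (the numbers are generic assumptions, not AEye specifications): what a tracker actually has to bridge is the distance an object travels between looks.

      # Illustrative arithmetic: object displacement between LiDAR revisits.
      # Speeds and rates are generic assumptions, not AEye specifications.
      closing_speed = 40.0  # m/s, e.g. an oncoming-traffic scenario

      for name, revisit_hz in [("full-frame raster scan", 10.0),
                               ("agile revisit of a tracked object", 200.0)]:
          gap = closing_speed / revisit_hz  # metres travelled between looks
          print(f"{name}: {revisit_hz:5.0f} Hz -> {gap:.2f} m between revisits")

      # 10 Hz -> 4.00 m between looks; 200 Hz -> 0.20 m. The correspondence
      # problem the perception stack must solve shrinks accordingly.
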
  2. Scanning Flash LiDAR

    Michael Kiehn | Director Sensor Development of Ibeo Automotive Systems GmbH

    The automation of driving is one of the main topics on the research and development agenda of the automotive industry. On the one hand, automotive OEMs are working to achieve level 3 automation of driving in general public traffic. On the other hand, shuttle service providers are working on even higher levels of automation for limited use cases.

    Environmental perception is a crucial part of realising automated driving. Today, radar and camera technology are established as sensing technologies for advanced driver assistance systems. Only recently, Audi expanded its sensor suite with a LiDAR sensor. Whereas LiDAR is not necessarily required for level 2 automation, it is widely seen as mandatory for higher-level automation.

    LiDAR technology available today is based on mechanical scanning. This limits the robustness, durability, size and low-cost potential of LiDAR sensors, so there is a demand for solid-state LiDAR technology. Many established and start-up companies have come up with a broad range of solutions, most of them either MEMS-scanning LiDARs or flash LiDARs. Both technologies have their advantages and their limitations. Ibeo's 4D Solid State LiDAR combines the advantages of scanning with those of solid-state flash LiDARs.

  3. Ge on Si SPADs for LiDAR and quantum technology applications

    Prof Douglas J Paul | EPSRC Established Quantum Technology Fellow of James Watt School of Engineering, University of Glasgow, U.K.

    CMOS single photon avalanche detectors (SPADs) have been commercially available for a number of years and operate in a range of markets including lidar, mobile phones, autonomous vacuum cleaners and biological fluorescence imaging. The use of silicon limits operation to below 1000 nm wavelength, and the indirect bandgap also significantly limits detector efficiencies above 900 nm. Whilst InGaAs SPADs operating out to 1700 nm have been available for many years, their high cost and low yield have limited applications predominantly to military and scientific areas where high costs can be tolerated. Germanium is already used for a wide range of photodetectors and operates out to 1600 nm wavelength. We will present Ge on Si SPADs produced using silicon foundry processes, in which the Ge absorber generates electron-hole pairs for multiplication in a silicon avalanche region, with single photon detection efficiencies of up to 38% at 1310 nm at an operating temperature of 125 K. The efficiency and dark count rate as a function of area and operating parameters will be investigated, along with demonstrations of reduced afterpulsing compared to InGaAs SPADs. The operation as a function of wavelength and temperature will be discussed, along with progress towards 1550 nm operation at Peltier cooler temperatures. Examples of the use of these SPADs in lidar and quantum technology applications will be provided.

  4. Networking break

TRACK A - ADVANCES IN IR IMAGING

  1. Active 3D semantic camera

    Raul Bravo | President & Co-founder of Outsight

    Hyperspectral remote sensing refers to the remote spectral detection of light reflected or scattered from a target. Each pixel of a hyperspectral imager can contain hundreds of spectral channels, as opposed to the traditional three-colour RGB camera. Hyperspectral cameras are limited in the accuracy of the spectral signal, since any variation in the illumination spectrum translates into a misinterpretation of the target response. We'll introduce a new Active 3D Semantic Camera (hyperspectral); a small illustrative sketch of spectrum-based material identification follows the list below.

    • A new kind of sensor, bundling the best of Laser and RGB imaging into a single device. 
    • Single-Sensor Hyperspectral measurements, hundreds of meters away, become possible thanks to a new broadband laser and an original sensor architecture that minimises cost (single detector) and size. 
    • Embedded 3D processing allows multi-dimensional actionable data: depth, material ID, colour, full velocity vector per point, point-wise classification & SLAM on Chip
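    As a generic illustration of how per-pixel spectra enable the "material ID" output above, here is a textbook spectral-angle-mapper sketch (not Outsight's actual processing; the spectra are synthetic assumptions). The angle between spectra is invariant to illumination intensity scaling, which is exactly the robustness issue raised above.

      # Textbook spectral angle mapper (SAM): classify a pixel's spectrum by
      # its angle to reference material spectra. Synthetic, illustrative data;
      # not Outsight's actual processing.
      import numpy as np

      def spectral_angle(pixel, reference):
          # The angle between two spectra is invariant to intensity scaling.
          cos = pixel @ reference / (np.linalg.norm(pixel) * np.linalg.norm(reference))
          return np.arccos(np.clip(cos, -1.0, 1.0))

      rng = np.random.default_rng(0)
      library = {name: rng.random(100) for name in ("asphalt", "vegetation", "skin")}

      # A dimly lit, slightly noisy "vegetation" pixel (100 spectral channels).
      pixel = 0.3 * library["vegetation"] + 0.01 * rng.standard_normal(100)

      best = min(library, key=lambda name: spectral_angle(pixel, library[name]))
      print(best)  # -> vegetation, despite the 0.3x illumination scaling
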
  2. Title to be confirmed

    Patrick Robert | Fellow Expert - Electronic Design of LYNRED

    Session details to follow

  3. Chair's closing remarks and close of conference day one

  4. Networking drinks reception and ISEU2020 Awards

TRACK B - IMAGE PROCESSING & LOWER-POWER SENSORS

TRACK B - IMAGE PROCESSING

  1. Fractional binning

    Jörg Kunze | Team Leader R&D New Technology of Basler

    New CMOS replacements for discontinued CCD sensors often differ in pixel size and thereby cause integration problems in existing applications. Adapting the pixel grid usually requires interpolation, and common interpolation methods create images with inhomogeneous pixel size and gain. This produces implausible EMVA 1288 results and may lead to visible artefacts. We present a novel image interpolation that fixes these problems by performing fractional binning in the digital domain; a minimal numerical sketch follows the list below.

    • Replacement sensors with different pixel size pose a problem. 
    • This problem can be solved by applying fractional binning in the digital domain.
    • This allows 1:1 camera replacements.
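
    For intuition, here is a minimal sketch of fractional binning as separable area-weighted resampling (an assumed textbook construction illustrating the idea, not Basler's published algorithm; the pixel pitches are illustrative):

      # Fractional binning as separable area-weighted resampling - an assumed
      # textbook construction illustrating the idea, not Basler's algorithm.
      import numpy as np

      def overlap_matrix(n_in, pitch_in, n_out, pitch_out):
          # W[j, i] = length of the overlap between output pixel j and input
          # pixel i on the common 1-D axis; rows are normalised so every
          # output pixel is a mean with identical total weight, which keeps
          # pixel "size" and gain homogeneous across the image.
          W = np.zeros((n_out, n_in))
          for j in range(n_out):
              lo, hi = j * pitch_out, (j + 1) * pitch_out
              for i in range(n_in):
                  a, b = i * pitch_in, (i + 1) * pitch_in
                  W[j, i] = max(0.0, min(hi, b) - max(lo, a))
          return W / W.sum(axis=1, keepdims=True)

      # Resample a 3.45 um CMOS grid onto the coarser 5.86 um grid of a
      # discontinued CCD (pitches are illustrative assumptions).
      img = np.random.default_rng(0).random((480, 640))
      Wr = overlap_matrix(480, 3.45, int(480 * 3.45 / 5.86), 5.86)
      Wc = overlap_matrix(640, 3.45, int(640 * 3.45 / 5.86), 5.86)
      out = Wr @ img @ Wc.T  # separable: rows, then columns
      print(img.shape, "->", out.shape)  # (480, 640) -> (282, 376)
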
  2. Image processing pipeline for lens aberration artefact correction, noise reduction and de-mosaicking

    Ljubomir Jovanov | Researcher of Ghent Uni/imec

    Due to the rapid development of CMOS sensor technology, the resolution of imaging sensors is constantly increasing. While this trend is favourable for many applications, such as surveillance, industrial inspection, medical imaging and security, it also poses numerous challenges for camera hardware and optics. To keep pace with the increase in sensor resolution, lenses have to be built with higher precision, which also increases the physical size and price of the lens. The use of larger lenses is not always physically possible, especially in portable camera systems, which leads to unavoidable chromatic aberration artefacts. Moreover, due to the reduced pixel size, the amount of light each pixel receives is significantly smaller, which increases the level of noise dramatically. Finally, the vast majority of cameras today use a single imaging sensor with a spatial colour multiplexing array. This reduces complexity and price compared to a three-sensor camera, but necessitates more complex pixel processing algorithms.

    We propose a complete video processing chain for correcting lens aberration artefacts, noise reduction and de-mosaicking. The proposed solution is implemented on a GPU and is capable of processing Ultra HD video streams at 30 fps, while offering the full flexibility of a software implementation.
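
    To ground one stage of such a chain, here is a minimal bilinear de-mosaicking baseline for an RGGB Bayer mosaic (a standard textbook stage, far simpler than the proposed GPU pipeline):

      # Bilinear de-mosaicking of an RGGB Bayer mosaic - the standard
      # textbook baseline for the de-mosaicking stage described above.
      import numpy as np
      from scipy.ndimage import convolve

      def demosaic_bilinear(raw):
          # raw: (H, W) Bayer mosaic with RGGB phase -> (H, W, 3) RGB.
          H, W = raw.shape
          r = np.zeros((H, W)); r[0::2, 0::2] = 1  # red sample sites
          b = np.zeros((H, W)); b[1::2, 1::2] = 1  # blue sample sites
          g = 1 - r - b                            # green quincunx sites

          # Bilinear kernels: green sits on a quincunx grid; red/blue on
          # rectangular grids with twice the pitch.
          k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
          k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

          rgb = np.empty((H, W, 3))
          rgb[..., 0] = convolve(raw * r, k_rb, mode="mirror")
          rgb[..., 1] = convolve(raw * g, k_g, mode="mirror")
          rgb[..., 2] = convolve(raw * b, k_rb, mode="mirror")
          return rgb

      raw = np.random.default_rng(0).random((480, 640))
      print(demosaic_bilinear(raw).shape)  # (480, 640, 3)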

  3. AI-in-Imager charge-based neural networks open up new event detection, image enhancement and high-speed imaging possibilities

    David Schie | CEO of AIStorm

    For the first time, CIS structures can be used directly, or in hybrid configurations, as the building blocks of neural networks. AI-in-Imager charge-based neural networks open new possibilities for image enhancement and high-speed processing. In this talk we will discuss the technology behind these networks, as well as potential features enabled by it. We will discuss the limitations, especially regarding controllers, memory and mobile AI models, and examine results from silicon.

TRACK B - LOWER-POWER SENSORS

  1. Simultaneous imaging and energy harvesting in CMOS image sensor pixels

    Sung-Yun Park | Assistant Professor and Assistant Research Scientist of Pusan National University & University of Michigan

    We present a prototype CMOS active pixel that is capable of simultaneous imaging and energy harvesting without introducing additional in-plane p-n junctions. The prototype pixel uses a vertical p+/nwell/psub junction that is available in standard CMOS processes. Unlike conventional CMOS electron-based imaging pixels, where the nwell region is used as a sensing node for image capture, we adopted a hole-based imaging technique, while exploiting the nwell region for energy harvesting at a high fill factor of >94%. To verify feasibility, CMOS image sensors were fabricated and characterized. We successfully demonstrated that energy harvesting can be achieved with a power density of 998 pW/klux/mm², while capturing images at 74.67 pJ/pixel. The fabricated prototype device achieves the highest power density among recent state-of-the-art works and can self-sustain its image capture at 15 fps without external power sources above ~60 klux of illumination.
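
    To unpack the quoted figures of merit, here is a quick unit conversion into operating-point numbers (illustrative only; the actual power budget depends on array size and readout details not given in the abstract):

      # Converting the quoted figures of merit into operating-point numbers.
      harvest_density = 998e-12     # W / klux / mm^2  (quoted power density)
      energy_per_pixel = 74.67e-12  # J / pixel / frame (quoted imaging energy)
      fps = 15.0                    # quoted self-sustained frame rate
      illuminance = 60.0            # klux, the quoted break-even illumination

      print(f"harvested: {harvest_density * illuminance * 1e9:.1f} nW per mm^2 at 60 klux")
      print(f"imaging:   {energy_per_pixel * fps * 1e9:.2f} nW per pixel at 15 fps")
      # ~59.9 nW/mm^2 harvested vs ~1.12 nW spent per pixel; where the balance
      # lands depends on how many pixels each mm^2 of harvester must power.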

  2. Real-time & power-efficient AI close to the sensor

    Ramses Valvekens | Managing Director of easics

    How to develop small, low-power and affordable AI engines that run close to your sensors?

    Designers search for embedded AI solutions that integrate tightly with sensors such as image sensors, LiDAR, Time-of-Flight, ... A flexible framework is used to automatically generate hardware implementations of deep neural networks. This scalable AI engine for FPGA and ASIC is ready for the future. A talk for image sensor manufacturers that add AI to their products.

  3. Chair's closing remarks and close of conference day one

  4. Networking drinks reception and ISEU2020 Awards