
Agenda

Registration and welcome refreshments
SESSION 1: MARKET TRENDS
What in the heck is happening with image sensor product technology?
While image sensors seem well established and stable from a marketing viewpoint, much is going on in the technology arena that differentiates design and manufacturing. These include shifts in technology direction: from pixel scaling to added functionality, from organic microlenses and colour filters to metastructures, from the visible spectrum to shorter and longer wavelengths, and from computational image sensor processing to the quest to incorporate neural networks where image sensors live at the "edge of the cloud". In this talk these areas will be explored (or exposed) by reaching into the reverse engineering imagery and data collected by TechInsights from the latest image sensor products. The goal will be to place them in the context of past trends and to provide a "crystal ball" view of upcoming decision points for the technology.
Dan McGrath | Senior Technical Fellow, Image Sensor, TechInsights
CMOS image sensors industry: new growth cycle
The CMOS image sensor (CIS) industry has entered a renewed growth phase after a challenging 2023. In 2024, revenues rose by 6.4% year-on-year to reach $23.2 billion, driven by the smartphone rebound and expanding automotive and security applications. The market recovery has been supported by domestic demand in China for smartphones and consumer products, while rising competition in areas such as automotive has added further momentum to the industry. On the technology side, the industry is moving beyond pixel scaling toward performance, integration, and versatility. Stacked architectures now account for most production, and progress in 3D sensing, event-based imaging, and metasurface optics is enabling more compact and capable sensors for diverse environments. The presentation will cover CIS market and technology trends, ecosystem dynamics, and new application opportunities.
Anas Chalak | Market & Technology Analyst – Imaging, Yole Group
SESSION 2: TECHNOLOGY BREAKTHROUGHS – BEYOND RGB
The INs and OUTs of stitching
Stitching is a technique used to make large-area image sensors, but stitching comes with a price. We will discuss:
• A general overview of the history of stitching
• Stitching with 1 reticle or 2 reticles per layer
• Why is stitching so expensive?
• Why are fabs hesitant to allow stitching in their (CIS) processes?
• What about stitching in combination with stacking?
Albert Theuwissen | Founder, Harvest Imaging
Networking break
Low-cost SWIR image sensor technology and market status
A number of compelling technologies are competing to establish the low-cost SWIR image sensor market with decent volumes: colloidal quantum dots, organics, Ge-on-Si and low-cost InGaAs. We will review the maturity of these image sensor technologies in terms of performance, reliability and scale-up prospects. We will discuss the markets that are being targeted with their challenges, which will provide a perspective on the path ahead for these technologies.
Andras Pattantyus-Abraham | Founder, Best QD Solutions LLC
Latest updates on colour splitting for ultra-sensitive image sensors
For more than fifty years, the basic method behind colour imaging has changed very little. Nearly all CMOS image sensors still rely on the Bayer colour filter array, a grid of red, green, and blue filters that allows silicon sensors to capture colour. In this mosaic arrangement, each pixel is covered by a single red, green, or blue filter, meaning only about one-third of the incoming light contributes to the pixel’s signal. Consequently, close to seventy percent of photons are lost before they reach the photodetector. This loss reduces signal-to-noise ratio (SNR), a long-standing limitation for users of smartphones and compact cameras. A new phase in imaging is now emerging. Nanophotonic colour splitting technology replaces filtering with a system that guides light through sub-diffraction-limited waveguides. This approach removes the inefficiencies of Bayer-based sensors and opens the way for ultra-compact, high-resolution, and light-efficient cameras in smartphones, XR devices, industrial inspection systems, and medical diagnostics. The presentation will cover this innovation, which replaces the Bayer filter with a nanophotonic waveguide layer that splits light by wavelength and directs it precisely to each pixel. Instead of absorbing unused wavelengths, the design employs vertical waveguides to separate colours, channelling photons with minimal loss and maximum resolution.
Jeroen Hoet | CEO & Co-founder, eyeo
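The SNR argument above can be illustrated with back-of-the-envelope shot-noise arithmetic. Only the roughly one-third transmission factor comes from the abstract; the photon counts below are hypothetical placeholders:

```python
import math

def shot_noise_snr(photons: float) -> float:
    """Shot-noise-limited SNR: photon arrival is Poisson, so noise = sqrt(signal)."""
    return math.sqrt(photons)

# Illustrative photon budget for one pixel in one exposure.
bayer_snr    = shot_noise_snr(1_000)      # ~1/3 of incident light passes a Bayer filter
splitter_snr = shot_noise_snr(3 * 1_000)  # colour splitting routes nearly all light

print(round(splitter_snr / bayer_snr, 2))  # sqrt(3), i.e. ~1.73x SNR improvement
```

Under this idealised model, recovering the lost two-thirds of photons buys a factor of √3 in SNR, regardless of the absolute photon count.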
Neural ISP for the small-pixel era: pushing past noise, crosstalk, optics, and Quad/Hex CFA limits on edge devices
As pixel pitches shrink and optics get thinner, image quality is increasingly constrained by shot-noise-limited capture, pixel cross-talk, microlens effects, diffraction/aberrations, and the complexities of Quad/Hex and other modern CFAs. This talk explores how an end-to-end Neural ISP can replace or augment multiple classical pipeline stages to restore detail and color fidelity directly from RAW, while remaining compatible with camera engineering validation. We’ll share practical lessons on robustness, evaluation, and edge deployment realities (latency, memory, quantization, and HDR modes), and close with a forward look at sensor–optics–ISP co-design and the tradeoffs between hardware complexity, cost, and computation.
Tom Bishop, Ph.D. | Founder & CTO, Glass Imaging
Networking Lunch
Pixel power: image & depth sensors for the AI revolution
• AR glasses & AI glasses – system architecture, sensor requirements, pixel requirements, compute requirements, camera/imaging requirements
• ISP & SoC: needs for the future
• How are IMUs, cameras & sensors used for AI?
• How does depth sensing fit into the future of AI & AR?
Harish Venkataraman | Principal Engineer, Camera & Depth Systems, Reality Labs, Meta Inc.
Optimizing machine vision through advanced on-chip image sensor and SoC synergy
Vision systems are currently driving a significant technological shift, enabling a rapidly expanding and still emerging set of applications across domains such as humanoid robots, delivery drones, biometric payment, access control, and smart building automation. This ongoing transformation is challenging traditional design paradigms, demanding higher levels of integration, stringent power efficiency, and optimized processing architectures. This presentation examines the technical synergy between advanced on-chip image sensor processing and SoC computation as a key approach to meet these evolving requirements. Key innovations include dual shutter pixel technology that integrates both global and rolling shutter operation, providing flexible frame capture strategies to mitigate motion artifacts while supporting high dynamic range imaging. The sensor architecture incorporates on-chip RGB-IR separation combined with integrated bayerization, IR depollution, and smart upscale algorithms, enabling simultaneous full-resolution RGB and near-infrared imaging optimized for a wide range of illumination conditions, including HDR outdoor scenarios. By offloading computational tasks such as colour separation and preprocessing into the sensor, the data throughput and processing load on the SoC are significantly reduced, enabling the use of lower-power microcontroller devices without compromising system performance. The presentation will also highlight typical implementation case studies across representative embedded vision scenarios, demonstrating how this sensor-SoC synergy supports efficient, scalable, and adaptable machine vision solutions.
Marie-Charlotte Leclerc | Product Marketing, Industrial, Security & IoT Imaging, STMicroelectronics
Future directions in factory automation image sensors
In recent years, industrial automation has been advancing at a rapid pace, making a wide variety of sensing technologies increasingly important. Learn about Sony's contribution to solving these challenges by utilising its sensing technology, including global shutter sensors, SWIR, UV, and EVS.
Roberto Buttaci | Head of Industrial Sensors, Sony Semiconductor Solutions Europe
Networking break
SESSION 3: CONNECTIVITY, PACKAGING, SUPPLY CHAIN
The MIPI A-PHY connectivity standard: the backbone for next-gen sensing
Sensing is rapidly evolving, with autonomous systems demanding increased sensor resolutions, faster refresh rates, and wider colour depths, all of which drives up the bandwidth requirements of connectivity technology. MIPI A-PHY is the first industry standard to deliver multi-gigabit, long-reach connectivity purpose-built for automotive, and it is also used for machine vision, endoscopy, and professional audio-video. This session explores how A-PHY enables direct, low-latency connections from cameras, radar, and LiDAR sensors, natively extending the widely used MIPI CSI-2 protocol. We'll discuss the key benefits for OEMs: increased resilience to electromagnetic interference (EMI), future-proof scalability, and the interoperability that comes from a multi-vendor ecosystem. Lastly, we'll review how the A-PHY standard enables innovations in sensor technology by allowing vendors to integrate A-PHY directly into the sensor, leading to smaller, lower-power, lower-cost vision systems.
Arno Distel | Field Application Engineer, Valens Semiconductor
Large body molded image sensor package
Image sensors are proliferating in key application areas such as automotive ADAS and user experience, industrial automation and machine vision, and security/biometric identification. Traditionally, ceramic packages have been used to house image sensors, as these give good optical, thermal and reliability performance. However, such ceramic packages come at a high cost, so packaging structures migrated to laminate-based packages such as iBGA to provide more attractive price points and help increase the adoption of image sensors in new application spaces. As the market now demands higher-resolution sensors, image sensor die sizes are increasing, resulting in the need for larger package sizes that approach the reliability and mass-production efficiency limits of traditional iBGA packages. A new large-body molded image sensor package has been developed to address larger image sensor package sizes with better reliability performance, whilst also offering productivity improvements in high-volume manufacturing operations.
Alastair Attard | Director of Business Development - MEMS & Sensing Applications, UTAC Group
Panel discussion
• What if supply from Taiwan breaks away tomorrow? - Is it an existential risk to sensor and camera OEMs? What are the alternatives? What initiatives are underway to reduce dependency?
• What if China shuts down the supply of rare earths? - Is it an existential risk to sensor and camera OEMs? What are the alternatives? What initiatives are underway to reduce dependency?
• What can be done to optimise the cost of manufacturing? - Is the impact of raw materials significant? Special focus on SWIR sensors.
• How can sensor and camera makers support foundries in maximizing utilisation / cost reduction? Can "out-of-blemish-spec" sensors be put to use?
• How loyal are sensor OEMs to their foundries? Is there enough competition between foundries?
 
Dr. Ronald Müller | Associate Consultant, Smithers & CEO, Vision Markets
Chair’s closing remarks and end of day one
Networking drinks reception
Networking Drinks Reception - The Last Talisman, 171 Bermondsey Street, London, SE1 3UW
At the end of day one of the conference, come together for unrivalled networking with your fellow industry peers over drinks and canapés. The drinks reception will run until 8:30pm, leaving plenty of time for discussion and the opportunity to then extend your evening in the city for dinner.
Registration and morning refreshments
SESSION 4: INDUSTRIAL APPLICATIONS
Advancing global-shutter imaging for mobile and industrial applications: switchable shutter, pixel/area-parallel ADCs, and 3-stack bonding
The presentation introduces Samsung's latest developments in global-shutter imaging. We first present a hybrid-shutter sensor that combines rolling- and global-shutter modes to overcome the fundamental limitations of conventional global-shutter architectures. We then introduce a digital pixel sensor (DPS) with pixel-parallel ADCs, followed by an area-parallel ADC approach that places one ADC per 2×2 pixel block to mitigate pixel-size constraints. Finally, we highlight an advanced 3-stack bonding technology that enables the integration of these new features into a compact and scalable sensor platform.
Dr. Min-Woong Seo | Principal Engineer & Head of Advanced Sensor Design Group, Samsung Electronics
100,000 fps 1024×1024 gated RGB SPAD image sensor with configurable resolution
We present a 1024×1024 pixel high-speed gated imaging system based on Single-Photon Avalanche Diode (SPAD) technology, available in monochrome and RGB versions. The sensor delivers 100,000 fps in 1-bit mode at full resolution and up to 3 million fps at 512×64, enabling ultrafast transient capture under low-light conditions where conventional sensors may lack sensitivity or SNR. Integrated micro-lenses boost effective fill factor to 75%, improving photon collection without sacrificing spatial uniformity. The system also supports 4, 8, and 10-bit modes at 185 kfps, 11 kfps, and 3 kfps respectively, allowing users to trade temporal resolution for dynamic range and throughput. The reconfigurable readout combines data along X and Y and streams through high-speed LVDS to an FPGA. Because bandwidth is constant, frame rate increases as resolution is reduced. The Y-axis can be resized in 4-pixel increments, while the X-axis provides full/half/quarter modes. This asymmetric flexibility can be exploited in cases where motion is mainly directional (e.g., crash tests or ballistic impacts). Finally, a synchronized gating module enables user-defined exposure windows aligned with pulsed illumination, supporting time-resolved and correlation-based imaging. The system is a powerful tool that establishes SPADs as a high-performance solution for scientific, industrial, and biomedical ultrafast imaging.
Augusto Carimatto | Head of IC Design, Pi Imaging Technology
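The constant-bandwidth scaling described above can be sketched in a few lines. The bandwidth constant below is derived from the quoted 100,000 fps at 1024×1024 in 1-bit mode (not from a datasheet), and the sketch applies only to 1-bit resolution scaling; the computed ~3.2 Mfps at 512×64 is slightly above the quoted 3 Mfps, with readout overheads plausibly accounting for the gap:

```python
# Readout bandwidth implied by 100,000 fps at 1024x1024, 1-bit mode (illustrative).
BANDWIDTH_BPS = 100_000 * 1024 * 1024 * 1  # ~1.05e11 bits/s

def max_fps(width: int, height: int, bits: int) -> float:
    """Frame rate achievable at a fixed readout bandwidth."""
    return BANDWIDTH_BPS / (width * height * bits)

print(int(max_fps(1024, 1024, 1)))  # 100000 at full resolution
print(int(max_fps(512, 64, 1)))     # 3200000 at reduced resolution
```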
 
Real-time camera simulation in simulated industrial machine-vision applications
The design-in of a camera into an industrial machine-vision system requires not only expert knowledge but often also time-consuming testing with multiple camera, lens, and lighting module variants. Recently, simulation technology has made great progress, and today digital twins are available for a broad variety of industrial equipment. Basler introduces digital twin technology for cameras, lenses and lighting modules, allowing for the quantitatively correct real-time co-simulation of the industrial machine-vision system with its respective application.
 
Dr. Jörg Kunze | Group Leader R&D & Patent Manager, Basler AG
SESSION 5: AUTOMOTIVE APPLICATIONS
Networking Break
Towards an efficient and manufacturable 1.8µm colour router for automotive applications
We provide a method for optimising a nanostructure-based colour router for an RGGB image sensor. Such a device can route light to an intended location according to its wavelength rather than filter it out, which makes it suitable for low-light conditions, such as in modern car vision systems. The colour routing property is based on multiple scattering. It can be achieved by a suitable combination of two materials with different refractive indices. We use SiO2 as the background material into which we embed TiO2 or Si3N4 nano-blocks with a lateral size of 100 nm. The positions of the nano-blocks are determined by a multi-objective genetic algorithm. We restrict potential designs to respect the inherent symmetries of the RGGB pattern. Figures of merit are defined for each pixel separately as the root mean square difference between the transmission values of the solution and an "ideal" transmission function. This approach covers optical efficiency performance as well as the low-crosstalk requirement. The optimisation result is a set of high-quality and manufacturable solutions from which one can choose based on additional requirements. With the presented approach, we can obtain designs with optical efficiency (OE) of every single colour between 0.5 and 0.6 for pixel sizes of 2.0 µm and 3.6 µm. However, for automotive applications green response is the most important. Our method can provide results with OE higher than 0.7 for the sum of the two green pixels with red and blue responses still outperforming the colour filter-based pixel.
Silvie Luisa Brázdilová | Software Engineer, onsemi
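The per-pixel figure of merit described above, an RMS difference between a candidate transmission spectrum and an "ideal" target, can be sketched as follows. The coarse wavelength grid, box-shaped ideal response, and candidate transmission values are hypothetical placeholders, not onsemi's actual data:

```python
import math

def fom_rms(transmission, ideal):
    """Root-mean-square difference between a candidate pixel's transmission
    spectrum and the ideal target; lower is better (0 = perfect match)."""
    assert len(transmission) == len(ideal)
    return math.sqrt(sum((t - i) ** 2 for t, i in zip(transmission, ideal))
                     / len(transmission))

# Hypothetical green pixel: the ideal passes the 500-600 nm band fully
# and blocks everything else.
wavelengths = range(400, 701, 50)                  # 400..700 nm, coarse grid
ideal_green = [1.0 if 500 <= w <= 600 else 0.0 for w in wavelengths]
candidate   = [0.1, 0.2, 0.7, 0.8, 0.6, 0.2, 0.1]  # simulated transmission
print(round(fom_rms(candidate, ideal_green), 3))   # 0.236
```

A multi-objective optimiser would minimise one such figure per pixel colour simultaneously, which is what yields a set of trade-off solutions rather than a single design.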
Adaptive-HDR intra-frame pixel architecture for global shutter CMOS image sensors
Achieving true high dynamic range (HDR) in global shutter CMOS image sensors remains a major challenge, especially as pixel sizes shrink and frame rates increase. Conventional solutions, such as multiple exposures, dual conversion gain, or stacked architectures, introduce trade-offs in noise, temporal resolution, and design complexity. In particular, these approaches often suffer from SNR discontinuities and artifacts that degrade image quality across the dynamic range. This work presents an Adaptive-HDR pixel architecture that performs real-time charge-domain modulation during a single exposure, enabling dynamic adaptation to incident light levels at the pixel level. The architecture is fully compatible with standard CMOS processes and preserves true global shutter operation without additional process steps. An 800×600 test image sensor has been fabricated to validate the concept. Measured results demonstrate an extended dynamic range beyond 120 dB, low noise, and a continuous SNR response without mid-range drop, while maintaining high-speed operation. This technology offers a scalable, process-agnostic path to next-generation HDR image sensors for automotive, robotics, and industrial vision applications.
Monica Vatteroni | CEO, EYE2DRIVE
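For context, sensor dynamic range in dB is conventionally 20·log10 of the ratio between the largest non-saturating signal and the noise floor. A quick sketch shows what the quoted 120 dB implies; the full-well and noise figures below are illustrative, not EYE2DRIVE's measured values:

```python
import math

def dynamic_range_db(max_signal_e: float, noise_floor_e: float) -> float:
    """Dynamic range in dB: ratio of the largest non-saturating signal
    to the noise floor, both expressed in electrons."""
    return 20 * math.log10(max_signal_e / noise_floor_e)

# A conventional pixel: 10 ke- full well, 2 e- read noise -> ~74 dB.
print(round(dynamic_range_db(10_000, 2), 1))     # 74.0
# 120 dB requires a 1,000,000:1 signal ratio, far beyond a single linear well,
# hence techniques such as in-pixel charge-domain modulation.
print(round(dynamic_range_db(2_000_000, 2), 1))  # 120.0
```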
Temperature dependent image sensor performance degradation
Automotive cameras are required to deliver reliable image quality across extreme environmental conditions, yet image sensor evaluation is still predominantly conducted at room temperature. This work presents a systematic investigation of Contrast Transfer Accuracy (CTA), as defined in IEEE 2020-2024, over a junction temperature range of –40 °C to +105 °C for multiple high dynamic range (HDR) sensor technologies, including single-pixel multi-exposure HDR sensors, split-pixel CMOS sensors, and single-pixel LOFIC sensors. Using calibrated Vega and Arcturus light sources, we generated luminance stimuli from 0.01 cd/m² to 1,000,000 cd/m², ensuring CTA results remain independent of input signal scaling and capture the impact of temperature-induced noise and artifacts. The results reveal a significant degradation in CTA under low-light and low-contrast conditions at elevated temperatures, as well as nonlinear performance losses in HDR transition regions where multiple readouts are merged. Comparative analysis across sensor types highlights distinct trade-offs: while all benefit from lower temperatures, their temperature sensitivities and failure modes differ markedly. These findings provide actionable insights for sensor developers, OEMs, and Tier 1s, and support the adoption of temperature-aware testing practices in alignment with IEEE P2020 objectives.
Max Gäde | Head of Image Quality Lab & Innovation, Image Engineering GmbH & Co. KG
Networking lunch
SESSION 6: SCIENTIFIC & DEFENCE APPLICATIONS
Detector needs for astronomy: promising detectors for ESO’s future needs, and the strategy towards curved CMOS
The next generation of instruments and telescopes for astronomy are getting increasingly demanding in terms of optical performance. In some cases, 400 detectors in one focal plane are considered in the design phase, resulting in a complex opto-mechanics, electronics, cooling, and software design. Recent developments in CMOS technology show improved performance of CMOS detectors in terms of noise level, getting very close to the performance of CCDs, while partially reducing the need for cooling and the complexity of the control electronics. Furthermore, curved detectors allow reducing the system complexity of astronomical instrument designs, while typically increasing their optical field of view and throughput. Besides that, detector curvature also reduces the optics and detector envelope and therefore the total mass, complexity, and cost of the system. This paper will present ESO's plans for selecting the most promising detectors for ESO's future needs, and the strategy towards curved CMOS.
Alessandro Meoli | Detector Engineer, European Organisation for Astronomical Research in the Southern Hemisphere (ESO)
Olaf Iwert | Detector Systems Department, European Organisation for Astronomical Research in the Southern Hemisphere (ESO)
GSENSE: quantum to cosmos - imaging without limits. High-speed, high-QE, low-noise, large-format CMOS sensors for next-generation science
The GSENSE family of scientific CMOS sensors represents a new era in imaging technology, delivering solutions that span the extremes of scale—from ultra-sensitive quantum imaging to massive formats for astronomy. At the forefront, GSENSE6502BSI achieves sub-electron noise combined with high-speed operation up to 300 fps, while a dedicated low-noise mode delivers 0.43 e⁻ rms at 100 fps, enabling photon counting and quantum imaging. Across the family, backside-illuminated architectures provide peak quantum efficiency up to 95%, and advanced HDR readout offers up to 91 dB dynamic range, ensuring exceptional performance for diverse scientific applications. Recent technology developments further expand these capabilities. Proprietary innovations include specialized coatings for wavelength-specific QE optimization, enhancing performance for soft X-ray, EUV, and DUV imaging, and advanced structural designs such as deep trench isolation for optimized backside-illuminated sensor behavior. Combined with large-area GSENSE sensors—scaling up to 92 × 99 mm for astronomy—the GSENSE family empowers researchers to capture everything from the smallest quantum level to the vastness of the cosmos, reinforcing Gpixel’s commitment to enabling next-generation imaging technologies for scientific discovery.
Gpixel | Speaker to be confirmed
Gaining an awareness advantage with real-time sensor to processing for defence applications
Millions of pieces of space debris smaller than 10 cm orbit the Earth, causing dozens of dangerous situations every day. Many efforts have been made to catalogue this large amount of space debris, most of which track debris from the ground using optical telescopes and/or active radar technologies. However, these approaches track debris from very large distances and require not only a large initial investment but also considerable computational resources. This presentation introduces a novel approach to space debris tracking and cataloguing: a black-and-white CMOS detector on a 3U CubeSat orbiting in the most populated orbits in LEO. A first stage of image and data processing will be performed on board, with the remainder performed at the ground station. Due to the high cost of launching the satellite, the missions envisaged in this project will follow the "new space" approach in terms of simplicity and efficiency: high processing power will be required both on board and at the ground station to automate this system as much as possible. The result will be a scalable in-situ solution for space traffic monitoring and cataloguing. Hosted payloads will be added to the mission to increase efficiency in terms of payload used per kilogram launched.
Ed Goffin | VP Product Marketing, Pleora Technologies Inc.
Networking break
Vision frontiers in 2026: farming, robots, drones and endoscopes
The global imaging technology market is at a turning point. Traditional applications are becoming increasingly commoditised, while multiple new sectors with growth potential are emerging. We present growth drivers and market penetration rates for these four sectors and will explain the technological requirements, proven solutions, and remaining technical challenges. Attendees will gain a competitive edge by understanding which imaging solutions meet actual market needs and where innovation gaps exist. This presentation bridges the gap between market opportunities and technical realities: essential information for OEMs, sensor developers, and system integrators operating in these four dynamic markets.
Bernd Hofmann | Senior Application Engineer, Macnica ATD Europe
SESSION 7: THE QUANTUM DOT BUBBLE
Panel: Quantum Dots: is there a bubble … ready to burst?
• Quantum dots are not a new technology for imagers. We regularly have presentations on QDs at this conference, and new companies are entering the field at pace. Is there a big enough market for all of these companies?
• From a performance point of view, what is preventing QD from becoming an established technology? Leakage current, stability, quantum efficiency, blemishes, …?
• Is there a killer application that will bring QD to high volume production?
• Is high performance lead-free QD the Holy Grail of the field?
• Geopolitics, supply chains and sovereignty

Panellists include: Dr Parth Vashishtha, Product Development Manager, Quantum Science Ltd
Renato Turchetta | CEO, IMASENIC
Chair's closing remarks and end of conference