H04N23/16

Optical system, imaging device, and imaging system

An optical system includes: a lens group having an optical axis, a focal length for a first light, and a focal length for a second light; and a light splitter disposed on a rear side of the lens group that splits the first light and the second light incident from the lens group, guiding the first light onto a first imaging position and the second light onto a second imaging position. The lens group includes lens elements that transmit the first light and the second light so as to match the first imaging position with the focal length of the first light and match the second imaging position, separate from the first imaging position, with the focal length of the second light. The lens elements of the lens group are provided on the front side of the light splitter, with no lens element provided on the rear side of the light splitter.

PHOTODIODE WITH CONTROLLED DIFFRACTION

An image sensor pixel is disclosed. The sensor pixel includes a photodiode and a diffraction structure. The photodiode includes an avalanche region, and may generate an initial charge carrier using a particular photon received on a first side of the photodiode, and generate an avalanche current in response to a generation, by the initial charge carrier via impact ionization, of multiple additional charge carriers in the avalanche region. The diffraction structure is coupled to a second side of the photodiode opposite the first side, and is configured to reflect a given photon that has passed through the photodiode without generating a corresponding charge carrier, back into the avalanche region.

Systems and Methods for Estimating Depth and Visibility from a Reference Viewpoint for Pixels in a Set of Images Captured from Different Viewpoints

Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
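The disparity-to-depth relationship underlying the parallax correction above can be sketched as follows. This is a minimal illustration, not the patented method: it assumes a simple two-camera geometry with a known baseline and focal length (parameters not specified in the abstract), where depth Z = f·B/d for disparity d.

```python
import numpy as np

def depth_from_disparity(disparity, baseline_m, focal_px):
    """Convert a pixel-disparity map to a depth map (meters).

    Assumes a rectified pair of cameras separated by `baseline_m`
    with focal length `focal_px` in pixels: Z = focal * baseline / d.
    Zero or negative disparities map to infinite depth.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example: 8 px disparity, 2 cm baseline, 1000 px focal length -> 2.5 m
z = depth_from_disparity(np.array([[8.0]]), baseline_m=0.02, focal_px=1000.0)
```

In an array camera, per-pixel depth estimates like these would then drive the scene-dependent geometric shifts applied before super-resolution fusion.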

ENDOSCOPE HAVING SIMULTANEOUS MULTI-MODAL IMAGING
20250254405 · 2025-08-07

An imaging system includes an endoscope tube, an illumination system, first and second image sensors, and a controller. The illumination system is coupled to the endoscope tube and configured to emit first illumination light having a first wavelength profile and excitation light having an excitation wavelength profile outside of the first wavelength profile. The first image sensor is aligned with a first filter configured to pass first image light, received in response to the first illumination light, to the first image sensor and to block the excitation light. The second image sensor is aligned with a second filter configured to pass fluorescence light, emitted in response to the excitation light, to the second image sensor. The controller includes logic to simultaneously illuminate a scene with the first illumination light and the excitation light and capture first image data and fluorescence image data with the first and second image sensors.

DIGITAL CAMERAS WITH DIRECT LUMINANCE AND CHROMINANCE DETECTION

A digital camera system and method for improving low-light performance. The system includes a plurality of digital cameras, each comprising a luminance channel configured to directly detect luminance signals, one or more chrominance channels configured to detect red and blue chrominance signals, and an optical assembly with lenses optimized for light transmission. A processor is configured to combine luminance and chrominance signals from the cameras to generate image data, independently adjust the integration times for each camera based on the image data to form optimized image data, and transmit the optimized image data for use in digital imaging systems, such as those in automobiles. The invention also encompasses a method and a non-transitory computer-readable medium for performing these operations, providing an efficient solution for capturing high-quality images in low-light environments.
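Combining a directly detected luminance channel with red and blue chrominance channels can be sketched as a standard color-space reconstruction. The BT.601-style conversion matrix below is an assumption for illustration; the abstract does not specify which conversion the cameras use.

```python
import numpy as np

def combine_luma_chroma(y, cb, cr):
    """Reconstruct RGB from a luminance channel and blue/red
    chrominance channels, using BT.601 coefficients (an assumed
    conversion; inputs in [0, 1], cb/cr centered at 0.5).
    """
    r = y + 1.402 * (cr - 0.5)
    g = y - 0.344136 * (cb - 0.5) - 0.714136 * (cr - 0.5)
    b = y + 1.772 * (cb - 0.5)
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

# Neutral input: chroma at midpoint yields an achromatic gray
rgb = combine_luma_chroma(np.array([0.5]), np.array([0.5]), np.array([0.5]))
```

In the described system, the luminance channel carries most of the low-light signal, so the processor can weight or denoise the chrominance channels independently before this combination step.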

Method and apparatus for learning human pose estimation in low-light conditions

Provided is an apparatus for learning human pose estimation that builds a dataset by simultaneously capturing a well-lit image and a low-light image of the same scene, annotating the well-lit image, and transferring the annotation to the low-light image. Using the well-lit image in the dataset as the input of a teacher model and the low-light image as the input of a student model, the student model learns to estimate human pose with high accuracy in low-light conditions by exploiting the privileged information of the teacher model.
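A teacher-student training objective of this kind can be sketched as a weighted sum of a supervised term (against the transferred annotations) and a privileged-information term (against the teacher's predictions). This is an illustrative sketch only: the MSE-on-heatmaps formulation and the `alpha` balance weight are hypothetical choices, not details from the abstract.

```python
import numpy as np

def distillation_loss(student_pred, teacher_pred, gt_heatmaps, alpha=0.5):
    """Combined loss for the low-light student network.

    student_pred : student keypoint heatmaps from the low-light image
    teacher_pred : teacher heatmaps from the paired well-lit image
                   (treated as fixed targets, i.e. no gradient flows
                   back into the teacher)
    gt_heatmaps  : annotations transferred from the well-lit image
    alpha        : hypothetical weight balancing the two terms
    """
    supervised = np.mean((student_pred - gt_heatmaps) ** 2)
    privileged = np.mean((student_pred - teacher_pred) ** 2)
    return (1.0 - alpha) * supervised + alpha * privileged
```

Because the two images are captured simultaneously, the teacher's well-lit predictions are geometrically aligned with the low-light input, which is what makes the privileged term a meaningful target.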