H04N23/16

Configurable platform

An image sensor assembly includes at least one upconverter configured to detect light in a near-infrared (NIR) waveband that is received from an object to be imaged and generate, based on the detected light, upconverted light that is outside of the NIR waveband; and at least one image sensor configured to detect the upconverted light.

In-Situ Composite Focal Plane Array (CFPA) Imaging System Geometric Calibration
20240155263 · 2024-05-09

The present disclosure is directed to composite focal plane array (CFPA) imaging systems and techniques for calibrating such imaging systems. An unmanned aerial vehicle has a CFPA imaging system including a plurality of lens assemblies, a plurality of focal plane array (FPA) sensors disposed on a planar substrate, and an image processing module. A first processing node of the module receives overlapping image data from the sensors and generates an update for a sensor calibration model based on key points in the overlapping image data. A plurality of other processing nodes receives image data from the sensors. The sensor calibration model is applied to correct the image data, thereby compiling a composite image.
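
As a rough, hedged illustration of the in-situ update described above, the Python sketch below fits a least-squares affine correction from matched key points in the overlap between two FPA tiles and applies it to pixel coordinates. The affine (rather than fully projective) model, the function names, and the toy data are assumptions for illustration, not the patented calibration procedure.

    import numpy as np

    def estimate_affine_update(src_pts, dst_pts):
        """Least-squares 2D affine fit mapping key points in one sensor's overlap
        region onto the matching key points of its neighbour.
        src_pts, dst_pts: (N, 2) arrays of matched key-point coordinates."""
        n = src_pts.shape[0]
        A = np.hstack([src_pts, np.ones((n, 1))])              # (N, 3) homogeneous coords
        params, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)   # (3, 2) solution
        return params.T                                        # 2x3 affine matrix

    def apply_calibration(points, affine):
        """Warp pixel coordinates with the current calibration model."""
        pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
        return pts_h @ affine.T

    # Toy example: an overlap whose key points differ by a small rotation and shift.
    rng = np.random.default_rng(0)
    src = rng.uniform(0, 100, size=(20, 2))
    true_affine = np.array([[0.999, -0.010, 2.5],
                            [0.010,  0.999, -1.2]])
    dst = apply_calibration(src, true_affine) + rng.normal(0, 0.05, size=(20, 2))
    print(np.round(estimate_affine_update(src, dst), 3))       # close to true_affine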

Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints

Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
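
A simplified sketch of the kind of disparity search and confidence map the abstract describes is given below: a brute-force one-dimensional disparity search between a reference view and a single alternate view, with confidence taken as the margin between the best and second-best matching costs. The single-channel absolute-difference cost and all names are assumptions; the patented method compares pixel similarity across multiple cameras and spectral channels.

    import numpy as np

    def estimate_depth_and_confidence(ref, alt, max_disp, focal, baseline):
        """Brute-force 1-D disparity search between a reference view and one
        alternate view, returning depth and a simple confidence map."""
        h, w = ref.shape
        costs = np.full((max_disp + 1, h, w), np.inf)
        for d in range(max_disp + 1):
            shifted = np.roll(alt, d, axis=1)                   # candidate disparity d
            costs[d, :, d:] = np.abs(ref - shifted)[:, d:]      # ignore wrapped columns
        disp = costs.argmin(axis=0).astype(float)
        sorted_costs = np.sort(costs, axis=0)
        margin = sorted_costs[1] - sorted_costs[0]              # larger margin -> more reliable
        margin[~np.isfinite(margin)] = 0.0
        depth = np.where(disp > 0, focal * baseline / np.maximum(disp, 1e-6), np.inf)
        confidence = margin / (margin.max() + 1e-6)
        return depth, confidence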

Robot vision in autonomous underwater vehicles using the color shift in underwater imaging

A robot vision system for generating a 3D point cloud of a surrounding environment through comparison of unfiltered and filtered images of the surrounding environment. A filtered image is captured using a camera filter which tends to pass certain wavelength bandwidths while mitigating the passage of other bandwidths. A processor receives the unfiltered and filtered images, pixel matches the unfiltered and filtered images, and determines an image distance for each pixel based on comparing the color coordinates determined for that pixel in the unfiltered and filtered images. The image distances determined provide a relative distance from the digital camera to an object or object portion captured by each pixel, and the relative magnitudes of all image distances determined for all pixels in the unfiltered and filtered images allow generation of a 3D point cloud representing the object captured in those images.
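
A minimal sketch of the color-shift ranging idea, assuming the unfiltered and filtered frames are already pixel-matched and in the same color space: the per-pixel image distance is taken here as the Euclidean distance between color coordinates, normalized to a relative range and emitted as an (x, y, relative distance) point cloud. The function and variable names are illustrative, not taken from the patent.

    import numpy as np

    def color_shift_point_cloud(unfiltered, filtered):
        """Per-pixel colour-shift distance between pixel-matched frames,
        mapped to a relative range and returned as an (N, 3) point cloud
        of (column, row, relative_distance)."""
        diff = unfiltered.astype(float) - filtered.astype(float)
        image_distance = np.linalg.norm(diff, axis=2)           # per-pixel colour shift
        rel = image_distance / (image_distance.max() + 1e-9)    # relative magnitude, 0..1
        rows, cols = np.indices(rel.shape)
        return np.stack([cols.ravel(), rows.ravel(), rel.ravel()], axis=1)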

Capturing and processing of images including occlusions focused on an image sensor by a lens stack array

Systems and methods for implementing array cameras configured to perform super-resolution processing to generate higher resolution super-resolved images using a plurality of captured images, and lens stack arrays that can be utilized in array cameras, are disclosed. An imaging device in accordance with one embodiment of the invention includes at least one imager array, where each imager in the array comprises a plurality of light sensing elements and a lens stack including at least one lens surface configured to form an image on the light sensing elements; control circuitry configured to capture images formed on the light sensing elements of each of the imagers; and a super-resolution processing module configured to generate at least one higher resolution super-resolved image using a plurality of the captured images.
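
As a hedged example of generating a higher-resolution image from a plurality of captured images, the sketch below performs a basic shift-and-add: each low-resolution frame is accumulated onto a finer grid using known sub-pixel shifts and the result is normalised. The shifts would in practice come from registration of the captured images; the names and nearest-neighbour placement are assumptions, and real array-camera super-resolution is considerably more involved.

    import numpy as np

    def shift_and_add_superres(frames, shifts, scale):
        """Accumulate low-resolution frames onto a 'scale'-times-larger grid
        using their (dy, dx) sub-pixel shifts, then normalise by the number
        of samples landing in each high-resolution cell."""
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        weight = np.zeros_like(acc)
        ys, xs = np.indices((h, w))
        for frame, (dy, dx) in zip(frames, shifts):
            hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
            hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
            np.add.at(acc, (hy, hx), frame)                     # nearest HR cell
            np.add.at(weight, (hy, hx), 1.0)
        return acc / np.maximum(weight, 1.0)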

Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures

Imager arrays, array camera modules, and array cameras in accordance with embodiments of the invention utilize pixel apertures to control the amount of aliasing present in captured images of a scene. One embodiment includes a plurality of focal planes, control circuitry configured to control the capture of image information by the pixels within the focal planes, and sampling circuitry configured to convert pixel outputs into digital pixel data. In addition, the pixels in the plurality of focal planes include a pixel stack including a microlens and an active area, where light incident on the surface of the microlens is focused onto the active area by the microlens and the active area samples the incident light to capture image information. The pixel stack defines a pixel area and includes a pixel aperture, where the size of the pixel aperture is smaller than the pixel area.
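
The effect the abstract attributes to the pixel aperture can be illustrated with a small one-dimensional simulation: a pixel integrates incident light over its aperture before the grid sub-samples it, so an aperture narrower than the pixel area averages less and retains more high-frequency (aliased) content. The following sketch is only a signal-processing analogy under that assumption, not the patented pixel stack.

    import numpy as np

    def sample_with_aperture(signal, pitch, aperture):
        """Sample a finely discretised 1-D signal at pixel 'pitch' after
        averaging over an 'aperture' no wider than the pitch."""
        half = max(aperture // 2, 1)
        samples = []
        for centre in range(half, len(signal) - half, pitch):
            samples.append(signal[centre - half:centre + half].mean())
        return np.array(samples)

    # High-frequency test pattern sampled on a coarse pixel grid.
    x = np.arange(4096)
    pattern = np.sin(2 * np.pi * x / 9.0)                       # period of ~9 fine samples
    narrow = sample_with_aperture(pattern, pitch=16, aperture=4)
    full = sample_with_aperture(pattern, pitch=16, aperture=16)
    print(narrow.std(), full.std())   # narrow aperture keeps more (aliased) signal energy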

Systems and methods for generating a digital image using separate color and intensity data
10375369 · 2019-08-06

A system, method, and computer program product for generating a digital image is disclosed. The system includes a first image sensor configured to capture a first image that includes a plurality of chrominance values, a second image sensor configured to capture a second image that includes a plurality of luminance values, and an image processing subsystem configured to generate a resulting image by combining the plurality of chrominance values with the plurality of luminance values. The first image sensor and the second image sensor may be distinct image sensors optimized for capturing chrominance images or luminance images.
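
A minimal sketch of combining separate chrominance and luminance data, assuming both images are registered, share the same resolution, and lie in the [0, 1] range: the luminance of the colour sensor's image is replaced by the dedicated luminance sensor's data via a BT.601 YCbCr round trip. The colour matrix and the function name are assumptions for illustration, not taken from the patent.

    import numpy as np

    def fuse_chroma_luma(chroma_rgb, luma):
        """Keep the chrominance (Cb, Cr) of the colour image but substitute the
        luminance channel captured by the second sensor, then convert to RGB."""
        r, g, b = chroma_rgb[..., 0], chroma_rgb[..., 1], chroma_rgb[..., 2]
        cb = -0.168736 * r - 0.331264 * g + 0.5 * b             # BT.601 chrominance
        cr = 0.5 * r - 0.418688 * g - 0.081312 * b
        y = luma                                                # luminance from second sensor
        fused = np.stack([
            y + 1.402 * cr,                                     # R
            y - 0.344136 * cb - 0.714136 * cr,                  # G
            y + 1.772 * cb,                                     # B
        ], axis=-1)
        return np.clip(fused, 0.0, 1.0)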

Night sky spatial orientation using color and surface fusion

A method of operating an optical system is described. An image is detected. A digital image signal based on a spatially varying luminance level of the detected image is received. Horizon data, separate from the digital image signal and indicating the position of a horizon to which the sky is adjacent, is received. The position of the horizon is determined based on the received horizon data. A fused image signal is provided based on the received digital image signal and the determined position of the horizon, in which a sky region indicative of the sky is provided with an enhanced color fused with the spatially varying luminance level of the detected image. A full color display displays a fused image based on the fused image signal. A corresponding optical system is also described.
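
As a hedged illustration of the fusion step, the sketch below tints the region above a given horizon row with an enhanced sky colour while preserving the spatially varying luminance of the detected image. The row-wise sky mask, the default sky colour, and the function name are illustrative assumptions; in the described system the horizon position comes from horizon data separate from the image signal.

    import numpy as np

    def fuse_sky_color(luma, horizon_row, sky_rgb=(0.35, 0.55, 1.0)):
        """Fuse an enhanced sky colour with the luminance image above the
        horizon; below the horizon the image stays luminance-only."""
        luma = np.asarray(luma, dtype=float)
        fused = np.repeat(luma[..., None], 3, axis=2)           # grey full-colour base
        sky = np.zeros(luma.shape, dtype=bool)
        sky[:horizon_row, :] = True                             # sky region is above the horizon
        for channel, gain in enumerate(sky_rgb):
            fused[..., channel] = np.where(sky, luma * gain, fused[..., channel])
        return np.clip(fused, 0.0, 1.0)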