H04N5/355

IMAGE SENSING DEVICE
20220329745 · 2022-10-13 ·

An image sensing device is provided to comprise: a pixel array of pixels that are operable to sense light to produce pixel signals and are operable to operate in one of a plurality of modes in sensing of light, wherein a first pixel of the pixel array is controlled to operate in a mode selected from the plurality of modes and configured to output a pixel signal in response to light incident onto the first pixel; and an analog-to-digital converter (ADC) coupled to the pixel array to receive the pixel signal from the first pixel and configured to set, based on the mode selected for the first pixel in generating the pixel signal, an input range indicating a voltage range of the pixel signal, and to perform an analog-to-digital conversion of the pixel signal generated by the first pixel to produce pixel data representing the pixel signal based on the input range of the analog-to-digital converter (ADC).
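The mode-dependent input range can be illustrated with a minimal Python sketch. The voltage ranges, mode names, and bit depth below are hypothetical placeholders, not values from the abstract; the point is only that a narrower input range preserves code resolution for a mode whose signal swing is smaller:

```python
def quantize(pixel_voltage, mode, bits=10):
    """Digitize a pixel signal with a mode-dependent ADC input range.

    The input ranges are illustrative assumptions: a pixel read out
    in a high-gain mode swings over a narrower voltage range, so a
    tighter input range spends all the digital codes on that swing.
    """
    input_ranges = {  # volts; hypothetical values
        "high_gain": (0.0, 0.5),
        "low_gain": (0.0, 2.0),
    }
    lo, hi = input_ranges[mode]
    # Clip the signal to the selected input range, then map it
    # linearly onto the 2**bits available digital codes.
    clipped = min(max(pixel_voltage, lo), hi)
    return round((clipped - lo) / (hi - lo) * (2**bits - 1))
```

The same 0.25 V signal yields code 512 in the (assumed) high-gain range but only 128 in the low-gain range, which is the resolution trade-off the mode-based range selection addresses.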

Solid-state imaging device and method of operating the same, and electronic apparatus and method of operating the same

A solid-state imaging device includes a plurality of pixels in a two-dimensional array. Each pixel includes a photoelectric conversion element that converts incident light into electric charge, and a charge holding element that receives the electric charge from the photoelectric conversion element, and transfers the electric charge to a corresponding floating diffusion. The charge holding element further includes a plurality of electrodes.

IMAGE SENSING PIXELS WITH LATERAL OVERFLOW STORAGE
20220337774 · 2022-10-20 ·

An image sensor includes sensing pixels, each comprising a photodetector in electrical communication with a floating diffusion capacitor via a transfer gate, and a lateral overflow storage capacitor coupled to the floating diffusion capacitor via a lateral overflow control gate. A first readout circuitry, located between the transfer gate and the lateral overflow control gate, comprises a first amplifier. A second readout circuitry, located opposite the lateral overflow control gate from the first readout circuitry, comprises a second amplifier. Following image integration, charge stored on the floating diffusion capacitor is read out using the first readout circuitry and charge stored on the lateral overflow storage capacitor is read out using the second readout circuitry. In a second readout, charge stored on the photodetector is read out using the first readout circuitry with a first amplification applied and charge stored on the photodetector is read out with a second, different amplification applied.
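How the two readout paths might be combined off-sensor can be sketched as follows. This is a hypothetical linearization, not the patented circuit's arithmetic: the signal levels, saturation threshold, and gain ratio are made up, and a real pipeline would also handle offsets and noise weighting:

```python
def merge_overflow_readout(fd_signal, lofic_signal, fd_full_scale):
    """Combine floating-diffusion and lateral-overflow readouts.

    Assumption: once the FD readout reaches full scale, the excess
    photocharge has spilled into the overflow capacitor, so the total
    signal is the sum of both readouts.
    """
    if fd_signal < fd_full_scale:
        return fd_signal              # no overflow occurred
    return fd_signal + lofic_signal   # bright pixel: add spilled charge


def merge_dual_gain(high_gain_dn, low_gain_dn, gain_ratio, sat_level):
    """Merge two readouts of the same photocharge taken with different
    amplifications (second readout in the abstract). The high-gain
    sample is preferred for its lower noise; once it saturates, the
    low-gain sample is rescaled by the (assumed known) gain ratio."""
    if high_gain_dn < sat_level:
        return high_gain_dn
    return low_gain_dn * gain_ratio
```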

IMAGE SENSOR INCLUDING DRAM CAPACITOR AND OPERATING METHOD THEREOF
20220337777 · 2022-10-20 ·

An image sensor includes a pixel array having a plurality of pixels; a row driver providing the pixel array with a boosting signal; and a read-out circuit configured to read out pixel signals output from pixels of a row line selected by the row driver. Each of the plurality of pixels includes: a first photodiode; a transmission transistor connected to the first photodiode; a first floating diffusion node, a second floating diffusion node, and a third floating diffusion node, which are connected to the transmission transistor to accumulate charges generated by the first photodiode; an LCG capacitor connected to the third floating diffusion node to accumulate the charges generated by the first photodiode; an MCG transistor connected between the first floating diffusion node and the second floating diffusion node; and an LCG transistor connected to the third floating diffusion node.

Staggered high-dynamic-range image capture with luminance-driven upsampling of pixel-binned image sensor array output

Techniques are described for efficient staggered high-dynamic-range (HDR) output of an image captured using a high-pixel-count image sensor based on pixel binning followed by luminance-guided upsampling. For example, an image sensor array is configured according to a red-green-blue-luminance (RGBL) CFA pattern, such that at least 50 percent of the imaging pixels of the array are luminance (L) pixels. In each image capture time window, multiple (e.g., three) luminance-enhanced (LE) component images are generated. Each LE component image is generated by exposing the image sensor to incident illumination for a respective amount of time, using pixel binning during readout to generate appreciably downsampled color and luminance capture frames, generating an upsampled luminance guide frame from the luminance capture frame, and using the upsampled luminance guide frame to guide upsampling (e.g., and remosaicking) of the color capture frame. The resulting LE component images can be digitally combined to generate an HDR output image.
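The luminance-guided upsampling step can be sketched minimally in Python. This is an assumed reconstruction, not the patented method: it replicates each 2x-binned color sample to its 2x2 footprint and modulates it by the ratio of the full-resolution luminance to the mean luminance over that footprint, so fine detail present in the guide reappears in the color plane:

```python
def guided_upsample(color_lowres, luma_fullres):
    """Upsample a 2x-binned color plane using a full-resolution
    luminance guide (hypothetical sketch of luminance-guided
    upsampling). `color_lowres` is h x w; `luma_fullres` is 2h x 2w.
    """
    h, w = len(color_lowres), len(color_lowres[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for i in range(h):
        for j in range(w):
            # Mean guide luminance over the 2x2 block this sample covers.
            block = [luma_fullres[2 * i + di][2 * j + dj]
                     for di in (0, 1) for dj in (0, 1)]
            mean_luma = sum(block) / 4.0
            for di in (0, 1):
                for dj in (0, 1):
                    # Modulate the replicated color sample by the local
                    # luminance ratio (falling back to 1 in flat darkness).
                    ratio = (luma_fullres[2 * i + di][2 * j + dj] / mean_luma
                             if mean_luma else 1.0)
                    out[2 * i + di][2 * j + dj] = color_lowres[i][j] * ratio
    return out
```

For a single binned sample of 12 over a guide block [1, 1, 1, 3], the mean guide is 1.5, so the bright guide pixel pulls its output up to 24 while the others drop to 8, preserving the edge the binning had averaged away.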

IMAGING DEVICE
20220321816 · 2022-10-06 ·

An imaging device includes a first substrate, a second substrate, a third substrate, and a switching unit. The first substrate has a pixel including a photodiode and a floating diffusion that holds the charge converted by the photodiode. The second substrate has a pixel circuit that reads out a pixel signal based on the charge held in the floating diffusion in the pixel, and is stacked on the first substrate. The third substrate has a processing circuit that detects a pixel signal read out by the pixel circuit, and is stacked on the second substrate. The switching unit is provided on the second substrate to enable electrical connection between the floating diffusion and a floating diffusion of another pixel in the first substrate. As a result, by switching the capacitance of the floating diffusion of the pixel using the floating diffusion of another pixel, it is possible to switch between charge-voltage conversion efficiency levels.
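The conversion-efficiency switch follows from the charge-to-voltage relation V = Q/C: connecting a neighbor's floating diffusion adds its capacitance and lowers the gain. A small sketch with illustrative capacitance values (not values from the abstract):

```python
E_CHARGE = 1.602e-19  # coulombs per electron


def conversion_gain_uV_per_e(c_fd_fF, c_neighbor_fF=0.0, connected=False):
    """Charge-to-voltage conversion gain of a floating diffusion, in
    microvolts per electron. Closing the switching unit adds the
    neighboring pixel's FD capacitance, lowering the gain for bright
    scenes. Capacitances (femtofarads) are hypothetical.
    """
    c_total_farads = (c_fd_fF + (c_neighbor_fF if connected else 0.0)) * 1e-15
    # One electron of charge produces E_CHARGE / C volts; report in uV.
    return E_CHARGE / c_total_farads * 1e6
```

With an assumed 1 fF FD, the gain is about 160 uV/e; connecting an equal neighboring FD halves it, trading sensitivity for full-well capacity.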

Sensor system, image processing apparatus, image processing method, and program

A sensor system includes a sensor array and a gradation determination section. The sensor array includes a first sensor and a second sensor. The first sensor is configured to detect, with a first sensitivity, a variation in a quantity of light at a first pixel address. The second sensor is configured to detect, with a second sensitivity that is lower than the first sensitivity, a variation in a quantity of light at a second pixel address that is adjacent to or coincides with the first pixel address. The gradation determination section is configured to determine, when the first sensor generates a first event signal in response to a luminance variation event, a gradation of an object having caused the luminance variation event to occur, depending on whether or not the second sensor generates a second event signal in response to the luminance variation event.
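The gradation decision reduces to a small truth table, sketched below under assumed labels (the return strings are placeholders, not terminology from the abstract): the high-sensitivity sensor gates whether an event exists at all, and the co-located low-sensitivity sensor's response indicates how strong the luminance change was.

```python
def classify_gradation(first_event, second_event):
    """Hypothetical sketch of the gradation determination.

    first_event:  True if the high-sensitivity sensor fired.
    second_event: True if the low-sensitivity sensor also fired.
    Only a change strong enough to trip the less sensitive sensor
    is classified as high gradation.
    """
    if not first_event:
        return "no event"
    return "high gradation" if second_event else "low gradation"
```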

Imaging device including shared pixels and operating method thereof

An operating method is provided for an imaging device comprising a plurality of shared pixels that share a floating diffusion node, each comprising sub-pixels covered by a micro-lens. The method involves generating a capture image from the plurality of shared pixels that receive light reflected from an object; compensating for the capture image using static phase information based on misalignment of the micro-lens of each of the plurality of shared pixels; performing auto exposure control based on the compensation of the capture image; performing auto focus control based on the compensated capture image; and generating an output image by processing the compensated capture image.

ELECTRONICALLY CONTROLLED GRADUATED DENSITY FILTERS IN STACKED IMAGE SENSORS
20170374302 · 2017-12-28 ·

In a digital camera having an imaging array including a plurality of pixels arranged in rows and columns, the digital camera having a mechanical shutter, a method for performing neutral density filtering of images captured by the imaging array, the method comprising opening the mechanical shutter, operating each row in the array by resetting all of the pixel sensors in the row, starting exposure for all of the pixel sensors in the row, closing the mechanical shutter, and reading pixel values from the pixels in the array after the mechanical shutter has closed, at a time unrelated to a time at which any pixel-select signal was de-asserted, wherein the interval of time between starting exposure for all of the pixel sensors in the row and closing the mechanical shutter for each row is a function of a neutral density filter function applied to an image to be captured.
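The per-row exposure scheduling can be sketched as below. The gradient function and the stops-based scaling are assumptions for illustration; the abstract only specifies that each row's exposure interval is some function of the desired neutral density profile:

```python
def row_exposure_times(num_rows, base_exposure_s, nd_stops_at_row):
    """Per-row exposure intervals emulating a graduated ND filter
    electronically (hypothetical sketch). `nd_stops_at_row(r)` gives
    the assumed filter density in photographic stops at row r; each
    stop halves that row's time between exposure start and shutter
    close."""
    return [base_exposure_s * 2.0 ** (-nd_stops_at_row(r))
            for r in range(num_rows)]


# Example: a top-to-bottom gradient darkening the top rows (e.g. sky)
# by up to 2 stops, with a 10 ms base exposure.
times = row_exposure_times(4, 0.01, lambda r: 2.0 * (3 - r) / 3)
```

The top row then integrates for 2.5 ms and the bottom row for the full 10 ms, reproducing the falloff of a physical graduated filter without one.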

METHODS AND APPARATUS FOR REDUCING SPATIAL FLICKER ARTIFACTS

Various embodiments of the present technology may comprise a method and apparatus for reducing spatial flicker artifacts in high dynamic range images produced from multiple image captures with differing integration times. The method and apparatus may comprise, in the image captures, measuring pixel intensity, selecting pixels based on predetermined intensity thresholds, calculating a ratio based on a channel selection, normalizing the calculated ratio to an ideal ratio, and applying the normalized ratio to corresponding image pixels in a second image capture to produce a flicker-free image.
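The ratio-normalization step above can be sketched as follows. This is a simplified reconstruction under assumed inputs (flat intensity lists, a single global gain, a made-up intensity band), not the claimed implementation, which operates per channel:

```python
def flicker_correction_gain(long_px, short_px, lo, hi, ideal_ratio):
    """Hypothetical sketch of ratio-based flicker reduction.

    Measures the long/short intensity ratio over pixels whose long
    value lies inside a reliable band [lo, hi], then normalizes it to
    the ideal exposure ratio. Multiplying the short capture by the
    returned gain restores long/short = ideal_ratio, removing the
    mismatch that flickering illumination introduced.
    """
    ratios = [l / s for l, s in zip(long_px, short_px)
              if lo <= l <= hi and s > 0]
    measured = sum(ratios) / len(ratios)
    return measured / ideal_ratio
```

For example, if the exposure ratio is nominally 16 but flicker left the measured ratio at 8, the gain is 0.5: halving the short capture makes the two captures consistent before HDR merging.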