Patent classifications
H04N25/40
Dynamic vision sensor and image processing device including the same
A dynamic vision sensor may include a pixel array including at least a first photoreceptor and a second photoreceptor, the first photoreceptor and the second photoreceptor including at least one first pixel and at least one second pixel, respectively, the at least one first pixel and the at least one second pixel configured to generate at least one first photocurrent and at least one second photocurrent in response to incident light, respectively, and the first photoreceptor and the second photoreceptor configured to generate first and second log voltages based on the at least one first photocurrent and the at least one second photocurrent, respectively; and processing circuitry configured to amplify the first and second log voltages, detect a change in intensity of the light based on the amplified first log voltage, the amplified second log voltage, and a reference voltage, and output an event signal corresponding to the detected change.
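The change-detection step described above can be sketched in software: a photoreceptor produces a log-compressed voltage from its photocurrent, and an ON/OFF event is emitted when the voltage departs from a stored reference by more than a threshold. All names, the gain, and the threshold are illustrative assumptions, not values from the patent.

```python
import math

def log_voltage(photocurrent, gain=1.0):
    """Logarithmic photoreceptor response (arbitrary units)."""
    return gain * math.log(photocurrent)

def detect_event(prev_voltage, photocurrent, threshold=0.2, gain=1.0):
    """Compare the new log voltage against the stored reference.
    Returns ('ON' | 'OFF' | None, updated_reference_voltage)."""
    v = log_voltage(photocurrent, gain)
    delta = v - prev_voltage
    if delta > threshold:
        return "ON", v            # intensity increased past the reference
    if delta < -threshold:
        return "OFF", v           # intensity decreased past the reference
    return None, prev_voltage     # no event; reference unchanged

ref = log_voltage(100.0)
event, ref = detect_event(ref, 150.0)
print(event)  # ON: log(150/100) ~ 0.405 exceeds the 0.2 threshold
```

Because the comparison happens in the log domain, the event condition depends on relative (percentage) intensity change, which is characteristic of dynamic vision sensors.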
Solid-state image sensor, imaging device, and method of controlling solid-state image sensor
To improve image quality of image data in a solid-state image sensor that detects an address event. The solid-state image sensor includes a photodiode, a pixel signal generation unit, and a detection unit. In the solid-state image sensor, the photodiode generates electrons and holes by photoelectric conversion. The pixel signal generation unit generates a pixel signal having a voltage according to an amount of one of the electrons and the holes. The detection unit detects whether or not a change amount in the other of the electrons and the holes has exceeded a predetermined threshold and outputs a detection signal.
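The dual-path pixel described above can be illustrated with a toy model: one carrier type (here, holes) drives a conventional pixel signal, while a change in the other carrier type (electrons) past a threshold drives the detection signal. The conversion gain, counts, and threshold are invented for illustration.

```python
CONVERSION_GAIN = 0.01  # assumed volts per hole, purely illustrative

def pixel_signal(holes):
    """Pixel signal: a voltage according to the accumulated hole count."""
    return holes * CONVERSION_GAIN

def detection_signal(prev_electrons, electrons, threshold=50):
    """Address-event detection: True when the change in electron count
    exceeds the predetermined threshold."""
    return abs(electrons - prev_electrons) > threshold

print(pixel_signal(1000))          # 10.0 (arbitrary voltage units)
print(detection_signal(400, 480))  # True: change of 80 exceeds 50
```

Splitting the two carrier types lets the event-detection path run without disturbing the charge used for the image signal, which is the image-quality benefit the abstract claims.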
Camera agnostic core monitor incorporating projected images with high spatial frequency
A camera agnostic core monitor for an enhanced flight vision system (EFVS) is disclosed. In embodiments, a structured light projector (SLP) generates and projects a precise geometric pattern or other like artifact, which is reflected by collimating elements into the EFVS optical path. Within the optical path, the EFVS focal plane array is illuminated by, and detects, the projected artifacts within the scene imagery captured for display by the EFVS. Image processors assess the presentation of the detected artifacts (e.g., position/orientation relative to the expected presentation of the detected artifact within the scene imagery) to verify that the displayed EFVS imagery is not misleading.
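The verification step can be sketched as a tolerance check: the detected position and orientation of each projected artifact is compared against its expected presentation, and the imagery is flagged only if every artifact matches. The tolerances and the tuple layout are assumptions for illustration.

```python
def artifact_within_tolerance(detected, expected,
                              pos_tol_px=2.0, angle_tol_deg=1.0):
    """detected/expected are (x_px, y_px, angle_deg) tuples."""
    dx = detected[0] - expected[0]
    dy = detected[1] - expected[1]
    pos_error = (dx * dx + dy * dy) ** 0.5
    angle_error = abs(detected[2] - expected[2])
    return pos_error <= pos_tol_px and angle_error <= angle_tol_deg

def imagery_is_trustworthy(pairs):
    """Every detected artifact must match its expected presentation;
    otherwise the displayed imagery may be misleading."""
    return all(artifact_within_tolerance(d, e) for d, e in pairs)

ok = imagery_is_trustworthy([((100.5, 200.2, 0.4), (100.0, 200.0, 0.0))])
print(ok)  # True: within 2 px and 1 degree of the expected presentation
```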
Obtaining image data of an object in a scene
A method and processor system are provided which analyze a depth map, which may be obtained from a range sensor capturing depth information of a scene, to identify where an object is located in the scene. Accordingly, a region of interest may be identified in the scene which includes the object, and image data may be selectively obtained of the region of interest, rather than of the entire scene containing the object. This image data may be acquired by an image sensor configured for capturing visible light information of the scene. By only selectively obtaining the image data within the region of interest, rather than all of the image data, improvements may be realized in the computational complexity of a possible further processing of the image data, the storage of the image data and/or the transmission of the image data.
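A minimal sketch of the method above, assuming the depth map is a 2-D list of distances in metres: pixels nearer than a cutoff are taken to belong to the object, and their bounding box becomes the region of interest for selective capture by the image sensor. The cutoff and the bounding-box heuristic are illustrative assumptions.

```python
def region_of_interest(depth_map, near_cutoff_m=2.0):
    """Return (row_min, row_max, col_min, col_max) bounding all pixels
    closer than the cutoff, or None if no such pixel exists."""
    rows = [r for r, row in enumerate(depth_map)
            if any(d < near_cutoff_m for d in row)]
    cols = [c for row in depth_map
            for c, d in enumerate(row) if d < near_cutoff_m]
    if not rows:
        return None
    return min(rows), max(rows), min(cols), max(cols)

depth = [
    [9.0, 9.0, 9.0, 9.0],
    [9.0, 1.5, 1.2, 9.0],
    [9.0, 1.4, 9.0, 9.0],
]
print(region_of_interest(depth))  # (1, 2, 1, 2): the near object's box
```

Only the returned window would then be read from the visible-light sensor, which is where the claimed savings in computation, storage, and transmission come from.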
Optical imaging lens group
The present disclosure discloses an optical imaging lens group including, sequentially from an object side to an image side along an optical axis, a first lens, a second lens, a third lens, a fourth lens, a fifth lens, a sixth lens and a seventh lens having refractive power. The first lens has positive refractive power, a convex object-side surface, and a concave image-side surface. A total effective focal length f of the optical imaging lens group and half of a maximal field-of-view Semi-FOV of the optical imaging lens group satisfy: f*tan(Semi-FOV)>7.5 mm. A distance TTL along the optical axis from the object-side surface of the first lens to an imaging plane of the optical imaging lens group and half of a diagonal length ImgH of an effective pixel area on the imaging plane of the optical imaging lens group satisfy: TTL/ImgH<1.3.
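The two claimed design conditions are easy to check numerically for a candidate design. The numbers below are invented examples, not figures from the patent.

```python
import math

def satisfies_constraints(f_mm, semi_fov_deg, ttl_mm, imgh_mm):
    """Check the abstract's two conditions for a candidate lens group."""
    c1 = f_mm * math.tan(math.radians(semi_fov_deg)) > 7.5  # f*tan(Semi-FOV) > 7.5 mm
    c2 = ttl_mm / imgh_mm < 1.3                             # TTL/ImgH < 1.3
    return c1 and c2

# Hypothetical design: f = 6.0 mm, Semi-FOV = 55 deg, TTL = 7.0 mm, ImgH = 6.0 mm
print(satisfies_constraints(6.0, 55.0, 7.0, 6.0))  # True: 8.57 mm > 7.5 and 1.17 < 1.3
```

The first condition favours a large image height (wide field with long focal length); the second keeps total track length short relative to the image circle, i.e. a compact module.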
PHOTOELECTRIC CONVERSION APPARATUS, IMAGE CAPTURING APPARATUS, EQUIPMENT, AND METHOD OF DRIVING PHOTOELECTRIC CONVERSION APPARATUS
A photoelectric conversion apparatus includes a driving unit and a plurality of pixels. The pixel includes a first photoelectric conversion unit, a second photoelectric conversion unit, a charge-voltage conversion unit, a first transfer transistor, a second transfer transistor, a reset transistor, a microlens configured to condense incident light to the first photoelectric conversion unit and the second photoelectric conversion unit, and an output unit. The driving unit performs a first operation including a first reset operation and a first readout operation, and a second operation including a second reset operation and a second readout operation.
SENSOR WITH LOW POWER SYNCHRONOUS READOUT
Various implementations disclosed herein include devices, systems, and methods that buffer events in device memory during synchronous readout of a plurality of frames by a sensor. Various implementations disclosed herein include devices, systems, and methods that disable a sensor communication link until the buffered events are sufficient for transmission by the sensor. In some implementations, the sensor using a synchronous readout may select a readout mode for one or more frames based on how many of the pixels are detecting events. In some implementations, a first mode that reads out only data for a low percentage of pixels that have events uses the device memory, and a second mode bypasses the device memory based on accumulation criteria such as a high percentage of pixels detecting events. In the second mode, less data per pixel may be read out.
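The mode-selection logic described above can be sketched as two simple decisions: pick the buffered sparse mode when few pixels are active, and keep the communication link disabled until a batch of buffered events is large enough to transmit. The thresholds and names are illustrative assumptions.

```python
SPARSE_THRESHOLD = 0.10   # assumed: below 10% active pixels -> buffered sparse mode
MIN_BUFFERED_EVENTS = 64  # assumed minimum batch size before enabling the link

def select_readout_mode(active_pixels, total_pixels):
    """First mode buffers sparse events in device memory; second mode
    bypasses the buffer for dense frames (with less data per pixel)."""
    fraction = active_pixels / total_pixels
    return "buffered_sparse" if fraction < SPARSE_THRESHOLD else "dense_bypass"

def link_enabled(buffered_events):
    """Keep the communication link disabled until a batch is ready,
    saving power on mostly-idle frames."""
    return buffered_events >= MIN_BUFFERED_EVENTS

print(select_readout_mode(50, 1024))   # buffered_sparse (~4.9% active)
print(select_readout_mode(600, 1024))  # dense_bypass (~58.6% active)
print(link_enabled(10))                # False: keep buffering
```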
Imaging Method and System Based on Wise-pixels with Valved Modulation
This disclosure presents a novel smart CMOS imaging sensor and the methods and system for imaging an object using the smart CMOS imaging sensor. A CMOS-implemented 3D imaging system comprises an imaging sensor containing wise-pixels and a scanning light point or beam, and achieves 3D shape reconstruction by recording the response of each wise-pixel to the incident light over the period of "valve modulation". A "valve modulation" is a one-time process of accumulating and releasing charge, and a frame period comprises multiple valve modulations. Within the frame period, each wise-pixel repeats the process of temporarily storing the light intensity and then releasing it, while selecting a preferred intensity (e.g., the globally maximum intensity, the locally maximum intensities, and/or the intensities above a certain threshold) over the whole frame period; the selected intensity and the corresponding time are exported to the computing units. The selection of the different preferred light intensities is implemented by memory-based, threshold-based, and difference-based approaches, respectively. The obtained maximum intensity and time information can be used to reconstruct 3D geometric information of the surface of the object scanned by the moving light source.
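The memory-based selection above can be sketched as a per-pixel peak hold over one frame period: each valve modulation yields one intensity sample, and the pixel remembers only the maximum intensity together with the time at which it occurred, which is the pair the 3-D reconstruction needs. The data layout and values are assumptions for illustration.

```python
def select_peak(modulations):
    """modulations: list of (time, intensity) pairs, one pair per valve
    modulation in the frame period. Returns (peak_time, peak_intensity)."""
    peak_time, peak_intensity = modulations[0]
    for t, i in modulations[1:]:
        if i > peak_intensity:    # memory-based: keep only the maximum seen
            peak_time, peak_intensity = t, i
    return peak_time, peak_intensity

# The scanning beam crosses this pixel near t = 3 (illustrative samples):
frame = [(0, 12), (1, 15), (2, 40), (3, 95), (4, 30), (5, 11)]
print(select_peak(frame))  # (3, 95)
```

Given the known beam position at each time step, the recorded peak time per pixel fixes a correspondence between the pixel and the beam, from which the surface geometry can be triangulated.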
BLOCK OPERATIONS FOR AN IMAGE PROCESSOR HAVING A TWO-DIMENSIONAL EXECUTION LANE ARRAY AND A TWO-DIMENSIONAL SHIFT REGISTER
A method is described that includes, on an image processor having a two-dimensional execution lane array and a two-dimensional shift register array, repeatedly shifting first content of multiple rows or columns of the two-dimensional shift register array and repeatedly executing at least one instruction between shifts that operates on the shifted first content and/or second content that is resident in respective locations of the two-dimensional shift register array that the shifted first content has been shifted into.
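A rough software analogue of the claimed operation, modelling the 2-D shift register as a list of rows: first content is repeatedly shifted (here, each row one position left with wrap-around), and between shifts an instruction runs at every location on the shifted content and the second content already resident there (here, an add). This is purely illustrative; the real hardware executes the per-location instruction across the execution-lane array in lockstep.

```python
def shift_rows_left(grid):
    """Shift every row one position left, wrapping the first element around."""
    return [row[1:] + row[:1] for row in grid]

def shift_and_accumulate(grid, shifts):
    """Repeatedly shift 'first content' and, between shifts, execute an
    add at every location against the resident 'second content'."""
    acc = [row[:] for row in grid]              # second content, resident
    shifted = grid
    for _ in range(shifts):
        shifted = shift_rows_left(shifted)      # shift the first content
        acc = [[a + s for a, s in zip(acc_row, sh_row)]  # instruction between shifts
               for acc_row, sh_row in zip(acc, shifted)]
    return acc

g = [[1, 2], [3, 4]]
print(shift_and_accumulate(g, 1))  # [[3, 3], [7, 7]]
```

Chaining shifts with per-location instructions in this way lets stencil-style operations (e.g. row sums, convolutions) run without ever addressing memory outside each lane's local register.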
SOLID-STATE IMAGING DEVICE
In a solid-state imaging device, a first substrate has a plurality of pixels and a plurality of first control signal lines. The plurality of first control signal lines are connected to the pixels of each row. A second substrate includes a plurality of second control signal lines and a control circuit. The arrangement of each of the plurality of second control signal lines on the second substrate corresponds to the arrangement of a corresponding one of the plurality of first control signal lines on the first substrate. A connection portion has a plurality of control connections and a plurality of readout connections. Each of the plurality of control connections is connected to one of the plurality of first control signal lines and a corresponding one of the plurality of second control signal lines.