H04N13/236

SOLID-STATE IMAGING DEVICE AND ELECTRONIC CAMERA
20230171517 · 2023-06-01

A solid-state imaging device includes a second image sensor having an organic photoelectric conversion film that transmits light of a specific wavelength, and a first image sensor that is stacked on the same semiconductor substrate as the second image sensor and receives the specific light after it has passed through the second image sensor. A pixel for focus detection is provided in either the second image sensor or the first image sensor, so an AF method can be realized independently of the imaging pixels.

Variable imaging arrangements and methods therefor

Various approaches to imaging involve selecting directional and spatial resolution. According to an example embodiment, images are computed using an imaging arrangement that facilitates selective control of the directional and spatial aspects of light detection and processing. Light passed through a main lens is directed to photosensors via a plurality of microlenses. The separation between the microlenses and the photosensors is set to trade off directional and spatial resolution in the recorded light data, thereby governing the refocusing power and/or image resolution of images computed from that data. In one implementation, the separation is varied between zero and one focal length of the microlenses to favor spatial and directional resolution respectively, with directional resolution, and hence refocusing power, increasing as the separation approaches one focal length.
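The computational step implied by this abstract, synthesizing a refocused image from the recorded light data, can be illustrated with a standard shift-and-add sketch (not taken from the patent; the 4-D light-field layout and the `alpha` refocus parameter are assumptions for illustration):

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a 4-D light field L[u, v, s, t].

    Each sub-aperture view (u, v) is shifted in proportion to its
    offset from the aperture centre and the refocus parameter alpha,
    then all views are averaged. alpha = 0 reproduces the image as
    captured; other values synthesize different focal planes.
    """
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

The number of usable `(u, v)` views, and hence the range of `alpha` that produces sharp output, corresponds to the directional resolution that the abstract says grows as the microlens-to-photosensor separation approaches one focal length.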

Multi-aperture ranging devices and methods

Embodiments of systems and methods for multi-aperture ranging are disclosed. An embodiment of a device includes a main lens configured to receive an image from its field of view, and a multi-aperture optical component whose optical elements are optically coupled to the main lens and configured to create a multi-aperture image set comprising a plurality of subaperture images, wherein at least one point in the field of view is captured by at least two of the subaperture images. The device further includes an array of sensing elements that produces signals from the image set, a readout integrated circuit (ROIC) configured to receive those signals, convert them to digital data, and output the digital data, and an image processing system, responsive to the digital data output from the ROIC, configured to generate disparity values corresponding to at least one point in common between the at least two subaperture images.
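Converting the disparity values produced by such a system into range follows ordinary stereo triangulation. A minimal sketch, assuming a pinhole model with focal length in pixels and a known baseline between the two subapertures (parameter names are illustrative, not from the patent):

```python
def disparity_to_range(disparity_px, focal_px, baseline_m):
    """Triangulate range from disparity between two subaperture views.

    disparity_px : pixel disparity of a common scene point
    focal_px     : effective focal length in pixels
    baseline_m   : separation between the two subapertures in metres
    Returns range in metres: z = f * b / d.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return focal_px * baseline_m / disparity_px
```

Note that in a multi-aperture component the baseline is the small inter-aperture spacing, so range precision falls off quickly with distance; this is why the abstract emphasizes that a point must be captured by at least two subaperture images.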

Enhanced optical detection and ranging

In an embodiment, a method includes, for each field of view of a plurality of fields of view forming a field of regard, positioning a rotating disk in a first position corresponding to a first section of a plurality of sections, each of which may have a different focal length. The method further includes receiving a first image representing a first field of view, analyzing the first image, adjusting a plurality of mirrors based on the analysis, positioning the rotating disk in a second position corresponding to a second section, and receiving a second image representing the first field of view captured while the rotating disk was in the second position. The method further includes generating a range image of the field of view using at least the first and second images, and determining a range to a target using the range image.
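One common way to combine two images of the same scene taken at different focal lengths into a range cue is a depth-from-focus comparison: measure local sharpness in each image and record, per pixel, which focal section rendered the scene sharper. A hedged sketch of that idea (the Laplacian focus measure is a conventional choice, not something the patent specifies):

```python
import numpy as np

def focus_measure(img):
    """Local sharpness via the squared discrete Laplacian
    (a standard depth-from-focus cue); wraps at the borders."""
    lap = (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)
           - 4.0 * img)
    return lap ** 2

def range_index_map(img_a, img_b):
    """Per-pixel index (0 or 1) of the image in which the scene is
    sharper: a coarse proxy for a range image built from two
    focal-length sections."""
    return (focus_measure(img_b) > focus_measure(img_a)).astype(int)
```

With more than two sections of the rotating disk, the same idea extends to an argmax over a stack of focus measures, giving a finer-grained range image.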

IMAGING DEVICE, IMAGING METHOD, AND IMAGE PROCESSING PROGRAM
20170237967 · 2017-08-17 · ·

The imaging device includes a multiple-property lens having a first area with a first property and a second area with a second, different property; an image sensor in which first light receiving elements 25A, each with a first microlens, and second light receiving elements 25B, each with a second microlens of a different image-forming magnification, are two-dimensionally arranged; and a crosstalk removal processing unit. The crosstalk removal processing unit removes the crosstalk component from a first crosstalk image acquired from the first light receiving elements 25A and from a second crosstalk image acquired from the second light receiving elements 25B, generating a first image and a second image corresponding to the first and second properties of the multiple-property lens, respectively.
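Crosstalk removal of this kind is commonly modelled as a per-pixel linear mixing of the two ideal images, undone by inverting a 2x2 mixing matrix. A minimal sketch under that assumption (the crosstalk ratios `a` and `b` and the mixing model are illustrative, not taken from the patent):

```python
import numpy as np

def remove_crosstalk(obs1, obs2, a, b):
    """Unmix two crosstalk images under a linear model:
        obs1 = (1 - a) * true1 + a * true2
        obs2 = b * true1 + (1 - b) * true2
    where a and b are the assumed leakage ratios between the two
    kinds of light receiving elements. Returns (true1, true2)."""
    M = np.array([[1.0 - a, a],
                  [b, 1.0 - b]])
    Minv = np.linalg.inv(M)
    true1 = Minv[0, 0] * obs1 + Minv[0, 1] * obs2
    true2 = Minv[1, 0] * obs1 + Minv[1, 1] * obs2
    return true1, true2
```

In practice the leakage ratios would be calibrated per sensor (and possibly per image region), but the inversion step has this same form.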

Plenoptic camera for mobile devices

A plenoptic camera for mobile devices is provided, having a main lens, a microlens array, an image sensor, and a first reflective element that reflects the light rays captured by the plenoptic camera before they arrive at the image sensor, folding the optical path of the captured light before it impinges on the image sensor. Additional reflective elements may be used to fold the light path further inside the camera. The reflective elements can be prisms, mirrors, or reflective surfaces of three-sided optical elements having two refractive surfaces that form a lens element of the main lens. Equipping mobile devices with this plenoptic camera greatly increases the achievable focal length while keeping the thickness of the mobile device within current constraints.

IMAGE RECORDING AND 3D INFORMATION ACQUISITION

Two or more images are taken while a focal sweep is performed. The exposure intensity is modulated during the focal sweep, and modulated differently for each image. This modulation watermarks depth information in the images: because the exposure differs during the sweep, depth is encoded differently in each image, and comparing the images allows a depth map to be calculated. A camera system has a lens, a sensor, means for performing a focal sweep, and means for modulating the exposure intensity during the focal sweep. The exposure intensity can be modulated by modulating a light source, by modulating the focal sweep itself, or by modulating the transparency of a transparent medium in the light path.
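The decoding step can be sketched under a deliberately idealized model (this model is an assumption for illustration, not the patent's method): if a scene point contributes light mainly at the instant it is in focus, an image captured with exposure ramped linearly over the sweep, divided by an image captured with constant exposure, yields a per-pixel ratio equal to the normalized focus position, i.e. depth.

```python
import numpy as np

def decode_depth(img_ramp, img_flat, z_min, z_max):
    """Recover a depth map from two focal-sweep images.

    Idealized model: img_flat ~ albedo (constant exposure over the
    sweep) and img_ramp ~ albedo * w(z), where the exposure weight
    w rises linearly from 0 at z_min to 1 at z_max. The per-pixel
    ratio then cancels the albedo and encodes depth directly.
    """
    ratio = img_ramp / np.maximum(img_flat, 1e-9)
    return z_min + np.clip(ratio, 0.0, 1.0) * (z_max - z_min)
```

Real sweeps blur the contribution over a range of focus positions, so a practical decoder would compare against a calibrated defocus model, but the ratio structure is the same.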

Methods and computer program products for calibrating stereo imaging systems by using a planar mirror
20170256042 · 2017-09-07

A method is provided for calibrating a stereo imaging system using at least one camera and a planar mirror. The method involves obtaining at least two images with the camera, each captured from a different camera position and containing a mirror view of the camera and a mirror view of an object, thereby obtaining multiple views of the object. The method further involves finding the center of the image of the camera in each of the images, obtaining a relative focal length of the camera, determining an aspect ratio in each of the images, determining the mirror plane equation in the coordinate system of the camera, defining an up-vector in the mirror's plane, selecting a reference point in the mirror's plane, determining the coordinate transformation from the coordinate system of the image-capturing camera into the mirror coordinate system, and determining a coordinate transformation.
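The geometric core of mirror-based calibration is the reflection that relates a real point to its mirror view once the mirror plane equation is known in the camera frame. A self-contained sketch of that transform (a standard Householder-style reflection; the function and its parameters are illustrative, not the patent's notation):

```python
import numpy as np

def mirror_reflect(points, n, d):
    """Reflect 3-D points across the plane n . x = d.

    points : (N, 3) array of points in the camera coordinate system
    n      : plane normal (normalized internally)
    d      : signed distance of the plane from the origin along n
    Returns the (N, 3) mirrored points: x' = x - 2 (n . x - d) n.
    This is the map relating a point to its mirror view once the
    mirror plane equation has been determined.
    """
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    pts = np.atleast_2d(np.asarray(points, dtype=float))
    return pts - 2.0 * ((pts @ n) - d)[:, None] * n
```

Applying the reflection twice returns the original points, and points lying on the mirror plane are fixed, which makes the transform easy to sanity-check during calibration.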