H04N13/232

Image processing apparatus and image processing method for aligning polarized images based on a depth map and acquiring a polarization characteristic using the aligned polarized images

A depth map generation unit generates a depth map from images obtained by picking up a subject at a plurality of viewpoint positions with an image pickup unit. On the basis of the depth map generated by the depth map generation unit, an alignment unit aligns polarized images obtained by the image pickup unit picking up the subject at the plurality of viewpoint positions through polarizing filters with different polarization directions at the different viewpoint positions. A polarization characteristic acquisition unit acquires a polarization characteristic of the subject from a desired viewpoint position by using the polarized images aligned by the alignment unit, thereby obtaining a high-precision polarization characteristic with little degradation in temporal or spatial resolution. This makes it possible to acquire the polarization characteristic of the subject at the desired viewpoint position.
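
The pipeline this abstract describes — depth-guided alignment of polarized views followed by a per-pixel polarization fit — can be illustrated in a few lines. The sketch below assumes rectified viewpoints with a known baseline and focal length and the standard three-parameter cosine model for intensity versus polarizer angle; all function and parameter names are illustrative, not taken from the patent.

```python
import numpy as np

def align_to_reference(image, depth, baseline, focal_len):
    """Warp an image taken from a side viewpoint onto the reference viewpoint
    using per-pixel disparity d = f * B / Z (rectified-pair assumption,
    nearest-neighbor resampling)."""
    h, w = depth.shape
    disparity = focal_len * baseline / np.maximum(depth, 1e-6)
    aligned = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        src_x = np.clip((xs + disparity[y]).astype(int), 0, w - 1)
        aligned[y] = image[y, src_x]
    return aligned

def fit_polarization_model(aligned_stack, angles_rad):
    """Per-pixel least-squares fit of I(theta) = a + b*cos(2*theta) + c*sin(2*theta)
    over images taken through differently oriented polarizers; returns the
    degree and angle of linear polarization."""
    A = np.stack([np.ones_like(angles_rad),
                  np.cos(2 * angles_rad),
                  np.sin(2 * angles_rad)], axis=1)        # (n_angles, 3)
    obs = aligned_stack.reshape(len(angles_rad), -1)       # (n_angles, n_pixels)
    (a, b, c), *_ = np.linalg.lstsq(A, obs, rcond=None)
    dolp = np.sqrt(b ** 2 + c ** 2) / np.maximum(a, 1e-6)  # degree of linear polarization
    aolp = 0.5 * np.arctan2(c, b)                          # angle of linear polarization
    shape = aligned_stack.shape[1:]
    return dolp.reshape(shape), aolp.reshape(shape)
```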

Imaging device and operating method thereof
11206346 · 2021-12-21

An imaging device including a pixel matrix and a processor is provided. The pixel matrix includes a plurality of phase detection pixels and a plurality of regular pixels. The processor performs autofocusing according to pixel data of the phase detection pixels, and determines an operating resolution of the regular pixels according to autofocused pixel data of the phase detection pixels, wherein the phase detection pixels are always-on pixels and the regular pixels are selectively turned on after the autofocusing is accomplished.
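
As a rough illustration of the two steps the abstract describes — phase-difference estimation from the phase-detection pixels, then choosing a readout resolution for the regular pixels — here is a minimal sketch. The SAD-based shift search and the detail-based threshold are assumptions made for illustration, not the patented method.

```python
import numpy as np

def phase_shift(left_pd, right_pd, max_shift=16):
    """Estimate the phase difference between left/right phase-detection
    signals by searching for the integer shift with the minimum sum of
    absolute differences (SAD)."""
    errors = [np.abs(left_pd[max_shift:-max_shift] -
                     np.roll(right_pd, s)[max_shift:-max_shift]).sum()
              for s in range(-max_shift, max_shift + 1)]
    return int(np.argmin(errors)) - max_shift

def choose_operating_resolution(pd_frame, detail_threshold=10.0):
    """Pick a readout resolution for the regular pixels from the autofocused
    phase-detection data: high local detail -> full resolution, otherwise a
    binned (lower-power) mode."""
    detail = np.abs(np.diff(pd_frame.astype(np.float64), axis=1)).mean()
    return "full" if detail > detail_threshold else "2x2_binned"
```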

Systems and Methods for Producing a Light Field from a Depth Map

A system includes an electronic display, a computer processor, one or more memory units, and a module stored in the one or more memory units. The module is configured to access a source image stored in the one or more memory units and determine depth data for each pixel of a plurality of pixels of the source image. The module is further configured to map, using the plurality of pixels and the determined depth data for each of the plurality of pixels, the source image to a four-dimensional light field. The module is further configured to send instructions to the electronic display to display the mapped four-dimensional light field.
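
A minimal sketch of the core mapping step: expanding one image plus per-pixel depth into a grid of virtual views that together approximate a four-dimensional (u, v, x, y) light field. The nearest-neighbor forward warp and the disparity scaling are simplifying assumptions; a production renderer would also handle occlusions and fill holes.

```python
import numpy as np

def image_to_light_field(image, depth, grid=5, max_disparity=4.0):
    """Map an (H, W, 3) image and an (H, W) depth map to a grid x grid array
    of virtual views: each pixel is shifted in proportion to its normalized
    disparity (inverse depth) and the view's offset from the central view."""
    h, w, _ = image.shape
    inv_depth = 1.0 / np.maximum(depth, 1e-6)
    disparity = inv_depth / inv_depth.max()           # normalized to [0, 1]
    views = np.zeros((grid, grid, h, w, 3), dtype=image.dtype)
    ys, xs = np.mgrid[0:h, 0:w]
    for u in range(grid):
        for v in range(grid):
            du, dv = u - grid // 2, v - grid // 2     # offset from central view
            tx = np.clip((xs + dv * max_disparity * disparity).astype(int), 0, w - 1)
            ty = np.clip((ys + du * max_disparity * disparity).astype(int), 0, h - 1)
            views[u, v, ty, tx] = image[ys, xs]       # nearest-neighbor forward warp
    return views
```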

Systems and methods for virtual light field expansion with electro-optical tessellation
11196976 · 2021-12-07

Some implementations of the disclosure are directed to tessellating a light field into a size or depth that is larger or further extended than the pupil size of an imaging system or display system. In some implementations, a display system comprises: a display configured to emit light corresponding to an image; a first optical component positioned in front of the display, the first optical component configured to pass the light to an orthogonal field evolving cavity (OFEC) at a plurality of different angles; the OFEC, wherein the OFEC comprises a plurality of reflectors that are configured to reflect the light passed at the plurality of different angles to tessellate the size of the image to form a tessellated image; and a second optical component optically coupled to the OFEC, the second optical component configured to relay the tessellated image through an exit pupil of the display system.
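
The optical cavity itself cannot be reduced to software, but the geometric effect it exploits — each mirror bounce producing a flipped virtual copy that tiles the image into a larger one — can be mimicked on pixels. The sketch below is only a geometric illustration of that tessellation idea, not a model of the OFEC design.

```python
import numpy as np

def tessellate_by_reflection(image, tiles_x=3, tiles_y=3):
    """Illustrate how reflections tessellate a source image into a larger
    virtual image: each odd mirror bounce yields a flipped copy, so the tiled
    result alternates between the image and its mirror images."""
    rows = []
    for j in range(tiles_y):
        row = []
        for i in range(tiles_x):
            tile = image
            if i % 2:
                tile = tile[:, ::-1]    # odd horizontal bounce -> left-right flip
            if j % 2:
                tile = tile[::-1, :]    # odd vertical bounce -> top-bottom flip
            row.append(tile)
        rows.append(np.hstack(row))
    return np.vstack(rows)
```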

Methods and Apparatus for Supporting Content Generation, Transmission and/or Playback
20220191452 · 2022-06-16

Methods and apparatus for supporting the capture of images of surfaces of an environment visible from a default viewing position and capturing images of surfaces not visible from the default viewing position, e.g., occluded surfaces, are described. Occluded and non-occluded image portions are packed into one or more frames and communicated to a playback device for use as textures which can be applied to a model of the environment where the images were captured. An environmental model includes a model of surfaces which are occluded from view from a default viewing position but which may be viewed if the user shifts the user's viewing location. Occluded image content can be incorporated directly into a frame that also includes non-occluded image data or sent in frames of a separate, e.g., auxiliary, content stream that is multiplexed with the main content stream which communicates image data corresponding to non-occluded environmental portions.
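
One way to picture the frame-packing the abstract mentions is a simple atlas: the non-occluded camera image stays intact and the occluded-surface patches are packed into a strip appended below it, with their placements recorded so a playback device could recover them as textures. This is a sketch under simplifying assumptions (row-major packing, patches that fit inside the strip), not the patented packing scheme.

```python
import numpy as np

def pack_frame(main_image, occluded_patches, atlas_height=256):
    """Pack non-occluded content (the main camera image) and occluded-surface
    patches into a single frame: the main image on top, a row-major atlas of
    occluded patches in a strip appended below it."""
    h, w, c = main_image.shape
    atlas = np.zeros((atlas_height, w, c), dtype=main_image.dtype)
    offsets = []                        # (x, y, patch_w, patch_h) per patch, for the receiver
    x = y = row_h = 0
    for patch in occluded_patches:
        ph, pw = patch.shape[:2]
        if x + pw > w:                  # start a new atlas row
            x, y, row_h = 0, y + row_h, 0
        atlas[y:y + ph, x:x + pw] = patch
        offsets.append((x, y, pw, ph))
        x, row_h = x + pw, max(row_h, ph)
    return np.vstack([main_image, atlas]), offsets
```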

SYSTEM AND METHOD FOR MULTIPLEXED RENDERING OF LIGHT FIELDS
20220174332 · 2022-06-02

An example method in accordance with some embodiments may include: receiving a media manifest file identifying a plurality of representations of a multi-view video, at least a first representation of the plurality of representations comprising a first sub-sampling of views and at least a second representation of the plurality of representations comprising a second sub-sampling of views different from the first sub-sampling of views; selecting a selected representation from the plurality of representations; retrieving the selected representation; and rendering the selected representation.
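
A minimal sketch of the manifest-driven selection step: each representation advertises which views it sub-samples and at what bitrate, and the client picks the richest representation that fits its bandwidth and covers the view being watched. The data structure and the selection rule here are illustrative assumptions, not the claimed method.

```python
from dataclasses import dataclass

@dataclass
class Representation:
    rep_id: str
    views: list           # which camera views this representation sub-samples
    bandwidth_bps: int     # advertised bitrate

def select_representation(reps, viewer_view, available_bps):
    """Pick the highest-bitrate representation that both fits the available
    bandwidth and contains the view the user is currently watching."""
    candidates = [r for r in reps
                  if r.bandwidth_bps <= available_bps and viewer_view in r.views]
    if not candidates:                          # fall back to the cheapest representation
        return min(reps, key=lambda r: r.bandwidth_bps)
    return max(candidates, key=lambda r: r.bandwidth_bps)

# Example: a dense and a sparse sub-sampling of a 9-view capture.
reps = [Representation("sparse", [0, 4, 8], 4_000_000),
        Representation("dense", list(range(9)), 20_000_000)]
print(select_representation(reps, viewer_view=4, available_bps=8_000_000).rep_id)
```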

THREE-DIMENSIONAL LIGHT FIELD TECHNOLOGY-BASED OPTICAL UNMANNED AERIAL VEHICLE MONITORING SYSTEM
20220172380 · 2022-06-02

Disclosed is a light field technology-based unmanned aerial vehicle monitoring system. The unmanned aerial vehicle monitoring system comprises: a first camera, configured to continuously obtain image information in a monitored area; a second camera, the second camera being a light field camera including a compound eye lens, and being configured to obtain, when it is determined that the obtained image information is of an unmanned aerial vehicle, light field information of the unmanned aerial vehicle; a vertical rotating platform and a horizontal rotating platform arranged perpendicular to each other, wherein the first camera and the second camera can rotate synchronously under the control of the vertical rotating platform and the horizontal rotating platform; and a computer processor, configured to calculate depth information of the unmanned aerial vehicle by means of the obtained light field information so as to obtain the position of the unmanned aerial vehicle. The three-dimensional light field technology-based optical unmanned aerial vehicle monitoring system provided in the present invention can isolate vibration during monitoring, thereby improving the efficiency and accuracy of unmanned aerial vehicle monitoring and detection.
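
The depth-to-position step can be sketched as follows, assuming the light field camera yields a disparity between sub-aperture images, a pinhole model with the pixel coordinates measured from the principal point, and known pan/tilt angles from the two rotating platforms; the symbols and rotation convention are illustrative rather than taken from the disclosure.

```python
import numpy as np

def uav_position(pixel_xy, disparity_px, focal_px, sub_aperture_baseline_m,
                 pan_rad, tilt_rad):
    """Estimate the UAV's 3-D position: depth from sub-aperture disparity of
    the light field camera (Z = f * B / d), back-projection through the
    pinhole model, then rotation by the horizontal (pan) and vertical (tilt)
    platform angles into the monitoring system's fixed frame."""
    z = focal_px * sub_aperture_baseline_m / max(disparity_px, 1e-6)
    x_cam = np.array([pixel_xy[0] * z / focal_px,
                      pixel_xy[1] * z / focal_px,
                      z])
    c_p, s_p = np.cos(pan_rad), np.sin(pan_rad)
    c_t, s_t = np.cos(tilt_rad), np.sin(tilt_rad)
    r_pan = np.array([[c_p, 0, s_p], [0, 1, 0], [-s_p, 0, c_p]])
    r_tilt = np.array([[1, 0, 0], [0, c_t, -s_t], [0, s_t, c_t]])
    return r_pan @ r_tilt @ x_cam
```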

CAPTURING AND ALIGNING PANORAMIC IMAGE AND DEPTH DATA

This application generally relates to capturing and aligning panoramic image and depth data. In one embodiment, a device is provided that comprises a housing and a plurality of cameras configured to capture two-dimensional images, wherein the cameras are arranged at different positions on the housing and have different azimuth orientations relative to a center point such that the cameras have a collective field-of-view spanning up to 360° horizontally. The device further comprises a plurality of depth detection components configured to capture depth data, wherein the depth detection components are arranged at different positions on the housing and have different azimuth orientations relative to the center point such that the depth detection components have the collective field-of-view spanning up to 360° horizontally.
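
The geometry behind cameras at "different azimuth orientations" with "a collective field-of-view spanning up to 360° horizontally" is easy to check numerically. The sketch below assumes evenly spaced azimuths and identical per-camera fields of view, which the application does not require.

```python
def collective_coverage(num_cameras, camera_fov_deg):
    """Place cameras at evenly spaced azimuths around a center point and
    report whether their fields of view collectively span 360 degrees."""
    azimuths = [i * 360.0 / num_cameras for i in range(num_cameras)]
    covered = [False] * 360
    for az in azimuths:
        for d in range(int(camera_fov_deg)):
            covered[int(az - camera_fov_deg / 2 + d) % 360] = True
    return azimuths, all(covered)

# Example: four cameras with 100-degree horizontal FOV each cover the full circle.
azimuths, full = collective_coverage(num_cameras=4, camera_fov_deg=100)
print(azimuths, "full 360° coverage:", full)
```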