H04N2213/005

METHOD FOR TRANSMITTING AND RECEIVING 360-DEGREE VIDEO INCLUDING CAMERA LENS INFORMATION, AND DEVICE THEREFOR
20210195163 · 2021-06-24

A method for processing 360-degree image data by a device for receiving a 360-degree video according to the present invention comprises the steps of: receiving 360-degree image data; obtaining information relating to an encoded picture and metadata from the 360-degree image data, wherein the metadata includes camera lens information; decoding a picture including a target circular area on the basis of the information relating to the encoded picture; and rendering the target circular area by processing same on the basis of the camera lens information.

NEAR-EYE DISPLAY APPARATUS AND METHOD OF DISPLAYING THREE-DIMENSIONAL IMAGES

A near-eye display apparatus for displaying a three-dimensional image to a user. The apparatus includes an image projecting means to project pairs of images associated with different cross-sectional planes of the three-dimensional image; at least one optical display arrangement including a plurality of optical elements, wherein each of the plurality of optical elements is operable to be switched between a first optical state and a second optical state; a control arrangement that is operable to control the at least one optical display arrangement to separately switch each optical element from the first optical state to the second optical state, and the image projecting means to project a separate pair of images on each optical element in the second optical state; and at least one optical device that allows display of the three-dimensional image to each eye of the user.

System and process for detecting, tracking and counting human objects of interest

A system is disclosed that includes: at least one image capturing device positioned at an entrance to obtain images; a reader device; and a processor for extracting objects of interest from the images and generating tracks for each object of interest, for matching objects of interest with objects associated with RFID tags, and for counting the number of objects of interest associated with, and not associated with, particular RFID tags.
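The counting step in the abstract above can be sketched in a few lines. This is a hedged illustration only: the matching criterion (a shared timestamp window between a track's entry time and an RFID read) and all names are assumptions, not the patented matching method.

```python
# Hypothetical sketch of counting tracked objects by RFID association.
# tracks: list of (track_id, entry_time) from the image processor.
# tag_reads: list of (tag_id, read_time) from the reader device.
def count_by_association(tracks, tag_reads, window=2):
    associated = 0
    for _, t_entry in tracks:
        # Assumed criterion: a track is "associated" with a tag if any
        # tag read falls within `window` time units of the entry time.
        if any(abs(t_entry - t_read) <= window for _, t_read in tag_reads):
            associated += 1
    # Return counts of objects associated / not associated with tags.
    return associated, len(tracks) - associated

tracks = [("p1", 10), ("p2", 11), ("p3", 30)]
reads = [("tag42", 9)]
assoc, unassoc = count_by_association(tracks, reads)
# p1 and p2 fall inside the read window; p3 does not.
```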

MULTI-PERSPECTIVE DISPLAY DRIVER
20210014474 · 2021-01-14

Described examples include an integrated circuit having depth fusion engine circuitry configured to receive stereoscopic image data and, in response to the received stereoscopic image data, generate at least: first and second focal perspective images for viewing by a first eye at multiple focal distances; and third and fourth focal perspective images for viewing by a second eye at multiple focal distances. The integrated circuit further includes display driver circuitry coupled to the depth fusion engine circuitry and configured to drive a display device for displaying at least the first, second, third and fourth focal perspective images.

IMAGE PROCESSING DEVICE, CONTENT PROCESSING DEVICE, CONTENT PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD

In a depth image compressing section of an image processing device, a depth image operation section computes a depth image from the photographed stereo images. A difference image obtaining section generates a difference image between an actually measured depth image and the computed depth image. In a depth image decompressing section of a content processing device, a depth image operation section computes a depth image from the transmitted stereo images. A difference image adding section restores the depth image by adding the computed depth image to the transmitted difference image.
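The round trip described above (transmit stereo images plus a residual, recompute depth on both sides) can be made concrete with a toy sketch. The stereo-matching step here is a deliberately trivial stand-in, and all function names are hypothetical; the point is only that the residual plus the recomputed depth restores the measured depth exactly.

```python
def compute_depth_from_stereo(left, right):
    # Stand-in for the stereo depth operation: a toy per-pixel disparity
    # proxy (absolute difference), used identically on both sides.
    return [abs(l - r) for l, r in zip(left, right)]

def compress(measured_depth, left, right):
    # Sender side: transmit only the difference between the actually
    # measured depth image and the depth computed from the stereo pair.
    computed = compute_depth_from_stereo(left, right)
    return [m - c for m, c in zip(measured_depth, computed)]

def decompress(difference, left, right):
    # Receiver side: recompute the same depth image from the transmitted
    # stereo pair, then add the difference to restore the measured depth.
    computed = compute_depth_from_stereo(left, right)
    return [c + d for c, d in zip(computed, difference)]

left = [10, 20, 30]
right = [8, 15, 30]
measured = [3, 6, 1]

diff = compress(measured, left, right)
restored = decompress(diff, left, right)
# restored equals the original measured depth image
```

Because both sides run the same depth operation on the same stereo images, only the (typically small) residual needs to be transmitted alongside them.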

Multi-frame range gating for lighting-invariant depth maps for in-motion applications and attenuating environments

Range gated imaging systems utilize short light pulses and synchronized detector cycles to image objects that are contained within a distance range. Single-image range gating allows depth estimation for static scenes, but this technique is not well-suited for in-motion applications or attenuating environments. In various embodiments, a multi-frame range gating system uses a series of range gate frames that are produced within a brief timing window, whereby each range-gate frame illuminates a different lighting-invariant depth range within the field of view. Multi-frame range gate cycles of as little as three frames are sufficient to produce precise lighting-invariant depth information for each pixel in a sensor for in-motion applications. Lighting-invariant depth information is produced with multi-frame range gate cycles of as little as four frames for in-motion applications for attenuating environments like fog, rain, dust, smoke, smog, and underwater.

Method and circuit of assigning selected depth values to RGB subpixels and recovering selected depth values from RGB subpixels for colored depth frame packing and depacking
10869058 · 2020-12-15

A method comprises: obtaining two depth values from each of a first pixel depth value and a fourth pixel depth value, and obtaining one depth value from each of a second pixel depth value and a third pixel depth value; and assigning the two depth values obtained from the first pixel depth value to the R-subpixel and B-subpixel values of the first pixel, assigning the depth value obtained from the second pixel depth value to the R-subpixel, G-subpixel and B-subpixel values of the second pixel, assigning the depth value obtained from the third pixel depth value to the R-subpixel, G-subpixel and B-subpixel values of the third pixel, and assigning the two depth values obtained from the fourth pixel depth value to the G-subpixel value of the first pixel and the R-subpixel, G-subpixel and B-subpixel values of the fourth pixel.
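The subpixel assignment in the claim above can be sketched as a pack/unpack pair. The bit-split is an assumption made for illustration: the first and fourth depth values are treated as 16-bit quantities split into two bytes each, while the second and third are treated as 8-bit values replicated across subpixels; the patent does not fix these widths here.

```python
def pack(d1, d2, d3, d4):
    # Each pixel is an (R, G, B) tuple of 8-bit subpixel values.
    p1 = (d1 >> 8, d4 >> 8, d1 & 0xFF)  # d1 -> p1.R and p1.B; one of d4's values -> p1.G
    p2 = (d2, d2, d2)                   # d2 -> p2.R, p2.G, p2.B
    p3 = (d3, d3, d3)                   # d3 -> p3.R, p3.G, p3.B
    p4 = (d4 & 0xFF,) * 3               # d4's other value -> p4.R, p4.G, p4.B
    return p1, p2, p3, p4

def unpack(p1, p2, p3, p4):
    d1 = (p1[0] << 8) | p1[2]           # recover d1 from p1.R and p1.B
    d4 = (p1[1] << 8) | p4[0]           # recover d4 from p1.G and p4.R
    return d1, p2[0], p3[0], d4

pixels = pack(0x1234, 0x56, 0x78, 0x9ABC)
```

Replicating the single-byte values across all three subpixels, and spreading the two-byte values over spare subpixels of the first pixel, makes the packed frame survive chroma-agnostic handling while remaining exactly recoverable.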

INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
20200359007 · 2020-11-12

[Problem to be Solved] To provide an information processing apparatus and an information processing method. [Solution] An information processing apparatus including: a receiving unit that receives a request including load information regarding a load; and a sending unit that sends a data set in accordance with the request. The data set includes three-dimensional shape data, and left-eye texture data and right-eye texture data. The three-dimensional shape data has a vertex count corresponding to the load information. The left-eye texture data and the right-eye texture data correspond to the three-dimensional shape data.
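The request/response idea above can be sketched as a simple level-of-detail lookup: the client reports a load figure and the server answers with the data set whose vertex count fits it. The thresholds, vertex counts, and data-set layout below are illustrative assumptions, not values from the specification.

```python
# Hypothetical LOD table: lighter load -> denser mesh.
LODS = [
    {"vertices": 10000, "max_load": 0.3},  # high detail for light load
    {"vertices": 2500,  "max_load": 0.7},
    {"vertices": 500,   "max_load": 1.0},  # coarse mesh for heavy load
]

def select_data_set(load):
    # Pick the first LOD whose load ceiling covers the reported load.
    for lod in LODS:
        if load <= lod["max_load"]:
            return {
                "shape_vertex_count": lod["vertices"],
                # Left- and right-eye textures paired with this mesh LOD.
                "left_eye_texture": "tex_L_%d" % lod["vertices"],
                "right_eye_texture": "tex_R_%d" % lod["vertices"],
            }
    return None
```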

Privacy image generation

A privacy image generation system may use a light field camera, which includes an array of cameras or one or more RGBZ cameras, to capture images and display them according to a selected privacy mode. The privacy mode may include a blur-background mode that can be automatically selected based on the meeting type, participants, location, and device type. A region of interest and/or one or more objects of interest (e.g., one or more persons in a foreground) are determined, and the privacy image generation system is configured to clearly show the region/object of interest and to obscure or replace the background by combining multiple images. The displayed image shows the region/object(s) of interest clearly (e.g., in focus), while any objects in the background of the combined image appear with a limited depth of field (e.g., blurry/not in focus) and/or blurred due to the combination of the multiple images.

Spatially adaptive video compression for multiple streams of color and depth
10757410 · 2020-08-25

Techniques for compressing color video images include computing a delta quantization parameter (QP) for the color images based on the similarity between the depth-image surface normal and the view direction associated with a color image. For example, upon receiving a frame having multiple color and depth images, a computer finds the depth image closest in orientation to a color image. For each pixel of that depth image, the computer generates a blend weight based on the orientation of the normal at a position in the depth image and the viewpoints from which the plurality of color images were captured. The computer then generates a QP value based on the blend weight and determines the macroblock of the color image corresponding to that position, the macroblock being associated with the QP value for the pixel.
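A minimal sketch of the adaptive-QP idea above: surfaces facing the camera (normal aligned with the view direction) receive a lower QP, i.e. finer quantization, while oblique surfaces receive a higher one. The cosine-similarity weight and the linear weight-to-QP mapping below are assumptions for illustration, not the patented formula.

```python
import math

def blend_weight(normal, view_dir):
    # Cosine similarity between the depth-pixel surface normal and the
    # viewing direction of the color camera, clamped to [0, 1].
    dot = sum(n * v for n, v in zip(normal, view_dir))
    n_len = math.sqrt(sum(n * n for n in normal))
    v_len = math.sqrt(sum(v * v for v in view_dir))
    return max(0.0, dot / (n_len * v_len))

def macroblock_qp(weight, base_qp=32, max_delta=10):
    # Assumed mapping: high weight (front-facing surface) -> larger
    # negative delta -> lower QP -> finer quantization for that block.
    return base_qp - round(weight * max_delta)

# A front-facing pixel gets a lower QP than a grazing-angle pixel.
front = macroblock_qp(blend_weight((0, 0, 1), (0, 0, 1)))    # weight = 1.0
grazing = macroblock_qp(blend_weight((1, 0, 0), (0, 0, 1)))  # weight = 0.0
```

Spending bits where the depth surface faces the viewer concentrates quality on regions that dominate the reconstructed view, which is the rationale the abstract describes.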