H04N13/268

VIRTUAL CONTENT EXPERIENCE SYSTEM AND CONTROL METHOD FOR SAME
20230052104 · 2023-02-16

Disclosed is a virtual content experience system. A central server driving the system includes: a content conversion unit that converts two-dimensional image content, received through a data transmission and reception unit or input by a user, into a stereoscopic image; a motion information generation unit that recognizes text information extracted from the two-dimensional image content and converts it into motion information; a content playback control unit that transmits the motion information to a motion information management unit provided in a virtual reality experience chair, or receives start and end information about the motion information from the management unit to generate and change control information determining whether new two-dimensional image content is provided; and a display unit that displays the converted content together with the motion information or control information.
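As a sketch of the text-to-motion step, a minimal implementation might map recognized keywords to chair motion commands. The keyword table and command fields below are hypothetical, since the abstract does not specify any format:

```python
# Hypothetical keyword-to-motion table; the abstract only says that
# text extracted from the 2D content is recognized and converted into
# motion information for the experience chair.
MOTION_TABLE = {
    "explosion": {"vibrate": 1.0, "tilt": 0.0},
    "takeoff":   {"vibrate": 0.3, "tilt": 15.0},
}

def text_to_motion(text):
    """Convert extracted text into a sequence of motion commands."""
    return [MOTION_TABLE[w] for w in text.lower().split()
            if w in MOTION_TABLE]
```

In a real system the recognition step would be far richer (OCR plus scene understanding); the table lookup stands in for whatever model performs that conversion.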

OPHTHALMIC INFORMATION PROCESSING APPARATUS, OPHTHALMIC APPARATUS, OPHTHALMIC INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
20230050680 · 2023-02-16

An ophthalmic information processing apparatus includes a specifying unit and an image deforming unit. The specifying unit is configured to specify a three-dimensional position of each pixel in a two-dimensional front image depicting a predetermined site of a subject's eye, based on OCT data obtained by performing optical coherence tomography on that site. The image deforming unit is configured to deform the two-dimensional front image, by changing the position of at least one pixel based on its three-dimensional position, to generate a three-dimensional front image.
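A minimal sketch of the deformation step, assuming the OCT-derived positions arrive as a per-pixel (x, y, z) array — a layout the abstract does not fix:

```python
import numpy as np

def deform_front_image(front_image, positions_3d):
    """Shift each pixel of a 2D front image according to its 3D position.

    front_image: (H, W) intensity array.
    positions_3d: (H, W, 3) array giving the OCT-derived (x, y, z)
    position of every pixel (hypothetical layout).
    """
    h, w = front_image.shape
    deformed = np.zeros_like(front_image)
    # Project each 3D position back onto the image plane; here we
    # simply take the rounded (x, y) components as the new pixel site.
    xs = np.clip(np.round(positions_3d[..., 0]).astype(int), 0, w - 1)
    ys = np.clip(np.round(positions_3d[..., 1]).astype(int), 0, h - 1)
    deformed[ys, xs] = front_image
    return deformed
```

A production implementation would interpolate rather than round, and would resolve collisions when several pixels map to the same site.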

AUTOMATIC THREE-DIMENSIONAL PRESENTATION FOR HYBRID MEETINGS
20230236543 · 2023-07-27

Systems and methods are directed to automatically generating a three-dimensional (3D) holographic presentation from a two-dimensional (2D) slide presentation. A network system receives an indication to generate the 3D holographic presentation, which causes automatic generation of the 3D holographic presentation by the network system. In response to receiving the indication, the network system accesses the 2D slide presentation from a user device associated with a presenter and accesses, from a mapping database, a plurality of mappings that indicate how to convert elements of each slide of the 2D slide presentation into a 3D format. The network system then transforms elements of each slide from a 2D format into the 3D format based on the plurality of mappings. The 3D holographic presentation is generated from the transformed elements by blending the transformed elements with a background and/or real-world image data captured by an image capture device.
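The mapping-driven conversion could be sketched as a lookup table of per-element converters; the element kinds and 3D fields below are illustrative assumptions, since the abstract leaves the mapping schema open:

```python
# Hypothetical mapping table: each entry says how one kind of 2D slide
# element becomes a 3D element.
MAPPINGS = {
    "text":  lambda e: {**e, "kind": "3d_text",  "depth": 0.0},
    "image": lambda e: {**e, "kind": "3d_panel", "depth": 0.5},
    "chart": lambda e: {**e, "kind": "3d_chart", "depth": 1.0},
}

def transform_slide(slide):
    """Convert every mappable element of one 2D slide into 3D format."""
    return [MAPPINGS[el["kind"]](el) for el in slide["elements"]
            if el["kind"] in MAPPINGS]
```

The blending with background or captured real-world imagery would then operate on the transformed elements.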

Encoding apparatus and encoding method, decoding apparatus and decoding method
11716487 · 2023-08-01

There is provided an encoding apparatus, an encoding method, a decoding apparatus, and a decoding method that make it possible to acquire two-dimensional image data of a viewpoint corresponding to a predetermined display image generation method, together with depth image data, independently of the viewpoint at the time of image capture. A conversion unit generates, from three-dimensional data of an image pickup object, two-dimensional image data of a plurality of viewpoints corresponding to the predetermined display image generation method, and depth image data indicating the position of each pixel in the depthwise direction of the image pickup object. An encoding unit encodes the two-dimensional image data and the depth image data generated by the conversion unit. A transmission unit transmits the encoded data. The present disclosure can be applied, for example, to an encoding apparatus.
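The conversion unit's role — producing a color image plus a depth image for a chosen viewpoint from 3D data — might be sketched as a point-splatting renderer with a z-buffer. All names and the pinhole camera model are assumptions, not the patent's method:

```python
import numpy as np

def render_view(points, colors, K, R, t, h, w):
    """Render one viewpoint: returns a 2D color image and a depth image.

    points: (N, 3) world-space points of the captured object;
    colors: (N, 3) per-point colors; K intrinsics; R, t the viewpoint
    pose (all illustrative).
    """
    cam = points @ R.T + t           # world -> camera coordinates
    z = cam[:, 2]
    px = (cam[:, :2] / z[:, None]) @ K[:2, :2].T + K[:2, 2]
    image = np.zeros((h, w, 3))
    depth = np.full((h, w), np.inf)  # z-buffer: keep the nearest point
    for (x, y), d, c in zip(px.astype(int), z, colors):
        if 0 <= x < w and 0 <= y < h and d < depth[y, x]:
            depth[y, x] = d
            image[y, x] = c
    return image, depth
```

Encoding and transmission would then treat `image` and `depth` as an ordinary 2D video pair.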

Self-supervised training of a depth estimation model using depth hints

A method for training a depth estimation model with depth hints is disclosed. For each image pair: for a first image, a depth prediction is determined by the depth estimation model and a depth hint is obtained; the second image is projected onto the first image once to generate a synthetic frame based on the depth prediction and again to generate a hinted synthetic frame based on the depth hint; a primary loss is calculated with the synthetic frame; a hinted loss is calculated with the hinted synthetic frame; and an overall loss is calculated for the image pair based on a per-pixel determination of whether the primary loss or the hinted loss is smaller, wherein if the hinted loss is smaller than the primary loss, then the overall loss includes the primary loss and a supervised depth loss between depth prediction and depth hint. The depth estimation model is trained by minimizing the overall losses for the image pairs.
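The per-pixel loss selection can be sketched directly; the log-L1 form of the supervised term below is an assumption, as the abstract does not name the loss:

```python
import numpy as np

def overall_loss(primary, hinted, depth_pred, depth_hint):
    """Per-pixel depth-hint loss selection (sketch).

    primary: per-pixel photometric loss from the synthetic frame built
    with the model's depth prediction.
    hinted:  per-pixel photometric loss from the frame built with the
    depth hint.
    Where the hinted loss is smaller, the hint is judged more reliable,
    so a supervised term pulls the prediction toward the hint;
    elsewhere only the primary loss applies.
    """
    hint_better = hinted < primary
    supervised = np.abs(np.log(depth_pred) - np.log(depth_hint))
    per_pixel = primary + np.where(hint_better, supervised, 0.0)
    return per_pixel.mean()
```

Training then minimizes this quantity over all image pairs, with the photometric losses computed by a differentiable warp in practice.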

Methods and systems for reprojection in augmented-reality displays
11704883 · 2023-07-18

Methods and systems are provided for a reprojection engine for augmented-reality devices. The augmented-reality device projects virtual content within a real-world environment. The device tracks a six-degrees-of-freedom headpose, depth information of the virtual content, motion vectors that correspond to movement of the virtual content, and a color buffer for the reprojection engine. The reprojection engine generates a reprojection of the virtual content, defined by an extrapolation of a first frame, using the headpose, the depth information, the motion vectors, and the color buffer. The reprojected virtual content continues to appear as if positioned within the real-world environment regardless of changes in the headpose of the device or motion of the virtual content.
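The per-pixel reprojection step might be sketched as unproject, apply the content's motion, then re-project under the new headpose; the matrix conventions and names below are assumptions:

```python
import numpy as np

def reproject(px, depth, motion, old_pose, new_pose, K):
    """Extrapolate one rendered pixel to a new headpose (illustrative).

    px: (u, v) pixel in the last rendered frame; depth: its depth;
    motion: per-pixel 3D motion vector of the virtual content;
    old_pose/new_pose: 4x4 camera-to-world headpose matrices; K: 3x3
    intrinsics. None of these names come from the patent.
    """
    # Unproject to a 3D point in the old camera, then into the world.
    ray = np.linalg.inv(K) @ np.array([px[0], px[1], 1.0])
    p_world = (old_pose @ np.append(ray * depth, 1.0))[:3]
    p_world = p_world + motion            # content's own movement
    # Re-project into the new headpose's camera.
    p_new = (np.linalg.inv(new_pose) @ np.append(p_world, 1.0))[:3]
    uvw = K @ p_new
    return uvw[:2] / uvw[2]
```

A real engine performs this densely on the GPU, sampling the tracked color buffer at the reprojected coordinates.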

Rendering for multi-focus display systems
11543655 · 2023-01-03

Some implementations provide a multi-focus display system that renders images at multiple focus distances for display in conjunction with appropriately powered lenses. For example, an HMD may include a fast-switching lens element that alternates quickly between two or more focus distances. The displayed images are matched to the alternating focus distances by adjusting a high-frequency part of the images. This can provide a more natural user experience in which near objects require the user's eye to focus on a close focal depth plane and far objects require it to focus on a far one. Moreover, the experience can be provided with little or no loss of brightness and without processor- and resource-intensive computations.
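The high-frequency adjustment could be sketched as splitting each image into low- and high-frequency parts and attenuating the high-frequency part where content lies far from the active focus plane; the Gaussian weighting below is an assumption, not the patent's formula:

```python
import numpy as np

def box_blur(img):
    # 3x3 box blur via padded neighbourhood average (the low-pass part).
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def per_focus_frames(image, depth, focus_planes, sigma=1.0):
    """One frame per focus plane: keep high frequencies only where the
    per-pixel depth is near the active plane (illustrative)."""
    low = box_blur(image)
    high = image - low
    frames = []
    for f in focus_planes:
        weight = np.exp(-((depth - f) ** 2) / (2 * sigma ** 2))
        frames.append(low + weight * high)
    return frames
```

The fast-switching lens would then present each frame while set to the corresponding focus distance, so sharp detail appears only at the plane the eye must accommodate to.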
