H04N13/261

3D DISPLAY SYSTEM AND 3D DISPLAY METHOD
20230071576 · 2023-03-09

A 3D display system and a 3D display method are provided. The 3D display system includes a 3D display, a memory and one or more processors. The memory records a plurality of modules, and the one or more processors access and execute the modules recorded in the memory. The modules include a bridge interface module and a 3D display service module. When an application is executed by the processor, the bridge interface module creates a virtual extended screen and moves the application to the virtual extended screen. The bridge interface module obtains a 2D content frame of the application from the virtual extended screen through a screenshot function. The 3D display service module converts the 2D content frame into a 3D format frame by communicating with a third-party software development kit, and provides the 3D format frame to the 3D display for display.
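As a toy illustration of the pipeline this abstract describes (capture a 2D frame from a virtual extended screen, then convert it to a 3D display format), the sketch below packs a captured frame side by side. Every name here is invented for illustration, and a real system would delegate the conversion to the third-party SDK rather than duplicate the view:

```python
# Hypothetical sketch of the bridge pipeline: capture a frame from a
# virtual extended screen and pack it into a side-by-side 3D format frame.
# All names are illustrative, not taken from the patent.

def capture_frame(virtual_screen):
    """Stand-in for the screenshot function: return the screen's 2D frame."""
    return virtual_screen["frame"]

def to_side_by_side(frame):
    """Pack a 2D frame into a half-width side-by-side stereo frame.

    Both eyes receive the same view here; a real SDK would synthesize
    a distinct left/right pair instead.
    """
    packed = []
    for row in frame:
        half = row[::2]             # horizontally downsample to half width
        packed.append(half + half)  # left half | right half
    return packed

screen = {"frame": [[1, 2, 3, 4], [5, 6, 7, 8]]}
sbs = to_side_by_side(capture_frame(screen))
```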

Multifocal plane based method to produce stereoscopic viewpoints in a DIBR system (MFP-DIBR)
11477434 · 2022-10-18

Some embodiments of an example method may include receiving an input image with depth information; mapping the input image to a set of focal plane images; orienting the set of focal plane images using head orientation information to provide stereo disparity between left and right eyes; and displaying the oriented set of focal plane images. Some embodiments of another example method may include: receiving a description of three-dimensional (3D) content; receiving, from a tracker, information indicating motion of a viewer relative to a real-world environment; responsive to receiving the information indicating motion of the viewer, synthesizing motion parallax by altering multi-focal planes of the 3D content; and rendering an image to a multi-focal plane display using the altered multi-focal planes.
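The first example method above can be caricatured in two steps: slice the depth-annotated image into focal plane images, then shift each plane horizontally to create disparity between the eyes. A minimal sketch under that reading, with every name invented here:

```python
# Illustrative MFP sketch; names and the hard nearest-plane assignment
# are assumptions, not the patented decomposition.

def to_focal_planes(image, depth, plane_depths):
    """Map an image with per-pixel depth to a set of focal plane images
    by assigning each pixel to its nearest focal plane."""
    planes = [[[0] * len(image[0]) for _ in image] for _ in plane_depths]
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            k = min(range(len(plane_depths)),
                    key=lambda i: abs(plane_depths[i] - depth[y][x]))
            planes[k][y][x] = v
    return planes

def shift_plane(plane, dx):
    """Shift one focal plane horizontally by dx pixels; applying opposite
    per-plane shifts to the two eyes yields stereo disparity."""
    w = len(plane[0])
    return [[row[x - dx] if 0 <= x - dx < w else 0 for x in range(w)]
            for row in plane]
```

A real MFP decomposition would blend each pixel across its two neighboring planes rather than hard-assigning it, which avoids banding at plane boundaries.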

METHOD AND ELECTRONIC DEVICE FOR DETERMINING BOUNDARY OF REGION OF INTEREST

An image processing apparatus includes at least one memory configured to store instructions, and at least one processor configured to execute the instructions to: obtain a first image captured by a first image sensor; obtain a second image captured by a second image sensor located in a different position from the first image sensor; determine a rough region of interest (RROI) in the first image; determine a geometric transformation that maps the position of the RROI of the second image, corresponding to the RROI of the first image, to the position of the RROI of the first image; and determine, based on the geometric transformation, a boundary of a region of interest (ROI) in the first image corresponding to the RROI of the first image.
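The geometric transformation in this abstract could be any mapping between the two RROI positions; the sketch below assumes the simplest possible case, a pure translation fitted from corresponding points and then applied to a boundary polygon. All names are hypothetical:

```python
# Minimal geometric-transformation sketch: fit a translation between
# corresponding RROI points and map a boundary with it. Illustrative only.

def fit_translation(src_pts, dst_pts):
    """Fit a pure-translation transform from corresponding (x, y) points,
    averaging the per-point offsets."""
    n = len(src_pts)
    tx = sum(d[0] - s[0] for s, d in zip(src_pts, dst_pts)) / n
    ty = sum(d[1] - s[1] for s, d in zip(src_pts, dst_pts)) / n
    return tx, ty

def map_boundary(boundary, t):
    """Apply the fitted translation to each boundary point."""
    tx, ty = t
    return [(x + tx, y + ty) for x, y in boundary]
```

In practice an affine or projective model (e.g. fitted with least squares over many correspondences) would replace the translation, but the flow — fit from the RROI pair, then transfer the boundary — is the same.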

THREE-DIMENSIONALIZATION METHOD AND APPARATUS FOR TWO-DIMENSIONAL IMAGE, DEVICE AND COMPUTER-READABLE STORAGE MEDIUM
20230113902 · 2023-04-13

This application provides a three-dimensionalization method and apparatus for a two-dimensional image, an electronic device, and a computer-readable storage medium. The method includes performing depth perception processing on a two-dimensional image, to obtain a depth value of each pixel in the two-dimensional image; performing migration processing on the two-dimensional image from multiple perspectives, to obtain a migration result of the two-dimensional image corresponding to each perspective; determining a color value of each pixel in a migration image corresponding to each perspective, based on the depth value of each pixel in the two-dimensional image and the migration result of the two-dimensional image corresponding to each perspective; generating, based on the color value of each pixel in the migration image of each perspective, the migration image corresponding to the perspective; and encapsulating the migration images of the multiple perspectives in order, to obtain a three-dimensional video.
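The migration step can be pictured as a depth-dependent horizontal shift per perspective, with the per-view migration images then kept in order as the frames of the output video. A minimal sketch with invented names and a deliberately naive occlusion/hole policy:

```python
# Toy 2D-to-3D migration sketch; all names, the first-writer-wins
# occlusion rule, and the zero hole fill are assumptions.

def migrate_view(image, depth, shift_scale):
    """Shift each pixel horizontally in proportion to its depth value
    (a simple depth-image-based migration for one perspective)."""
    h, w = len(image), len(image[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nx = x + round(shift_scale * depth[y][x])
            if 0 <= nx < w and out[y][nx] is None:  # keep first writer
                out[y][nx] = image[y][x]
    # fill disocclusion holes with a neutral value
    return [[v if v is not None else 0 for v in row] for row in out]

def encapsulate(image, depth, shift_scales):
    """Generate one migration image per perspective and keep them in
    order, forming the frame sequence of the output 3D video."""
    return [migrate_view(image, depth, s) for s in shift_scales]
```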

IMAGE PROCESSING APPARATUS AND METHOD, AND PROGRAM
20230156170 · 2023-05-18

The present technology relates to an image processing apparatus and method and a program that enable natural representation of light in an image in accordance with a viewpoint. The image processing apparatus calculates information indicating a change in a light source region between an input image and a viewpoint-converted image that is obtained by performing viewpoint conversion on the input image on the basis of a specified viewpoint, and causes a change in representation of light in the viewpoint-converted image on the basis of the calculated information indicating a change in the light source region. The present technology can be applied to an image display system that generates a pseudo stereoscopic image with motion parallax from one image.
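One crude way to read the described pipeline: threshold both images to find the light source region, compare the region's size before and after viewpoint conversion, and rescale the highlights of the converted image accordingly. A hedged sketch in which the threshold, the area-ratio measure, and all names are assumptions of this example, not the patent:

```python
# Illustrative light-adaptation sketch; threshold and ratio rule invented.

THRESH = 200  # brightness above which a pixel counts as light source

def light_region_area(image):
    """Approximate the light source region as the count of bright pixels."""
    return sum(1 for row in image for v in row if v >= THRESH)

def adapt_light(converted, ratio):
    """Scale highlight pixels by the measured region-change ratio,
    clamping to the 8-bit range."""
    return [[min(255, round(v * ratio)) if v >= THRESH else v for v in row]
            for row in converted]

input_img = [[250, 10], [210, 40]]
converted = [[220, 10], [0, 40]]   # viewpoint conversion shrank the region
ratio = light_region_area(input_img) / max(light_region_area(converted), 1)
relit = adapt_light(converted, ratio)
```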

Systems and methods for self-supervised depth estimation according to an arbitrary camera

Systems, methods, and other embodiments described herein relate to improving depth estimates for monocular images using a neural camera model that is independent of a camera type. In one embodiment, a method includes receiving a monocular image from a pair of training images derived from a monocular video. The method includes generating, using a ray surface network, a ray surface that approximates an image character of the monocular image as produced by a camera having the camera type. The method includes creating a synthesized image according to at least the ray surface and a depth map associated with the monocular image.
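The role of the ray surface can be pictured as a per-pixel ray direction that, scaled by the predicted depth, lifts pixels to 3D points that can then be moved toward the other image of the training pair for view synthesis. A toy sketch under that reading, with all names invented here (the actual ray surface is a learned network output, not a given grid):

```python
# Toy ray-surface sketch; names and the pure-translation camera motion
# are assumptions made for illustration.

def lift_to_3d(rays, depth):
    """Lift pixels to 3D points: each per-pixel ray direction scaled by
    that pixel's depth (the role a learned ray surface plays for an
    arbitrary camera, in place of a fixed pinhole model)."""
    return [[(r[0] * d, r[1] * d, r[2] * d)
             for r, d in zip(ray_row, depth_row)]
            for ray_row, depth_row in zip(rays, depth)]

def translate_points(points, t):
    """Move the lifted points by a rigid translation toward the other
    view of the training pair (rotation omitted for brevity)."""
    tx, ty, tz = t
    return [[(x + tx, y + ty, z + tz) for (x, y, z) in row]
            for row in points]
```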