H04N13/139

WEARABLE ELECTRONIC DEVICE AND METHOD OF OUTPUTTING THREE-DIMENSIONAL IMAGE
20220360764 · 2022-11-10

A wearable electronic device includes a left-eye display configured to output light of a first color corresponding to a 3D left-eye image, a right-eye display configured to output light of a second color corresponding to a 3D right-eye image, a left-eye optical waveguide configured to adjust a path of the light of the first color and output the light of the first color, a right-eye optical waveguide configured to adjust a path of the light of the second color and output the light of the second color, a left-eye display control circuit configured to supply a driving power and a control signal to the left-eye display, a right-eye display control circuit configured to supply a driving power and a control signal to the right-eye display, a communication module configured to communicate with a mobile electronic device, and a second control circuit configured to supply a driving power and a control signal to the communication module.

Overlay processing method in 360 video system, and device thereof

A 360 image data processing method performed by a 360 video receiving device, according to the present invention, comprises the steps of: receiving 360 image data; acquiring information and metadata on an encoded picture from the 360 image data; decoding the picture on the basis of the information on the encoded picture; and rendering the decoded picture and an overlay on the basis of the metadata, wherein the metadata includes overlay-related metadata, the overlay is rendered on the basis of the overlay-related metadata, and the overlay-related metadata includes information on a region of the overlay.
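As a sketch of the rendering step, the overlay-related metadata might carry a pixel region at which the overlay is composited onto the decoded picture. The field names below are illustrative only, not taken from any standard:

```python
def render_overlay(picture, overlay, metadata):
    """Composite `overlay` onto `picture` at the region given in metadata.

    `picture` and `overlay` are 2D lists of pixel values; `metadata`
    holds a hypothetical overlay region as a top-left corner plus size.
    """
    x, y = metadata["region_x"], metadata["region_y"]
    h, w = metadata["region_height"], metadata["region_width"]
    for row in range(h):
        for col in range(w):
            picture[y + row][x + col] = overlay[row][col]
    return picture

# Decoded 4x4 picture of zeros, 2x2 overlay of ones placed at (1, 1).
frame = [[0] * 4 for _ in range(4)]
logo = [[1, 1], [1, 1]]
meta = {"region_x": 1, "region_y": 1, "region_width": 2, "region_height": 2}
frame = render_overlay(frame, logo, meta)
```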

Method and apparatus for processing 360-degree image

A communication technique for merging, with an IoT technology, a 5G communication system for supporting a data transmission rate higher than that of a 4G system is provided. The communication technique can be applied to an intelligent service (for example, smart home, smart building, smart city, smart car or connected car, health care, digital education, retail business, and security and safety-related services, and the like) on the basis of a 5G communication technology and an IoT-related technology. A method for processing a 360-degree image is provided. The method includes determining a three-dimensional (3D) model for mapping a 360-degree image; determining a partition size for the 360-degree image; determining a rotational angle for each of the x, y, and z axes of the 360-degree image; determining an interpolation method to be applied when mapping the 360-degree image to a two-dimensional (2D) image; and converting the 360-degree image into the 2D image.
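The per-axis rotation step can be sketched as applying x, y, and z rotation matrices to the viewing direction of each output pixel before sampling the source image. This is a minimal sketch; a real mapper would also apply the partition size and the chosen interpolation method:

```python
import math

def rotate_xyz(v, ax, ay, az):
    """Rotate vector v by angles (radians) about the x, y, then z axes."""
    x, y, z = v
    # Rotation about x.
    y, z = (y * math.cos(ax) - z * math.sin(ax),
            y * math.sin(ax) + z * math.cos(ax))
    # Rotation about y.
    x, z = (x * math.cos(ay) + z * math.sin(ay),
            -x * math.sin(ay) + z * math.cos(ay))
    # Rotation about z.
    x, y = (x * math.cos(az) - y * math.sin(az),
            x * math.sin(az) + y * math.cos(az))
    return (x, y, z)

# A 90-degree rotation about z sends the +x axis to +y.
vx, vy, vz = rotate_xyz((1.0, 0.0, 0.0), 0.0, 0.0, math.pi / 2)
```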

Non-uniform stereo rendering

Examples of the disclosure describe systems and methods for recording augmented reality and mixed reality experiences. In an example method, an image of a real environment is received via a camera of a wearable head device. A pose of the wearable head device is estimated, and a first image of a virtual environment is generated based on the pose. A second image of the virtual environment is generated based on the pose, wherein the second image of the virtual environment comprises a larger field of view than a field of view of the first image of the virtual environment. A combined image is generated based on the second image of the virtual environment and the image of the real environment.
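The final compositing step can be sketched as overlaying the wide-FOV virtual render onto the camera image, with empty virtual pixels letting the real environment show through. This is a simplification of real alpha compositing, and the transparent-value convention below is assumed for illustration:

```python
def combine(camera_image, virtual_image, transparent=0):
    """Overlay the wide-FOV virtual render onto the camera image.

    Pixels equal to `transparent` in the virtual image are treated as
    empty, so the real environment shows through at those positions.
    """
    return [
        [v if v != transparent else c for c, v in zip(crow, vrow)]
        for crow, vrow in zip(camera_image, virtual_image)
    ]

real = [[5, 5], [5, 5]]          # camera image of the real environment
virtual = [[0, 7], [7, 0]]       # virtual render; 0 marks empty pixels
mixed = combine(real, virtual)
```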

CODING SCHEME FOR IMMERSIVE VIDEO WITH ASYMMETRIC DOWN-SAMPLING AND MACHINE LEARNING
20220345756 · 2022-10-27

Methods of encoding and decoding immersive video are provided. In an encoding method, source video data comprising a plurality of source views is encoded into a video bitstream. At least one of the source views is down-sampled prior to encoding. A metadata bitstream associated with the video stream comprises metadata describing a configuration of the down-sampling, to assist a decoder to decode the video bitstream. It is believed that the use of down-sampled views may help to reduce coding artifacts, compared with a patch-based encoding approach. Also provided are an encoder and a decoder for immersive video, and an immersive video bitstream.
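The encoder/decoder hand-off can be sketched as down-sampling one source view and describing the configuration in metadata so the decoder can restore the original resolution. Nearest-neighbour resampling and the metadata field names are illustrative assumptions, not the claimed coding scheme:

```python
def downsample(view, factor):
    """Keep every `factor`-th pixel in both dimensions (nearest neighbour)."""
    return [row[::factor] for row in view[::factor]]

def upsample(view, factor):
    """Invert `downsample` by repeating each pixel `factor` times."""
    return [
        [px for px in row for _ in range(factor)]
        for row in view for _ in range(factor)
    ]

# Encoder: down-sample one source view and describe it in metadata.
source_view = [[r * 4 + c for c in range(4)] for r in range(4)]
metadata = {"view_id": 0, "scale_factor": 2}   # illustrative field names
encoded = downsample(source_view, metadata["scale_factor"])

# Decoder: read the metadata and restore the original resolution.
restored = upsample(encoded, metadata["scale_factor"])
```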

User interface module for converting a standard 2D display device into an interactive 3D display device

A 2D/3D conversion interface component is configured to override the video processing capabilities associated with a conventional 2D display, re-formatting an incoming 3D video stream into a version compatible with a 2D display while preserving the 3D-type of presentation. An incoming “side-by-side” (SBS) 3D video stream is re-formatted into a “frame sequential” (serialized) format that appears as a conventional video stream input to the 2D display. The interface component also generates as an output a timing signal (synchronized with the converted frames) that is transmitted to a 3D viewing device (e.g., glasses). Therefore, as long as the 3D viewing device remains synchronized with the sequence of frames shown on the 2D display, the user will actually be viewing an interactive 3D video.
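The SBS-to-frame-sequential conversion can be sketched as splitting each incoming frame into left and right halves, serializing them, and emitting a per-frame sync flag for the shutter glasses. This is a minimal sketch of the format change only, not the interface component's video processing:

```python
def sbs_to_frame_sequential(frames):
    """Split each side-by-side frame into left/right halves, serialize
    them, and emit a sync flag per output frame (True = left eye) for
    the 3D viewing device."""
    out_frames, sync = [], []
    for frame in frames:
        half = len(frame[0]) // 2
        left = [row[:half] for row in frame]
        right = [row[half:] for row in frame]
        out_frames += [left, right]
        sync += [True, False]
    return out_frames, sync

# One 2x4 SBS frame: left half is 1s, right half is 2s.
sbs = [[[1, 1, 2, 2], [1, 1, 2, 2]]]
seq, sync = sbs_to_frame_sequential(sbs)
```

Note that the output runs at twice the incoming frame rate, which is why the timing signal must stay synchronized with the converted frames.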

3D DISPLAY SYSTEM AND 3D DISPLAY METHOD

A 3D display system and a 3D display method are provided. The 3D display system includes a 3D display, a memory, and a processor. The processor is coupled to the 3D display and the memory and is configured to execute the following steps. As a first type application program is executed, an image content of the first type application program is captured, and a stereo format image is generated according to the image content of the first type application program. The stereo format image is delivered to a runtime complying with a specific development standard through an application program interface complying with the specific development standard. A display frame processing associated with the 3D display is performed on the stereo format image through the runtime, and a 3D display image content generated by the display frame processing is provided to the 3D display for displaying.
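The capture → stereo-format → runtime → display flow can be sketched as a chain of three stages. Every function name below is an illustrative stand-in; the patent does not name the development standard, and real stereo-format generation and display-frame processing are far more involved:

```python
def capture_app_image(app):
    """Stand-in for capturing the running application's frame."""
    return app["frame"]

def to_stereo_format(image, disparity=1):
    """Build a naive stereo pair by shifting the image horizontally
    (a placeholder for real stereo-format generation)."""
    left = image
    right = [row[disparity:] + row[:disparity] for row in image]
    return {"left": left, "right": right}

def runtime_process(stereo):
    """Stand-in for the runtime's display-frame processing: here it
    interleaves the two views column by column for the 3D panel."""
    return [
        [px for pair in zip(lrow, rrow) for px in pair]
        for lrow, rrow in zip(stereo["left"], stereo["right"])
    ]

app = {"frame": [[1, 2, 3], [4, 5, 6]]}
stereo = to_stereo_format(capture_app_image(app))
display_frame = runtime_process(stereo)
```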
