G06T15/205

IMAGE PROCESSING SYSTEM AND METHOD FOR GENERATING A SUPER-RESOLUTION IMAGE
20230237616 · 2023-07-27

The present application discloses an image processing system. The image processing system comprises a first processing unit and a memory. The first processing unit receives a three-dimensional scene comprising a plurality of objects, generates a depth map according to distances between the objects and a viewpoint, renders a normal-resolution image of the scene observed from the viewpoint according to the depth map, appends depth information to the normal-resolution image to generate a normal-resolution image layer, and outputs the normal-resolution image layer. The normal-resolution image layer comprises three color channels and one alpha channel, in which color values of each of pixels of the normal-resolution image are stored in the three color channels of the normal-resolution image layer, and first depth values of the pixels of the normal-resolution image are stored in the alpha channel of the normal-resolution image layer. The memory stores the normal-resolution image layer.
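The depth-in-alpha packing described above can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation; the function name `pack_image_layer` and the min–max normalization of depth into the [0, 1] alpha range are assumptions made for the example:

```python
import numpy as np

def pack_image_layer(rgb, depth):
    """Pack a rendered RGB image and its per-pixel depth values into a
    single 4-channel layer: channels 0-2 hold color, channel 3 holds depth."""
    h, w, _ = rgb.shape
    layer = np.empty((h, w, 4), dtype=np.float32)
    layer[..., :3] = rgb
    # Normalize depth to [0, 1] so it fits a conventional alpha channel.
    d_min, d_max = depth.min(), depth.max()
    layer[..., 3] = (depth - d_min) / max(d_max - d_min, 1e-8)
    return layer

rgb = np.random.rand(4, 4, 3).astype(np.float32)
depth = np.linspace(1.0, 10.0, 16, dtype=np.float32).reshape(4, 4)
layer = pack_image_layer(rgb, depth)
```

A downstream consumer (for example, a super-resolution pass) can then read color and depth from one buffer instead of two.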

SYSTEM AND METHOD FOR DYNAMIC IMAGES VIRTUALISATION
20230022344 · 2023-01-26

A dynamic image virtualization system and method configured to utilize an AI model to conduct a reduced-latency, real-time prediction process upon at least one input image, wherein the prediction process is designed to create free-viewpoint, 3D-extrapolated output dynamic images tailored in advance to the preferences or needs of a user and comprising more visual data than the at least one input image.

MULTI-VIEW NEURAL HUMAN RENDERING
20230027234 · 2023-01-26

An image-based method of modeling and rendering a three-dimensional model of an object is provided. The method comprises: obtaining a three-dimensional point cloud at each frame of a synchronized, multi-view video of an object, wherein the video comprises a plurality of frames; extracting a feature descriptor for each point in the point cloud for the plurality of frames without storing the feature descriptor for each frame; producing a two-dimensional feature map for a target camera; and using an anti-aliased convolutional neural network to decode the feature map into an image and a foreground mask.
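The step of producing a two-dimensional feature map for a target camera could look roughly like the sketch below: each 3-D point is projected through the target camera's intrinsics and extrinsics, and its feature descriptor is splatted to the nearest pixel with a z-buffer keeping the closest point. The function name `splat_features` and the nearest-pixel splatting scheme are illustrative assumptions; the patent does not specify this projection code:

```python
import numpy as np

def splat_features(points, feats, K, R, t, hw):
    """Project 3-D points into the target camera and splat each point's
    feature descriptor into a 2-D feature map (nearest pixel, z-buffered)."""
    h, w = hw
    c = feats.shape[1]
    fmap = np.zeros((h, w, c), dtype=np.float32)
    zbuf = np.full((h, w), np.inf, dtype=np.float32)
    cam = points @ R.T + t            # world -> camera coordinates
    uvw = cam @ K.T                   # camera -> homogeneous pixel coords
    for i in range(points.shape[0]):
        z = uvw[i, 2]
        if z <= 0:
            continue                  # behind the camera
        u = int(round(uvw[i, 0] / z))
        v = int(round(uvw[i, 1] / z))
        if 0 <= v < h and 0 <= u < w and z < zbuf[v, u]:
            zbuf[v, u] = z            # keep only the nearest point per pixel
            fmap[v, u] = feats[i]
    return fmap
```

In the patent's pipeline, the resulting feature map would then be decoded by the anti-aliased convolutional network into an image and a foreground mask.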

DISPLAY DEVICE AND DISPLAY METHOD
20230023455 · 2023-01-26

In accordance with an embodiment, a display device includes a display unit performing a control for displaying a display image related to a predetermined object as a two-dimensional image in a virtual space; and a parameter acquisition unit acquiring a first parameter related to a viewpoint in the virtual space and a second parameter defining a change in the predetermined object, wherein the display unit changes an inclination of the two-dimensional image in the virtual space based on the first parameter and performs the control for displaying the display image related to the predetermined object based on the first parameter and the second parameter.
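One simple reading of "changing an inclination of the two-dimensional image based on the first parameter" is a billboard rotation: the image plane is turned to face the viewpoint. The sketch below is only that reading, under the assumption of a y-up coordinate system; the function name `billboard_yaw` is hypothetical:

```python
import math

def billboard_yaw(image_pos, viewpoint_pos):
    """Yaw (radians) that turns a 2-D image plane in the virtual space to
    face the viewpoint, assuming y-up coordinates (x, y, z)."""
    dx = viewpoint_pos[0] - image_pos[0]
    dz = viewpoint_pos[2] - image_pos[2]
    return math.atan2(dx, dz)
```

The second parameter (the change in the object) would then drive which display image is shown at that inclination.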

REALITY VS VIRTUAL REALITY RACING

A method for displaying a virtual vehicle includes: calculating a virtual world comprising the virtual vehicle and a representation of a physical object at a virtual position; calculating a virtual position of a point of view within the virtual world based on a position of the point of view at a physical racecourse; and calculating a portion of the virtual vehicle within the virtual world that is visible from the virtual position of the point of view, wherein the visible portion comprises the portion of the virtual vehicle that is unobscured, from the virtual position of the point of view, by the representation of the physical object at its virtual position.
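The visibility step amounts to a per-pixel depth comparison: a vehicle pixel is shown only where it is nearer to the viewpoint than the representation of the physical object. A minimal sketch, assuming depths are given per pixel with `None` meaning "not covered" (the function name and data layout are illustrative, not from the patent):

```python
def visible_fraction(vehicle_depth, occluder_depth):
    """Per-pixel visibility test: a virtual-vehicle pixel is visible only
    where its depth is smaller than the occluder's depth at that pixel.
    None means the surface does not cover that pixel at all."""
    visible = 0
    total = 0
    for v, o in zip(vehicle_depth, occluder_depth):
        if v is None:
            continue                      # vehicle absent at this pixel
        total += 1
        if o is None or v < o:
            visible += 1                  # unobscured by the physical object
    return visible / total if total else 0.0
```

A real renderer would do this with a depth buffer on the GPU, but the decision per pixel is the same comparison.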

Scalable FOV+ for VR 360 video delivery to remote end users

A distribution device for delivering a selected viewport stream of virtual reality (VR) data to each of a plurality of client devices, comprising a processor configured for receiving a plurality of extended viewport streams of a VR video file, each comprising a sequence of extended field of view (EFOV) frames created for a respective one of a plurality of overlapping segments constituting a sphere defined in the VR video file, and delivering a selected one of the plurality of extended viewport streams to each of the plurality of client devices by performing the following for each of the client devices in each of a plurality of iterations: (1) receiving current orientation data of the respective client device; (2) selecting one of the plurality of extended viewport streams according to the current orientation data; and (3) transmitting the selected extended viewport stream to the respective client device.
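Step (2), selecting a stream from orientation data, can be sketched as picking the segment whose center is angularly closest to the client's current yaw. This sketch assumes the overlapping segments are evenly spaced around the equator of the sphere, which the abstract does not state; the function name `select_viewport_stream` is also an assumption:

```python
def select_viewport_stream(yaw_deg, num_segments):
    """Pick the extended-viewport stream whose segment center is closest
    to the client's current yaw; segments evenly tile 360 degrees."""
    seg_width = 360.0 / num_segments
    centers = [i * seg_width for i in range(num_segments)]

    def ang_dist(a, b):
        # Shortest angular distance on a circle, handling wrap-around.
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    return min(range(num_segments), key=lambda i: ang_dist(yaw_deg, centers[i]))
```

Because the EFOV frames extend beyond the exact viewport, small head movements between iterations stay inside the already-delivered stream.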

Surface aware lens

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program, and a method for rendering three-dimensional virtual objects within real-world environments. The virtual rendering of a three-dimensional virtual object can be altered appropriately as a user moves around the object in the real world through utilization of a redundant tracking system comprising multiple tracking sub-systems. Virtual object rendering can be performed with respect to a reference surface in a real-world three-dimensional space depicted in the camera view of a mobile computing device.

Image generation apparatus, image generation method, and program

Provided is an image generation technology capable of suppressing the unpleasantness associated with fluctuations in image quality caused by the viewer's viewpoint movement. A synthesis ratio determination unit of an image generation apparatus includes: an image quality evaluation index calculation unit that, for a plurality of synthesis ratios, generates a synthesis image J₁ from an observation viewpoint image I₁ and an intermediate viewpoint image I₂, and a synthesis image J₃ from an observation viewpoint image I₃ and the intermediate viewpoint image I₂; calculates, using the synthesis images J₁ and J₃, an image quality evaluation index at an observation viewpoint V₁, at an intermediate viewpoint V₂, and at an observation viewpoint V₃; and calculates a variation v of the image quality evaluation index from those three indices; and an image quality evaluation index comparison unit that determines a synthesis ratio A based on the variation v of the image quality evaluation index.
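The selection logic reduces to: for each candidate ratio, evaluate the quality index at the three viewpoints and keep the ratio whose indices fluctuate least. The sketch below assumes the variation v is the variance of the three indices, which the abstract does not specify; the names `choose_synthesis_ratio` and `quality_fn` are illustrative:

```python
def choose_synthesis_ratio(ratios, quality_fn):
    """For each candidate synthesis ratio, evaluate the image-quality index
    at viewpoints V1, V2, V3 and keep the ratio with the least fluctuation.
    quality_fn(ratio) -> (q1, q2, q3), the indices at the three viewpoints."""
    def variation(a):
        q1, q2, q3 = quality_fn(a)
        mean = (q1 + q2 + q3) / 3.0
        # Variance across viewpoints as a stand-in for the variation v.
        return ((q1 - mean) ** 2 + (q2 - mean) ** 2 + (q3 - mean) ** 2) / 3.0
    return min(ratios, key=variation)
```

Minimizing the fluctuation, rather than maximizing quality at any single viewpoint, is what keeps the image quality stable as the viewer moves.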

Multiview video encoding and decoding method

The present disclosure provides a multi-view image decoding method including: obtaining, from a bitstream, three-dimensional geometry information indicating a three-dimensional space of a multi-view image, view independent component information indicating a view independent component, which is uniformly applied to every view, and view dependent component information indicating a view dependent component, which is differently applied according to views; determining a view dependent component for a texture map of a current view from the view dependent component information; generating the texture map of the current view from a view independent component of the view independent component information and the determined view dependent component for the current view; and reconstructing a current view image according to a three-dimensional space that is constructed according to the texture map of the current view and the three-dimensional geometry information.
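The texture-map generation step combines the component shared by every view with the residual component signalled for the current view. A minimal NumPy sketch, assuming the components combine additively (the abstract says only "generating ... from" both components; the additive model and the name `reconstruct_texture` are assumptions):

```python
import numpy as np

def reconstruct_texture(view_independent, view_dependent, view_id):
    """Texture map for the current view = component uniformly applied to
    every view + the residual component signalled for this view."""
    return view_independent + view_dependent[view_id]

shared = np.full((2, 2), 0.6)                      # view-independent component
residuals = {0: np.full((2, 2), -0.1),             # per-view dependent components
             1: np.full((2, 2), 0.2)}
tex0 = reconstruct_texture(shared, residuals, 0)
```

Splitting the signal this way lets the bitstream carry the shared component once while only the small per-view residuals vary across views.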

Depth map re-projection on user electronic devices

A method includes rendering, on displays of an extended reality (XR) display device, a first sequence of image frames based on image data received from an external electronic device associated with the XR display device. The method further includes detecting an interruption to the image data received from the external electronic device, and accessing a plurality of feature points from a depth map corresponding to the first sequence of image frames. The plurality of feature points includes movement and position information of one or more objects within the first sequence of image frames. The method further includes performing a re-warping to at least partially re-render the one or more objects based at least in part on the plurality of feature points and spatiotemporal data, and rendering a second sequence of image frames corresponding to the partial re-rendering of the one or more objects.
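The re-warping step can be sketched as extrapolating each tracked feature point forward in time from its last known movement information, bridging the frames lost while the stream from the external device is interrupted. The point representation and the constant-velocity model below are assumptions for illustration; the patent does not specify them:

```python
def rewarp_points(feature_points, dt):
    """Extrapolate each tracked feature point forward by dt seconds using
    its last known velocity (a constant-velocity motion model)."""
    warped = []
    for p in feature_points:
        x, y, depth = p["pos"]        # last known position + depth-map value
        vx, vy, vz = p["vel"]         # last known per-axis velocity
        warped.append({"pos": (x + vx * dt, y + vy * dt, depth + vz * dt),
                       "vel": p["vel"]})
    return warped
```

The extrapolated points would then drive the partial re-rendering of the objects in the second sequence of frames until the image data stream resumes.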