Patent classifications
H04N13/178
Apparatus, a method and a computer program for volumetric video
There are disclosed various methods, apparatuses and computer program products for volumetric video encoding and decoding. In some embodiments of a method for encoding, one or more patches formed from three-dimensional image information are obtained. The one or more patches represent projection data of at least a part of an object onto a projection plane. A priority is determined for at least one of the one or more patches, and the one or more patches are projected to the projection plane. An indication of the priority is encoded into or along a bitstream. In some embodiments of a method for decoding, one or more encoded patches formed from three-dimensional image information are received. At least one indication of priority determined for at least one of the one or more patches is also received, and the patches are reconstructed in the order defined by the at least one indication of priority.
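The priority-ordered reconstruction described above can be sketched as follows. This is an illustrative toy, not the patent's bitstream syntax: the `Patch` fields and the lower-value-reconstructed-first convention are assumptions.

```python
# Sketch of priority-signaled patch reconstruction (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Patch:
    patch_id: int
    priority: int                    # assumption: lower value = reconstruct first
    pixels: list = field(default_factory=list)

def encode_priorities(patches):
    """Produce the per-patch priority indications carried in/along the bitstream."""
    return [(p.patch_id, p.priority) for p in patches]

def reconstruct_in_priority_order(patches, indications):
    """Reconstruct patches in the order defined by the signaled priorities."""
    order = {pid: pri for pid, pri in indications}
    return sorted(patches, key=lambda p: order[p.patch_id])

patches = [Patch(0, 2), Patch(1, 0), Patch(2, 1)]
ordered = reconstruct_in_priority_order(patches, encode_priorities(patches))
# ordered patch ids: [1, 2, 0]
```

A decoder could use such an ordering, for example, to reconstruct the most important patches first under a time or bandwidth budget.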
Visual overlay of distance information in video feed
One embodiment provides a method, including: initiating, using at least one camera of an information handling device, a video feed; receiving marking input on an object in the video feed; identifying, using at least one distance sensor of the information handling device, distance information from the information handling device to the object; and providing, in the video feed and presented concurrently with the marking input, a visual overlay of the distance information. Other aspects are described and claimed.
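The overlay step can be sketched as below. The function and field names are hypothetical; a real implementation would read the distance from the device's distance sensor rather than take it as an argument.

```python
# Sketch: build a distance overlay shown concurrently with the marking input.
def make_overlay(mark_xy, distance_mm):
    """Given the marked position on the object and the measured distance
    from the device to the object (in millimetres), produce an overlay
    record to render into the video feed."""
    return {
        "position": mark_xy,                      # where the user marked the object
        "label": f"{distance_mm / 1000:.2f} m",   # distance rendered as text
    }

overlay = make_overlay((120, 80), 2450)
# overlay["label"] == "2.45 m"
```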
INFORMATION PROCESSING APPARATUS, CONTROL METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
An information processing apparatus includes: a processor; and a memory storing a program which, when executed by the processor, causes the information processing apparatus to obtain an image and correction information on a first optical system and a second optical system, the image including a first area corresponding to a first image inputted via the first optical system and a second area corresponding to a second image inputted via the second optical system having a predetermined parallax with respect to the first optical system; execute correcting processing of correcting, based on the correction information, positions of a pixel included in the first area and a pixel included in the second area in the image, and generate a processed image by executing processing of transforming the corrected first area and the corrected second area.
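A minimal sketch of the correct-then-transform pipeline, assuming a side-by-side dual-lens image whose left and right halves are the first and second areas. The per-area correction is reduced to a horizontal pixel shift, and the transform is a stand-in that exchanges the two areas (as dual-lens VR cameras often require); both simplifications are assumptions for illustration.

```python
# Sketch: positional correction of the two optical-system areas, then a transform.
def shift_rows(area, dx):
    """Correct pixel positions by a horizontal offset dx, padding with 0."""
    w = len(area[0])
    out = []
    for row in area:
        if dx >= 0:
            out.append([0] * dx + row[: w - dx])
        else:
            out.append(row[-dx:] + [0] * (-dx))
    return out

def process(image, dx_first, dx_second):
    """Split a side-by-side image into first/second areas, correct each
    based on (toy) correction information, then transform by exchanging
    the corrected areas to produce the processed image."""
    w = len(image[0]) // 2
    first = [row[:w] for row in image]    # area from the first optical system
    second = [row[w:] for row in image]   # area from the second optical system
    first = shift_rows(first, dx_first)
    second = shift_rows(second, dx_second)
    return [s + f for f, s in zip(first, second)]  # stand-in transform: swap areas
```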
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
There is provided an image processing apparatus comprising an obtainment unit and a generation unit. The obtainment unit obtains a first circular fisheye image accompanied by a first missing region in which no pixel value is present. The generation unit generates a first equidistant cylindrical projection image by performing first equidistant cylindrical transformation processing based on the first circular fisheye image. The generation unit generates the first equidistant cylindrical projection image such that a first corresponding region corresponding to the first missing region has a pixel value in the first equidistant cylindrical projection image.
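The key point, giving the missing region a pixel value in the output, can be illustrated with a toy transform. The polar unwrap below stands in for true equidistant cylindrical transformation, and the output size and fill strategy are assumptions.

```python
# Toy unwrap of a square circular fisheye image: samples that fall in the
# fisheye's missing region (outside the image circle or the frame) are
# given a synthesized pixel value `fill` rather than left undefined.
import math

def unwrap_fisheye(fish, fill=0):
    n = len(fish)
    cx = cy = (n - 1) / 2.0        # image-circle centre
    r_max = n / 2.0                # image-circle radius
    h, w = n // 2, 2 * n           # toy output size (assumption)
    out = []
    for j in range(h):                       # radius axis
        r = (j + 0.5) / h * r_max
        row = []
        for i in range(w):                   # angle axis
            a = (i + 0.5) / w * 2 * math.pi
            x = cx + r * math.cos(a)
            y = cy + r * math.sin(a)
            xi, yi = int(round(x)), int(round(y))
            inside = (0 <= xi < n and 0 <= yi < n
                      and (x - cx) ** 2 + (y - cy) ** 2 <= r_max ** 2)
            row.append(fish[yi][xi] if inside else fill)
        out.append(row)
    return out

result = unwrap_fisheye([[7] * 4 for _ in range(4)], fill=0)
```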
Generating composite video stream for display in VR
A processor system and computer-implemented method may be provided for generating a composite video stream which may combine a background video and a foreground video stream into one stream. For that purpose, a spatially segmented encoding of the background video may be obtained, for example in the form of a tiled stream. The foreground video stream may be received, for example, from a (possibly different) client device. The foreground video stream may be a real-time stream, e.g., when being used in real-time communication. The image data of the foreground video stream may be inserted into the background video by decoding select segments of the background video, inserting the foreground image data into the decoded background image data of these segments, and encoding the resulting composite image data to obtain composite segments which, together with the non-processed segments of the background video, form a spatially segmented encoding of a composite video.
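The selective-transcoding idea can be sketched as follows: only the background tiles the foreground overlaps are decoded, composited, and re-encoded, while all other tiles pass through in encoded form. The byte-string "codec" and tile layout are stand-ins for a real tiled video codec.

```python
# Sketch of compositing into a spatially segmented (tiled) background.
def decode(enc):                 # toy codec stand-in
    return enc.decode()

def encode(img):                 # toy codec stand-in
    return img.encode()

def composite_tiles(bg_tiles, fg_tile_ids, insert):
    """bg_tiles: tile_id -> encoded tile of the background video.
    fg_tile_ids: tiles the foreground video overlaps.
    insert: function compositing foreground image data into decoded tile data."""
    out = {}
    for tid, enc in bg_tiles.items():
        if tid in fg_tile_ids:
            out[tid] = encode(insert(decode(enc)))   # decode, insert, re-encode
        else:
            out[tid] = enc                           # non-processed segment, copied as-is
    return out

bg = {0: b"sky", 1: b"sky", 2: b"grass"}
result = composite_tiles(bg, {1}, lambda img: img + "+person")
# result[1] == b"sky+person"; tiles 0 and 2 are passed through unchanged
```

Only transcoding the overlapped tiles keeps the per-frame cost proportional to the foreground's footprint rather than the full background resolution.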
Method and device for transmitting information on three-dimensional content including multiple view points
Provided is a method for transmitting metadata for omnidirectional content including a plurality of viewpoints. The method comprises identifying the metadata for the omnidirectional content including the plurality of viewpoints; and transmitting the identified metadata, wherein the metadata includes information about an identifier (ID) of a viewpoint group including at least one viewpoint of the plurality of viewpoints, and wherein the at least one viewpoint in the viewpoint group shares a common reference coordinate system.
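A possible shape for such metadata is sketched below. The field names are illustrative, not the standardized syntax; the essential content is the group identifier and the fact that all viewpoints in a group share one reference coordinate system.

```python
# Hypothetical viewpoint-group metadata layout (field names are assumptions).
from dataclasses import dataclass

@dataclass(frozen=True)
class ViewpointGroup:
    group_id: int               # identifier (ID) of the viewpoint group
    viewpoint_ids: tuple        # viewpoints sharing the common reference frame
    reference_origin: tuple     # origin of the common reference coordinate system

def build_metadata(groups):
    """Assemble the metadata to transmit for the omnidirectional content."""
    return {
        "viewpoint_groups": [
            {
                "id": g.group_id,
                "viewpoints": list(g.viewpoint_ids),
                "common_reference_origin": list(g.reference_origin),
            }
            for g in groups
        ]
    }

meta = build_metadata([ViewpointGroup(1, (10, 11), (0.0, 0.0, 0.0))])
```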