H04N19/20

Live Teleporting System and Apparatus
20180007314 · 2018-01-04 ·

A method of producing a Pepper's Ghost includes projecting an image of a subject onto a reflective and transparent screen to create a virtual image of the subject alongside an object, the subject in the virtual image having a colour temperature. The object is illuminated with light having a colour and intensity that results in a colour temperature of the object at least approximately matching the colour temperature of the subject in the virtual image. The subject in the virtual image also has a luminance, and the object may be illuminated with light having a colour and intensity that results in a luminance of the object at least approximately matching the luminance of the subject in the virtual image.
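The matching step described above can be sketched as a simple correction computation. This is an illustrative sketch, not the patented method: the function names, the ratio-based correction, and the tolerance values are all assumptions introduced here.

```python
def match_lighting(subject_cct_k, subject_lum_nits,
                   object_cct_k, object_lum_nits,
                   cct_tol=0.05, lum_tol=0.05):
    """Return multiplicative corrections that would bring the object's
    lighting to the virtual subject's colour temperature (kelvin) and
    luminance (nits), plus whether they already approximately match.
    Hypothetical helper; tolerances are illustrative, not from the patent."""
    cct_scale = subject_cct_k / object_cct_k
    lum_scale = subject_lum_nits / object_lum_nits
    matched = (abs(cct_scale - 1.0) <= cct_tol and
               abs(lum_scale - 1.0) <= lum_tol)
    return cct_scale, lum_scale, matched
```

For example, an object lit at 3200 K would need its lighting warmed or cooled by the returned `cct_scale` factor to match a 5600 K virtual subject.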

Point cloud compression with supplemental information messages

A system comprises an encoder configured to compress attribute and/or spatial information for a point cloud and/or a decoder configured to decompress compressed attribute and/or spatial information for the point cloud. To compress the attribute and/or spatial information, the encoder is configured to convert a point cloud into an image based representation. Also, the decoder is configured to generate a decompressed point cloud based on an image based representation of a point cloud. Additionally, an encoder is configured to signal and/or a decoder is configured to receive a supplementary message comprising volumetric tiling information that maps portions of 2D image representations to objects in the point cloud. In some embodiments, characteristics of the object may additionally be signaled using the supplementary message or additional supplementary messages.
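The volumetric tiling message can be pictured as a list of 2D rectangles, each tagged with the object it maps to. The sketch below is a hypothetical stand-in for such a payload; the class name, field layout, and flat-integer serialization are assumptions, not the actual bitstream syntax.

```python
from dataclasses import dataclass

@dataclass
class VolumetricTile:
    """One rectangle in the packed 2D image, mapped to a point-cloud object."""
    object_id: int
    x: int
    y: int
    w: int
    h: int

def pack_tiling_message(tiles):
    """Serialize tiles into a flat list of ints (a toy stand-in for a
    supplementary-message payload): a count followed by 5 ints per tile."""
    payload = [len(tiles)]
    for t in tiles:
        payload += [t.object_id, t.x, t.y, t.w, t.h]
    return payload

def unpack_tiling_message(payload):
    """Decoder side: recover the tile list from the flat payload."""
    n, rest = payload[0], payload[1:]
    return [VolumetricTile(*rest[i * 5:(i + 1) * 5]) for i in range(n)]
```

A decoder receiving such a message can then associate any region of the decoded 2D image with the 3D object it came from.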

IMMERSIVE VIDEO CODING USING OBJECT METADATA
20230007277 · 2023-01-05 ·

Methods, apparatus, systems, and articles of manufacture for video coding using object metadata are disclosed. An example apparatus includes an object separator to separate input views into layers associated with respective objects, generating object layers for the geometry data and texture data of the input views; a pruner to project a first object layer of a first basic view of at least one basic view against the first object layer of a first additional view of at least one additional view to generate a first pruned view and a first pruning mask; a patch packer to tag a patch corresponding to the first pruning mask with an object identifier of the first object; and an atlas generator to generate at least one atlas, including the patch, for inclusion in encoded video data.
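The pruning and tagging pipeline above can be sketched on a toy per-pixel representation, where each view layer is a 2D grid of object identifiers. Everything here is an illustrative assumption: real pruning reprojects geometry between views, whereas this sketch only compares aligned grids.

```python
def prune_view(basic_layer, additional_layer, object_id):
    """Pixels of `object_id` visible in the additional view but not already
    covered by the basic view survive pruning (mask=1); redundant pixels
    are pruned (mask=0). Toy stand-in for geometric reprojection."""
    mask = []
    for row_b, row_a in zip(basic_layer, additional_layer):
        mask.append([1 if (a == object_id and b != object_id) else 0
                     for b, a in zip(row_b, row_a)])
    return mask

def tag_patch(mask, object_id):
    """Bundle the surviving pixels into a patch tagged with its object id,
    ready for packing into an atlas (hypothetical patch structure)."""
    return {"object_id": object_id, "pixels": sum(sum(r) for r in mask)}
```

The object identifier carried on each patch is what lets a decoder reconstruct, or selectively render, individual objects from the atlas.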

METHOD AND SYSTEM FOR OPTIMIZING IMAGE AND VIDEO COMPRESSION FOR MACHINE VISION

A method and a system described herein provide for optimizing image and/or video compression for machine perception. According to an aspect, the method comprises receiving a raw image frame from a camera sensor, detecting a predefined object in the raw image frame, and marking a region around the predefined object within the raw image frame as a region of interest (ROI). Based on the ROI, a partitioning scheme, a prediction mode, and a quantization parameter are determined to improve coding efficiency. Machine perception efficiency is improved by selecting the quantization parameter table used for compressing and encoding the raw image or video frame based on a selected machine vision task. The quantization parameter table is selected based on training of the selected machine vision task using cost-function optimization.
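The per-task, ROI-driven quantization choice can be sketched as a small lookup: blocks overlapping the ROI get a lower QP (finer quantization), the background a higher one, and the table itself depends on the machine-vision task. The task names, QP values, and rectangle convention below are all hypothetical.

```python
# Hypothetical QP tables, one per machine-vision task; in the described
# system these would come from training with cost-function optimization.
QP_TABLES = {
    "detection":    {"roi": 22, "background": 38},
    "segmentation": {"roi": 20, "background": 34},
}

def qp_for_block(task, block, roi):
    """Pick a QP for a coding block: finer inside the ROI, coarser outside.
    `block` and `roi` are (x, y, w, h) rectangles."""
    bx, by, bw, bh = block
    rx, ry, rw, rh = roi
    overlaps = bx < rx + rw and rx < bx + bw and by < ry + rh and ry < by + bh
    table = QP_TABLES[task]
    return table["roi"] if overlaps else table["background"]
```

Spending bits preferentially where the downstream model looks is what lets the encoder cut rate without hurting task accuracy.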

Point cloud compression using video encoding with time consistent patches

A system comprises an encoder configured to compress attribute and/or spatial information for a point cloud and/or a decoder configured to decompress compressed attribute and/or spatial information for the point cloud. To compress the attribute and/or spatial information, the encoder is configured to convert a point cloud into an image based representation. Also, the decoder is configured to generate a decompressed point cloud based on an image based representation of a point cloud. In some embodiments, an encoder generates time-consistent patches for multiple versions of the point cloud at multiple moments in time and uses the time-consistent patches to generate image based representations of the point cloud at the multiple moments in time.
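One way to picture time-consistent patches is as a frame-to-frame matching problem: a patch in the current frame should reuse the atlas slot of the nearest patch from the previous frame, so the packed images stay stable over time and video-compress well. The greedy centroid matching below is an illustrative assumption, not the patented algorithm.

```python
def assign_consistent_slots(prev_patches, cur_patches):
    """Greedily match current patches to previous ones by squared centroid
    distance, so a patch keeps its atlas slot across frames.
    Both arguments map patch id -> (cx, cy) centroid."""
    slots = {}
    used = set()
    for pid, (cx, cy) in cur_patches.items():
        best, best_d = None, None
        for qid, (px, py) in prev_patches.items():
            if qid in used:
                continue
            d = (cx - px) ** 2 + (cy - py) ** 2
            if best_d is None or d < best_d:
                best, best_d = qid, d
        if best is not None:
            slots[pid] = best   # reuse the matched patch's slot
            used.add(best)
    return slots
```

Keeping a surface at the same atlas position across frames turns patch motion into small residuals that the underlying video codec's inter prediction handles efficiently.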

Applications for decoder-side modeling of objects identified in decoded video data

Techniques are disclosed for coding and decoding video data using object recognition and object modeling as a basis of coding and error recovery. A video decoder may decode coded video data received from a channel and perform object recognition on the decoded video data obtained therefrom; when an object is recognized in the decoded video data, the video decoder may generate a model representing the recognized object and store data representing the model locally. The video decoder may communicate the model data to an encoder, where it may form a basis of error mitigation and recovery. The video decoder also may monitor deviation patterns in the object model and associated patterns in audio content; if video decoding is suspended due to operational errors, the video decoder may generate simulated video data by analyzing audio data received during the suspension period and developing video data from the object model and the deviation(s) associated with patterns detected in the audio data.
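The decoder-side flow can be sketched as a loop that updates an object model on every successfully decoded frame and falls back to synthesizing from that model when decoding is suspended. This is a minimal sketch: the class, the use of the last decoded frame as the "model", and the loss signal are all stand-ins for the much richer object modeling (and audio-driven animation) described above.

```python
class ModelingDecoder:
    """Toy decoder that keeps a model of recognized content and uses it
    for concealment while normal decoding is suspended."""

    def __init__(self):
        self.model = None  # stand-in for a stored object model

    def decode(self, coded_frame):
        """Return (frame, concealed). `coded_frame is None` signals that
        decoding is suspended (e.g. channel errors)."""
        if coded_frame is not None:
            frame = coded_frame      # stand-in for actual entropy/transform decoding
            self.model = frame       # update the locally stored object model
            return frame, False
        if self.model is not None:
            return self.model, True  # synthesize output from the model
        return None, True            # nothing to conceal with yet
```

In the full technique, the concealment path would animate the stored object model using deviations inferred from the still-arriving audio, rather than replaying the last frame.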