Patent classifications
H04N19/29
POINT CLOUD COMPRESSION USING OCCUPANCY NETWORKS
Occupancy networks enable efficient and flexible point cloud compression. In addition to the voxel-based representation, occupancy networks can handle points, meshes, or projected images of 3D objects, making them very flexible in terms of input signal representation. The probability of occupancy at each position is estimated using occupancy networks instead of sparse convolutional neural networks. A compression implementation using an occupancy network enables scalability with effectively unlimited reconstruction resolution.
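A minimal sketch of the idea, assuming an untrained MLP stands in for the occupancy network (the weights here are random placeholders, and none of the names come from the patent): the network is a continuous function from a 3D coordinate to an occupancy probability, so the decoder can query it on a grid of any resolution rather than being tied to a fixed voxel grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder occupancy network: a tiny MLP mapping a 3D coordinate to an
# occupancy probability. A real codec would train this, typically also
# conditioning it on a latent code decoded from the bitstream.
W1 = rng.normal(size=(3, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1)); b2 = np.zeros(1)

def occupancy(points):
    """points: (N, 3) array -> (N,) occupancy probabilities in (0, 1)."""
    h = np.tanh(points @ W1 + b1)
    logits = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-logits[:, 0]))

# Because the network is continuous in position, reconstruction resolution is
# a decoder-side choice: here an 8^3 query grid, but 256^3 works identically.
axis = np.linspace(-1.0, 1.0, 8)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1).reshape(-1, 3)
probs = occupancy(grid)
occupied = grid[probs > 0.5]  # reconstructed point cloud at this resolution
```

The same `occupancy` function could equally be evaluated at mesh vertices or along camera rays, which is what makes the representation agnostic to the input signal type.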
Split Rendering To Improve Tolerance To Delay Variation In Extended Reality Applications With Remote Rendering
An improved split rendering process of the present disclosure mitigates the impact of delay variation in XR applications with remote rendering. A visual scene is split-rendered to generate graphic layers from 3D objects in the scene. The server node groups and sorts the graphic layers by QoE importance to create graphic layer groups, encodes each graphic layer group into a composite video frame, and appends metadata to the composite video frame. The encoded video frames are then transmitted, in sorted order based on quality rank, to a client device (e.g., an HMD worn by a user), where they are decoded and displayed. The client device further sends feedback to the server indicating which graphic layer groups were timely received.
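The server-side grouping and ordering step can be sketched as follows. All class names, the group size, and the QoE scores are hypothetical; the disclosure does not specify a concrete API or how importance is computed (gaze proximity or depth are plausible inputs).

```python
from dataclasses import dataclass

@dataclass
class GraphicLayer:
    layer_id: int
    qoe_importance: float  # hypothetical score, higher = more important

@dataclass
class GraphicLayerGroup:
    quality_rank: int      # 0 = most important, sent first
    layers: list

def group_and_sort(layers, group_size=2):
    # Sort layers by QoE importance (most important first), then slice the
    # ordered list into fixed-size groups, assigning each a quality rank.
    ordered = sorted(layers, key=lambda l: l.qoe_importance, reverse=True)
    return [
        GraphicLayerGroup(quality_rank=rank, layers=ordered[i:i + group_size])
        for rank, i in enumerate(range(0, len(ordered), group_size))
    ]

def encode_with_metadata(group):
    # Stand-in for video encoding: a composite "frame" carrying the group's
    # layers, with metadata appended so the client can reassemble the scene.
    return {
        "payload": [l.layer_id for l in group.layers],
        "metadata": {"quality_rank": group.quality_rank,
                     "layer_ids": [l.layer_id for l in group.layers]},
    }

layers = [GraphicLayer(0, 0.9), GraphicLayer(1, 0.2),
          GraphicLayer(2, 0.7), GraphicLayer(3, 0.5)]
# Transmission queue: frames leave in sorted order, best quality rank first,
# so under delay variation the most important content arrives soonest.
tx_queue = [encode_with_metadata(g) for g in group_and_sort(layers)]
```

Sending the highest-ranked groups first means that when a deadline is missed, it is the least important layers that get dropped, which is the tolerance-to-delay-variation property the abstract describes.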
Sub-picture Position Constraints In Video Coding
A video coding mechanism is disclosed. The mechanism includes receiving a bitstream comprising a plurality of sub-pictures partitioned from a picture such that the union of the sub-pictures covers the total area of the picture without overlap. The bitstream is parsed to obtain one or more of the sub-pictures. The one or more sub-pictures are decoded to create a video sequence, which is forwarded for display.
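The partition constraint itself (union covers the picture, no overlap) is easy to make precise. A sketch of a conformance check, assuming sub-pictures are given as `(x, y, w, h)` rectangles in samples (the function name and representation are illustrative, not from the patent):

```python
def subpictures_cover_without_overlap(pic_w, pic_h, subpics):
    """subpics: list of (x, y, w, h) rectangles.
    Returns True iff their union covers the picture exactly once."""
    covered = [[False] * pic_w for _ in range(pic_h)]
    for x, y, w, h in subpics:
        for row in range(y, y + h):
            for col in range(x, x + w):
                if row >= pic_h or col >= pic_w or covered[row][col]:
                    return False  # out of bounds, or two sub-pictures overlap
                covered[row][col] = True
    # Every sample must be covered by exactly one sub-picture.
    return all(all(row) for row in covered)

# A 4x2 picture split into two 2x2 sub-pictures: a valid partition.
assert subpictures_cover_without_overlap(4, 2, [(0, 0, 2, 2), (2, 0, 2, 2)])
# Overlapping sub-pictures violate the constraint.
assert not subpictures_cover_without_overlap(4, 2, [(0, 0, 3, 2), (2, 0, 2, 2)])
# A gap (union smaller than the picture) also violates it.
assert not subpictures_cover_without_overlap(4, 2, [(0, 0, 2, 2)])
```

The per-sample bitmap is deliberately naive; a real conformance checker would reason over the sub-picture grid units directly, but the constraint being enforced is the same.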
Adjustable modulation coding scheme to increase video stream robustness
Systems, apparatuses, and methods for utilizing different modulation coding schemes (MCSs) for different components of a video stream are disclosed. A system includes a transmitter sending a video stream over a wireless link to a receiver. The transmitter splits the video stream into low, medium, and high quality components, and then modulates the different components using different MCSs. For example, the transmitter modulates the low quality component using a lower, more robust MCS level to increase the likelihood that this component is received. The medium quality component is modulated using a medium MCS level, and the high quality component is modulated using a higher MCS level. If only the low quality component is received by the receiver, the receiver reconstructs and displays a low quality video frame from this component, which avoids a glitch in the display of the video stream.
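The mapping and the receiver's fallback behavior can be sketched as below. The MCS table values are hypothetical stand-ins (loosely modeled on Wi-Fi-style MCS indices, where a lower index means a more robust modulation and coding rate); the patent does not specify concrete levels.

```python
# Hypothetical MCS table: lower index = more robust modulation/coding,
# lower throughput; higher index = higher throughput, more fragile.
MCS_TABLE = {0: "BPSK 1/2", 4: "16-QAM 1/2", 8: "64-QAM 3/4"}

def assign_mcs(component):
    # The low quality (base) component gets the most robust MCS so it is
    # most likely to survive poor channel conditions; the high quality
    # component rides a faster but more fragile MCS.
    return {"low": 0, "medium": 4, "high": 8}[component]

def reconstruct(received):
    # Receiver-side fallback: display the best frame assemblable from what
    # arrived. Receiving only the low quality component still yields a
    # displayable frame, avoiding a glitch in the video stream.
    if "low" not in received:
        return None  # base component lost: nothing to display
    quality = "low"
    if "medium" in received:
        quality = "medium"
        if "high" in received:
            quality = "high"
    return quality
```

The design choice mirrors layered coding generally: robustness budget is spent where loss hurts most, since the low quality component alone is sufficient for a degraded-but-glitch-free frame.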
System and method of video encoding with data chunk
Techniques for encoding image data are discussed. Image data containing multiple image frames can be received from an image sensor associated with a vehicle. A first image frame and a second image frame may be associated with a first data chunk. First metadata associated with the first image frame can be identified, and second metadata associated with the second image frame can likewise be identified. The first data chunk can then be created to include the first image frame, the second image frame, the first metadata, and the second metadata. The first data chunk can be stored in a video container, where the container can be indexed to access the first image frame with the first metadata or the second image frame with the second metadata.
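A sketch of the chunk-and-index layout, with all structure names and metadata fields (timestamp, pose) hypothetical; the patent does not fix a concrete container format:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    frame_id: int
    pixels: bytes  # stand-in for encoded image data

def make_chunk(frames, metadata):
    # A data chunk holds a run of frames together with per-frame metadata
    # (e.g., capture timestamp and vehicle pose), so frames and their
    # metadata travel and are stored together.
    return {"frames": frames, "metadata": metadata}

def build_index(container):
    # Index maps frame_id -> (chunk number, position within the chunk), so a
    # frame plus its metadata can be fetched without scanning the container.
    index = {}
    for chunk_no, chunk in enumerate(container):
        for pos, frame in enumerate(chunk["frames"]):
            index[frame.frame_id] = (chunk_no, pos)
    return index

chunk0 = make_chunk(
    [Frame(0, b"\x00"), Frame(1, b"\x01")],
    [{"timestamp": 100, "pose": "p0"}, {"timestamp": 133, "pose": "p1"}],
)
container = [chunk0]
index = build_index(container)

# Indexed access: retrieve the second frame together with its metadata.
chunk_no, pos = index[1]
frame = container[chunk_no]["frames"][pos]
meta = container[chunk_no]["metadata"][pos]
```

Keeping metadata inside the same chunk as its frames is what makes the indexed lookup return both in one access, which matters when replaying sensor logs from a vehicle.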
ENCODER AND DECODER, ENCODING METHOD AND DECODING METHOD FOR REFERENCE PICTURE RESAMPLING EXTENSIONS
A video decoder (151) is provided for decoding an encoded video signal comprising encoded picture data to reconstruct a plurality of pictures of a video sequence. The video decoder (151) comprises an input interface (160) configured to receive the encoded video signal comprising the encoded picture data, and a data decoder (170) configured to reconstruct the plurality of pictures of the video sequence depending on the encoded picture data. Further video decoders, video encoders, systems, methods for encoding and decoding, computer programs, and encoded video signals according to embodiments are also provided.
APPARATUS, A METHOD AND A COMPUTER PROGRAM FOR OMNIDIRECTIONAL VIDEO
There are disclosed various methods, apparatuses and computer program products for video encoding and decoding. In some embodiments the method for video encoding comprises obtaining compressed volumetric video data representing a three-dimensional scene or object (71); encapsulating the compressed volumetric video data into a data structure (72); obtaining data of a two-dimensional projection of at least a part of the three-dimensional scene as seen from a certain viewport (73); and including the data of the two-dimensional projection into the data structure (74).
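Steps (71)-(74) can be sketched as building one container that carries both payloads. The dict-based layout and field names are illustrative only (a real implementation would likely use ISOBMFF-style boxes or tracks):

```python
def encapsulate(volumetric_payload, projection_image, viewport):
    # (71)-(72): compressed volumetric video data placed in the structure.
    # (73)-(74): a 2D projection of (part of) the scene, rendered from a
    # given viewport, included alongside it so a client that cannot decode
    # volumetric data can still show the 2D view.
    return {
        "volumetric_data": volumetric_payload,
        "projection": {
            "image": projection_image,
            "viewport": viewport,  # where the 2D view was taken from
        },
    }

structure = encapsulate(
    volumetric_payload=b"compressed-volumetric-bitstream",
    projection_image=b"encoded-2d-frame",
    viewport={"position": (0.0, 1.6, 2.0), "yaw": 0.0, "pitch": -10.0},
)
```

Recording the viewport with the projection is the key detail: without it, the 2D image cannot be related back to the 3D scene it was taken from.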