Patent classifications
H04N19/27
Fast and accurate block matching for computer generated content
A set of software applications configured to perform interframe and/or intraframe encoding operations based on data communicated between a graphics application and a graphics processor. The graphics application transmits a 3D model to the graphics processor to be rendered into a 2D frame of video data. The graphics application also transmits graphics commands to the graphics processor indicating specific transformations to be applied to the 3D model as well as textures that should be mapped onto portions of the 3D model. Based on these transformations, an interframe module can determine blocks of pixels that repeat across sequential frames. Based on the mapped textures, an intraframe module can determine blocks of pixels that repeat within an individual frame. A codec encodes the frames of video data into compressed form based on blocks of pixels that repeat across frames or within frames.
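The core interframe idea above — using transformations already known to the graphics pipeline to predict where a block of pixels will reappear, instead of running an exhaustive motion search — can be sketched as follows. This is a minimal illustration, not the patent's implementation: the per-frame translation `(dx, dy)` is taken as given, standing in for displacement derived from the graphics commands.

```python
# Hypothetical sketch: a known per-frame translation (dx, dy), assumed here
# to come from the graphics commands, tells the encoder exactly where each
# block should reappear in the next frame, so a single exact comparison
# replaces an exhaustive block-matching search.

def block(frame, x, y, size):
    """Extract a size x size block of pixels at (x, y) from a 2D frame."""
    return [row[x:x + size] for row in frame[y:y + size]]

def repeated_blocks(prev_frame, next_frame, dx, dy, size):
    """Return (x, y) origins of blocks in prev_frame that repeat,
    shifted by (dx, dy), in next_frame."""
    h, w = len(prev_frame), len(prev_frame[0])
    hits = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            nx, ny = x + dx, y + dy
            if 0 <= nx <= w - size and 0 <= ny <= h - size:
                if block(prev_frame, x, y, size) == block(next_frame, nx, ny, size):
                    hits.append((x, y))
    return hits

# A 4x4 frame whose top-left 2x2 block moves one pixel right and down.
prev = [[1, 2, 0, 0],
        [3, 4, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
nxt  = [[0, 0, 0, 0],
        [0, 1, 2, 0],
        [0, 3, 4, 0],
        [0, 0, 0, 0]]
hits = repeated_blocks(prev, nxt, 1, 1, 2)   # → [(0, 0)]
```

A real codec would feed such hits to its inter-prediction stage as zero-residual motion vectors; the point here is only that the search cost collapses when the displacement is known in advance.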
Mixed noise and fine texture synthesis in lossy image compression
An encoder and/or a computer implemented encoding method includes a texture module configured to determine texture data associated with texture of an image, a noise module configured to determine noise data based on the texture data, a synthesis module configured to generate spatial spectral characteristics of the noise, and combine at least one of the noise data, the texture data, and the spatial spectral characteristics of the noise based on at least one border between adjacent textures, and an encoding module configured to compress the image using an image compression codec.
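The noise-synthesis idea can be illustrated with a minimal sketch: rather than spending bits encoding fine noise exactly, the encoder records only summary statistics of the noise in each texture region, and the decoder regenerates visually similar noise from them. The single standard-deviation parameter and the Gaussian model below are illustrative assumptions, not the patent's actual parameterization.

```python
import random
import statistics

# Hypothetical sketch: the noise residual of a texture region is reduced to
# one statistic (its standard deviation), which is all that gets encoded;
# the decoder synthesizes fresh noise with the same spread.

def noise_params(residual):
    """Summarize a region's noise residual as a single std-dev parameter."""
    return statistics.pstdev(residual)

def synthesize_noise(sigma, n, seed=0):
    """Regenerate n noise samples with the recorded spread."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, sigma) for _ in range(n)]

residual = [1.0, -1.0, 2.0, -2.0]   # noise left after texture removal
sigma = noise_params(residual)       # the only value that is "encoded"
regen = synthesize_noise(sigma, 1000)
```

The regenerated samples differ from the originals pixel by pixel, but their spread matches, which is the premise of synthesizing (rather than transmitting) perceptually equivalent noise.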
System and method for visual rendering based on sparse samples with predicted motion
The present teaching relates to methods, systems, media, and implementations for rendering a moving object. An object data package related to a moving object appearing in a monitored scene at a first time instance is first received, and features characterizing the moving object at the first time instance are extracted from the package; these features are estimated at a monitoring rate and include a current position of the object and a current motion vector at the first time instance. Information associated with a previously rendered object at a previously rendered position at a previous time instance is retrieved, and a next rendering position of the object is determined based on the current position, the current motion vector, and a rendering rate lower than the monitoring rate. The object is rendered at the next rendering position based on a motion vector and the information associated with the previously rendered object.
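The position extrapolation described above reduces to a simple calculation: the next rendering position is the current position advanced along the motion vector over one rendering interval. The linear constant-velocity model below is an illustrative assumption.

```python
# Hypothetical sketch: positions arrive at a high monitoring rate, but the
# display renders at a lower rate, so the renderer extrapolates one
# rendering interval ahead from the latest position and motion vector.

def next_render_position(position, motion_vector, rendering_rate_hz):
    """Extrapolate where to draw the object one rendering interval ahead."""
    dt = 1.0 / rendering_rate_hz
    return tuple(p + v * dt for p, v in zip(position, motion_vector))

# Object at (10, 20) px moving (30, -10) px/s, rendered at 10 Hz:
pos = next_render_position((10.0, 20.0), (30.0, -10.0), 10.0)
# pos ≈ (13.0, 19.0)
```

Because the rendering rate is lower than the monitoring rate, several monitored samples may arrive between renders; the sketch uses only the most recent one, which is consistent with the abstract's "current position" and "current motion vector".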
Video transmitting device and video playing device
An apparatus may be provided for transmitting a moving image including an alpha channel. The apparatus may include an object region extractor configured to extract an object region from the moving image, a color frame generator configured to generate a color frame corresponding to the extracted object region, an alpha channel frame generator configured to generate an alpha channel frame corresponding to the object region based on the alpha channel included in the moving image, a synthesizer configured to generate a synthesized frame by synthesizing the color frame and the alpha channel frame, an encoder configured to encode the synthesized frame, and a transmitter configured to transmit the encoded synthesized frame to a reproducing device.
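One common way to synthesize a color frame and an alpha-channel frame into a single frame that an alpha-unaware codec can carry is to pack the two planes side by side and split them apart at the receiver. The vertical packing layout below is an illustrative assumption, not necessarily the patent's layout.

```python
# Hypothetical sketch: the color plane and the alpha plane are stacked
# vertically into one synthesized frame, so a standard codec with no alpha
# support can encode both; the playing device splits them apart again.

def synthesize(color_frame, alpha_frame):
    """Pack color rows followed by alpha rows into one frame."""
    return color_frame + alpha_frame

def split(synth_frame, color_rows):
    """Recover the color and alpha frames from a packed frame."""
    return synth_frame[:color_rows], synth_frame[color_rows:]

color = [[200, 210], [220, 230]]   # 2x2 color plane (one channel shown)
alpha = [[255, 128], [0, 255]]     # matching 2x2 opacity plane
packed = synthesize(color, alpha)  # 4x2 synthesized frame
c2, a2 = split(packed, len(color))
```

The round trip is lossless as long as both planes survive encoding; in practice the alpha half is often encoded at higher fidelity since compression artifacts in it bleed into edge transparency.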
Visual element encoding parameter tuning
Techniques are described for adaptive encoding of different visual elements in a video frame. Characteristics of visual elements can be determined and used to set encoding parameters for the visual elements. The visual elements can be encoded such that one visual element is encoded differently than another visual element if they have different characteristics.
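The per-element tuning described above amounts to a mapping from an element's characteristics to its encoding parameters. The sketch below uses element "kind" as the characteristic and a quantization parameter (QP) as the tuned value; the categories and QP numbers are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: each visual element in a frame is classified by a
# simple characteristic and assigned its own quantization parameter, so
# sharp text gets finer quantization than flat background regions.

QP_BY_KIND = {"text": 18, "natural": 28, "flat": 34}  # assumed values

def encoding_params(elements):
    """Assign a QP to each (name, kind) visual element; unknown kinds
    fall back to a middling default."""
    return {name: QP_BY_KIND.get(kind, 28) for name, kind in elements}

params = encoding_params([("title", "text"),
                          ("photo", "natural"),
                          ("backdrop", "flat")])
# params == {"title": 18, "photo": 28, "backdrop": 34}
```

An encoder would then quantize each element's blocks with its assigned QP, which is what makes two elements with different characteristics "encoded differently" in the abstract's terms.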
Generating and displaying a video stream
An encoder system and computer-implemented method may be provided for generating a video stream for a streaming client. The system and method may determine a part of the video which is or would be occluded during display of the video by the streaming client, for example on the basis of signaling data received from the streaming client. A video stream may be generated by, before or as part of encoding of the video, omitting the part of the video, or replacing video data in the part by replacement video data having a lower entropy than said video data. The video stream may be provided to the streaming client, for example via a network. Accordingly, a better compressible version of the video may be obtained, which when displayed by the streaming client, may still contain all or most non-occluded parts visible to a user.
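The replacement-with-lower-entropy idea can be demonstrated directly: filling occluded pixels with a constant makes the frame far more compressible while leaving every visible pixel intact. In this sketch `zlib` stands in for a video codec purely to make the size effect observable; the occlusion mask is an assumed input (in the abstract it comes from client signaling).

```python
import random
import zlib

# Hypothetical sketch: pixels the client will never display are replaced
# with a constant (zero-entropy) value before encoding, shrinking the
# compressed stream without touching any visible pixel.

def blank_occluded(frame, occluded):
    """Replace occluded (x, y) pixels with a constant low-entropy value."""
    return [[0 if (x, y) in occluded else frame[y][x]
             for x in range(len(frame[0]))]
            for y in range(len(frame))]

rng = random.Random(1)
frame = [[rng.randrange(256) for _ in range(64)] for _ in range(64)]
occluded = {(x, y) for x in range(64) for y in range(32)}  # top half hidden

raw = bytes(v for row in frame for v in row)
masked = bytes(v for row in blank_occluded(frame, occluded) for v in row)
# masked compresses to far fewer bytes than raw; the visible bottom half
# is byte-for-byte identical to the original.
```

A real implementation would do this before or inside the encoder, as the abstract notes, so the codec's entropy coder reaps the benefit automatically.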
Technique for recording augmented reality data
Disclosed is an improved approach for generating recordings from augmented reality systems from the perspective of a camera within the system. Instead of re-using virtual content rendered from the perspective of the user's eyes for AR recordings, additional virtual content is rendered from an additional perspective specifically for the AR recording. That additional virtual content is combined with image frames generated by a camera to form the AR recording.
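The final combination step — merging the camera-perspective virtual render with the camera's image frames — is, in its simplest form, standard per-pixel alpha compositing ("over" blending). The pixel layout below is an illustrative assumption; the compositing formula itself is the conventional one.

```python
# Hypothetical sketch: virtual content rendered from the physical camera's
# perspective is alpha-composited over the camera frame, pixel by pixel,
# to produce an AR recording frame.

def composite(camera_px, virtual_px, alpha):
    """Blend one virtual RGB pixel over one camera RGB pixel;
    alpha in [0, 1] is the virtual pixel's opacity."""
    return tuple(round(a * v + (1 - a) * c)
                 for c, v, a in zip(camera_px, virtual_px, (alpha,) * 3))

# A fully opaque virtual pixel replaces the camera pixel; a half-
# transparent one mixes the two.
opaque = composite((100, 100, 100), (200, 0, 50), 1.0)   # → (200, 0, 50)
mixed  = composite((100, 100, 100), (200, 0, 50), 0.5)   # → (150, 50, 75)
```

The key claim of the abstract is not the blend itself but where the virtual content comes from: a second render pass from the camera's pose, so the composited virtual objects align with the camera image rather than with the user's eye view.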