Patent classifications
H04N19/547
Sphere projected motion estimation/compensation and mode decision
Techniques are disclosed for coding video data predictively based on predictions made from spherical-domain projections of input pictures to be coded and reference pictures that are prediction candidates. Spherical projections of an input picture and the candidate reference pictures may be generated. Thereafter, a search may be conducted for a match between the spherical-domain representation of a pixel block to be coded and a spherical-domain representation of the reference picture. On a match, an offset may be determined between the spherical-domain representation of the pixel block and a matching portion of the reference picture in the spherical-domain representation. The spherical-domain offset may be transformed to a motion vector in a source-domain representation of the input picture, and the pixel block may be coded predictively with reference to a source-domain representation of the matching portion of the reference picture.
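The coordinate mapping described above can be sketched as follows. This is an illustrative sketch, not the patented implementation: it assumes an equirectangular source projection and unit proportionality, and all function names are hypothetical. A spherical-domain offset found by the block search is mapped back to a conventional source-domain motion vector.

```python
import math

def equirect_to_sphere(x, y, width, height):
    """Map a pixel in an equirectangular source picture to spherical
    angles: longitude theta in [-pi, pi), latitude phi in [-pi/2, pi/2]."""
    theta = (x / width) * 2.0 * math.pi - math.pi
    phi = (0.5 - y / height) * math.pi
    return theta, phi

def sphere_to_equirect(theta, phi, width, height):
    """Inverse mapping: spherical angles back to source-domain pixels."""
    x = (theta + math.pi) / (2.0 * math.pi) * width
    y = (0.5 - phi / math.pi) * height
    return x, y

def spherical_offset_to_motion_vector(block_x, block_y, d_theta, d_phi,
                                      width, height):
    """Transform a spherical-domain offset (d_theta, d_phi), found by
    matching the pixel block against the reference picture on the
    sphere, into a source-domain motion vector (dx, dy)."""
    theta, phi = equirect_to_sphere(block_x, block_y, width, height)
    ref_x, ref_y = sphere_to_equirect(theta + d_theta, phi + d_phi,
                                      width, height)
    return ref_x - block_x, ref_y - block_y
```

Note that because the mapping is nonlinear in latitude, the same spherical offset yields different source-domain motion vectors at different block positions, which is the motivation for searching in the spherical domain in the first place.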
Accelerated video exportation to multiple destinations
Systems and methods described herein provide a new mechanism of video exportation which ensures that the process is done faster and that a single video can be exported to two or more destinations at the same time. This document explains the steps involved in the creation of the video and the processes involved in encoding, rendering, transmission/exportation, and playing the video. Figures are used to illustrate the flow of processes and to show the different devices used in accomplishing various activities in the exporting processes. The application will receive commands to perform the exporting from the destination. Overall, the application will be able to facilitate exportation of a video at almost twice the speed of known video exportation systems, and to multiple destinations, unlike the basic applications in use today.
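A minimal sketch of the encode-once, export-everywhere idea follows. It is an assumption-laden illustration, not the disclosed system: the encoder and transmitter are stand-ins, and the function names are hypothetical. The key point is that the payload is encoded a single time and then pushed to all destinations concurrently rather than sequentially.

```python
from concurrent.futures import ThreadPoolExecutor

def encode_video(frames):
    """Stand-in for the rendering/encoding stage; here we simply join
    the frame payloads into one encoded stream."""
    return b"".join(frames)

def export_to(destination, payload):
    """Stand-in for the transmission stage; a real system would upload
    the payload to a destination device or service."""
    destination.write(payload)
    return destination

def export_everywhere(frames, destinations):
    """Encode once, then export the same payload to every destination
    in parallel."""
    payload = encode_video(frames)
    with ThreadPoolExecutor(max_workers=len(destinations)) as pool:
        list(pool.map(lambda d: export_to(d, payload), destinations))
    return payload
```

Because each destination transfer is I/O-bound, running them in parallel lets the total export time approach that of the slowest single destination instead of the sum of all transfers.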
Method and apparatus for interframe point cloud attribute coding
A method of interframe point cloud attribute coding is performed by at least one processor and includes obtaining, as a motion estimation unreliability measure of motion estimation of a target frame, a value inversely proportional to a ratio of a number of first point cloud samples of the target frame that are respectively matched with second point cloud samples of an interframe reference frame, to a number of point cloud samples in the target frame. The method further includes identifying whether the obtained motion estimation unreliability measure is greater than a predetermined threshold; based on the obtained motion estimation unreliability measure being identified to be greater than the predetermined threshold, skipping motion compensation of the target frame; and based on the obtained motion estimation unreliability measure being identified to be less than or equal to the predetermined threshold, performing the motion compensation of the target frame.
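The threshold test above can be sketched directly. This is an illustrative reading, not the claimed method: it assumes a proportionality constant of one for "inversely proportional", and the function names are hypothetical.

```python
def motion_unreliability(num_matched, num_total):
    """Unreliability measure: inversely proportional to the ratio of
    target-frame samples matched in the reference frame to the total
    number of target-frame samples (constant of proportionality
    assumed to be 1)."""
    if num_matched == 0:
        return float("inf")   # no matches: motion estimation fully unreliable
    return num_total / num_matched

def should_motion_compensate(num_matched, num_total, threshold):
    """Skip motion compensation when the measure exceeds the
    predetermined threshold; perform it otherwise."""
    return motion_unreliability(num_matched, num_total) <= threshold
```

With all samples matched the measure is 1.0 (its minimum), so a threshold slightly above 1 effectively demands near-complete matching before motion compensation is attempted.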
METHOD AND APPARATUS FOR COMPENSATING MOTION FOR A HOLOGRAPHIC VIDEO STREAM
The invention pertains to a computer-implemented method for compensating motion for a digital holographic video stream, the method comprising: obtaining (1010) a sequence of frames representing consecutive holographic images of a scenery; obtaining (1020) translation and rotation vectors describing a relative motion of at least one object in said scenery between a pair of frames from among said sequence of frames; and applying (1030) an affine canonical transform to a first frame of said pair of frames so as to obtain a predicted frame, said affine canonical transform representing said translation and rotation vectors. The invention also pertains to a computer program product and to an apparatus for compensating motion for a digital holographic video stream.
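One member of the family of affine canonical transforms, a pure in-plane translation, can be sketched as follows. This is an illustrative fragment under stated assumptions, not the patented method: it handles only the translation vector (a shift in space is a linear phase ramp in the Fourier domain), while rotation would require the rotational members of the transform family; the function name is hypothetical.

```python
import numpy as np

def predict_translated_frame(frame, dx, dy):
    """Predict a holographic frame from the previous one under an
    in-plane translation of (dx, dy) samples, applied as a linear
    phase ramp on the frame's 2-D spectrum."""
    h, w = frame.shape
    fy = np.fft.fftfreq(h)[:, None]   # vertical spatial frequencies
    fx = np.fft.fftfreq(w)[None, :]   # horizontal spatial frequencies
    spectrum = np.fft.fft2(frame)
    ramp = np.exp(-2j * np.pi * (fx * dx + fy * dy))
    return np.fft.ifft2(spectrum * ramp)
```

For integer shifts this reproduces a cyclic shift of the frame exactly, which makes it easy to verify; the same machinery extends to sub-sample translations, where a spatial-domain shift would require interpolation.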
Image processing device and method
The present invention relates to an image processing device and method, which realize improvement in encoding efficiency for color difference signals and reduction in address calculations for memory access. In a case where the block size of the orthogonal transform is 4×4, and a macroblock of luminance signals is configured of four 4×4 pixel blocks appended with 0 through 3, the four luminance signal blocks correspond to one 4×4 color difference signal block appended with C. At this time, there exist four pieces of motion vector information, mv_0, mv_1, mv_2, and mv_3, for the four luminance signal blocks. The motion vector information mv_C of the one 4×4 color difference signal block is calculated by averaging processing using these four pieces of motion vector information. The present invention can be applied to an image encoding device which performs encoding based on the H.264/AVC format, for example.
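The averaging step described above is simple enough to sketch directly. This is an illustrative sketch, not the device's implementation: it takes the plain arithmetic mean of the four luminance motion vectors, whereas a real encoder would also round the result to its motion vector precision; the function name is hypothetical.

```python
def chroma_motion_vector(luma_mvs):
    """Derive the single 4x4 color difference block's motion vector
    mv_C by averaging the four luminance blocks' motion vectors
    mv_0..mv_3, each given as an (x, y) pair."""
    n = len(luma_mvs)
    sum_x = sum(mv[0] for mv in luma_mvs)
    sum_y = sum(mv[1] for mv in luma_mvs)
    return (sum_x / n, sum_y / n)
```

Using one averaged vector for the chroma block means a single reference-memory address calculation serves the whole 4×4 color difference block, which is the memory access saving the abstract refers to.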
Image Compression Method and Apparatus
An image parallel compression method includes dividing data obtained after a discrete cosine transform (DCT) is performed on raw image data, or data obtained after Huffman decoding is performed on image data in the joint photographic experts group (JPEG) format or the like, into several sub-blocks on a block basis, and then performing parallel operations such as intra-frame prediction and arithmetic coding to implement parallel image compression.
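The partition-then-code-in-parallel flow can be sketched as follows. This is an illustrative sketch under stated assumptions, not the patented apparatus: each "block" is reduced to a list of DCT coefficients, the per-group coding is a simple DC-difference prediction standing in for the intra-frame prediction stage (a real encoder would follow it with arithmetic coding), and all function names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_subblocks(blocks, n_groups):
    """Partition a block-ordered list of DCT blocks into contiguous
    sub-block groups so each group can be coded independently."""
    size = -(-len(blocks) // n_groups)   # ceiling division
    return [blocks[i:i + size] for i in range(0, len(blocks), size)]

def code_group(group):
    """Stand-in for one worker's pipeline: predict each block's DC
    coefficient (block[0]) from the previous block in the group and
    emit the difference."""
    coded, prev_dc = [], 0
    for block in group:
        coded.append(block[0] - prev_dc)
        prev_dc = block[0]
    return coded

def parallel_compress(blocks, n_groups=4):
    """Divide into sub-block groups and code the groups in parallel."""
    groups = split_into_subblocks(blocks, n_groups)
    with ThreadPoolExecutor(max_workers=n_groups) as pool:
        return list(pool.map(code_group, groups))
```

Resetting the DC predictor at each group boundary is what makes the groups independent; it trades a small amount of compression at the boundaries for the ability to code all groups concurrently.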