Patent classifications
H04N19/553
Image processing method and apparatus
An image processing method includes obtaining multiple video frames collected from the same scene at different angles, and determining a depth map for each video frame according to corresponding pixels among the multiple video frames; and supplementing background-missing regions of the multiple video frames according to their depth maps, to obtain supplemented video frames and depth maps of the supplemented video frames. The method also includes generating an alpha image for each video frame according to the occlusion relationship, within a background-missing region, between that video frame and its supplemented video frame, and generating a browsing frame at a specified browsing angle according to the multiple video frames, their supplemented video frames, and their alpha images.
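The final compositing step can be illustrated with a minimal sketch: the alpha image selects, per pixel, between a video frame and its background-supplemented counterpart. The function name and the simple linear blend are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def composite_browsing_frame(frame, supplemented, alpha):
    """Blend a video frame with its background-supplemented frame.

    alpha is a per-pixel weight in [0, 1]: 1 where the original frame
    is valid, 0 in background-missing regions that were filled in.
    """
    a = alpha[..., None]  # broadcast the weight over the colour channels
    return a * frame + (1.0 - a) * supplemented

frame = np.full((2, 2, 3), 100.0)           # original pixels
supplemented = np.zeros((2, 2, 3))          # filled-in background
alpha = np.array([[1.0, 0.0], [0.5, 1.0]])  # per-pixel validity
out = composite_browsing_frame(frame, supplemented, alpha)
```

Generating a browsing frame at an arbitrary angle would additionally require warping each frame according to its depth map before this blend.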
Method and system for video frame interpolation based on optical flow method
A method and system for video frame interpolation based on an optical flow method are disclosed. The process includes calculating bidirectional motion vectors between two adjacent frames in the frame sequence of an input video using the optical flow method, judging the reliability of those bidirectional motion vectors, and handling the jaggedness and noise problems inherent in the optical flow method; marking shielding (occlusion) and exposure regions in the two adjacent frames, and updating unreliable motion vectors; mapping the front and back frames to an interpolated frame, according to the marking information about the shielding and exposure regions and the bidirectional motion vector field, to obtain a forward interpolated frame and a backward interpolated frame; synthesizing the forward and backward interpolated frames into the interpolated frame; and repairing hole points in the interpolated frame to obtain the final interpolated frame. Because the optical flow method operates on pixels rather than blocks, the disclosed method and system are more accurate and avoid blocking artifacts and related problems.
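The mapping-and-blending stage can be sketched as follows, using nearest-neighbour sampling for brevity. The function name and the occlusion-free blend are simplifying assumptions; the disclosed system additionally uses the occlusion/exposure marks and repairs holes:

```python
import numpy as np

def interpolate_frame(prev, nxt, flow_fwd, flow_bwd, t=0.5):
    """Blend forward- and backward-mapped frames at interpolation phase t.

    flow_fwd holds per-pixel (dx, dy) motion from prev to nxt; flow_bwd
    from nxt to prev. Occlusion handling and hole repair are omitted.
    """
    h, w = prev.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Sample prev at the position each pixel occupied at time 0, and nxt
    # at the position it will occupy at time 1 (nearest neighbour).
    px = np.clip(np.round(xs - t * flow_fwd[..., 0]).astype(int), 0, w - 1)
    py = np.clip(np.round(ys - t * flow_fwd[..., 1]).astype(int), 0, h - 1)
    nx = np.clip(np.round(xs - (1 - t) * flow_bwd[..., 0]).astype(int), 0, w - 1)
    ny = np.clip(np.round(ys - (1 - t) * flow_bwd[..., 1]).astype(int), 0, h - 1)
    return (1 - t) * prev[py, px] + t * nxt[ny, nx]
```

For content translating uniformly to the right by 2 pixels, the midpoint frame (t = 0.5) comes out shifted by 1 pixel, as expected.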
MULTI-VIEW CODING WITH EFFECTIVE HANDLING OF RENDERABLE PORTIONS
A proposed intermediate way of handling the renderable portion of the first view results in more efficient coding. Instead of omitting the coding of the renderable portion completely, this more efficient coding of multi-view signals merely suppresses the coding of the residual signal within the renderable portion, while prediction parameter coding still continues from the non-renderable portion of the multi-view signal across the renderable portion, so that prediction parameters for the renderable portion may be exploited for predicting parameters of the non-renderable portion. The additional coding rate for transmitting the prediction parameters of the renderable portion can be kept low, as it merely serves to form a continuation of the parameter history across the renderable portion as a basis for predicting parameters of other portions of the multi-view signal. Expressed differently, the prediction parameters for the renderable portion need not perfectly predict the texture within the renderable portion of the first view so as to keep the residual signal there low, since that residual is not coded anyway.
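As a toy illustration of the idea (all names hypothetical; a real codec's syntax is far richer), a block encoder might always transmit prediction parameters but skip the residual for blocks inside a renderable portion:

```python
def encode_block(samples, prediction, renderable):
    """Code one block: prediction parameters are always transmitted, but
    the residual is suppressed when the block lies in a renderable portion.

    Keeping the parameter stream unbroken across renderable regions lets
    later, non-renderable blocks predict their parameters from these ones.
    """
    params = {"pred_params": prediction}  # continuation of parameter history
    if renderable:
        return params, None               # residual suppressed: decoder renders it
    residual = [s - p for s, p in zip(samples, prediction)]
    return params, residual
```

The decoder would reconstruct renderable blocks by view synthesis from the other view, so an imperfect prediction there costs nothing in residual rate.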
IMAGE MOTION COMPENSATION DEVICE AND METHOD
An image motion compensation device includes: a motion vector information processing circuit, generating an image interpolation phase and a motion vector status according to motion vector information of a front image and a rear image; a cache memory circuit, allocating first and second memory spaces to respectively store first-range pixels of the front image and second-range pixels of the rear image read from an external memory circuit; a memory allocation control circuit, generating an allocation control signal according to the image interpolation phase and the motion vector status, to control the cache memory circuit to dynamically allocate the sizes of the first and second memory spaces; and an image motion compensation circuit, generating, from the first-range and second-range pixels, an interpolation image corresponding to the image interpolation phase according to the motion vector information and the allocation control signal.
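One plausible allocation policy for such a cache (entirely hypothetical; the patent does not disclose a formula): split a fixed number of line buffers between the front- and rear-image windows according to the interpolation phase, while always reserving enough lines on each side for the maximum motion-vector reach:

```python
def allocate_cache(total_lines, interp_phase, max_mv_lines):
    """Split a fixed cache between front- and rear-image pixel windows.

    interp_phase in [0, 1]: 0 places the interpolated image at the front
    image, 1 at the rear image. The displacement into each source frame
    scales with its distance from the phase, so the farther frame gets
    more cache lines; both keep at least max_mv_lines.
    """
    front = round(total_lines * interp_phase)
    front = max(max_mv_lines, min(front, total_lines - max_mv_lines))
    return front, total_lines - front
```

At the midpoint phase the split is even; near either endpoint the side needing almost no displacement drops to the minimum reserve.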