Method for determining small-object region, and method and apparatus for interpolating frame between video frames
09781382 · 2017-10-03
CPC classification: H04N7/0125; H04N19/577; H04N19/521; H04N7/0127 (Section H: Electricity)
International classification: H04N7/01; H04N19/577 (Section H: Electricity)
Abstract
A method and an apparatus for determining a small-object region in a video frame. The method includes dividing a current video frame into at least two regions, and determining a global motion vector corresponding to each region; determining an interframe motion vector of each group of adjacent frames in multiple video frames that include the current video frame and a reference frame of the current video frame; determining a candidate small-object region in the current video frame according to the interframe motion vector of each group of adjacent frames and the determined global motion vector corresponding to each region; and performing filtering on the candidate small-object region in the current video frame, and determining a region obtained after the filtering as a small-object region in the current video frame.
Claims
1. A method for determining a small-object region in a video frame, comprising: dividing a current video frame into at least two regions; determining a motion vector corresponding to each region; determining an interframe motion vector of each group of two adjacent frames that comprise the current video frame and reference frames of the current video frame; determining a candidate small-object region in the current video frame according to the determined interframe motion vector of each group of two adjacent frames and the determined motion vector corresponding to each region; and performing filtering on the candidate small-object region in the current video frame to determine a region obtained after the filtering as a small-object region in the current video frame, wherein the reference frames of the current video frame comprise one or more of preceding continuous video frames of the current video frame and following continuous video frames of the current video frame, wherein before determining the interframe motion vector of each group of adjacent frames in the multiple video frames that comprise the current video frame and the reference frames of the current video frame, the method further comprises, for a to-be-processed picture block comprised in each video frame in each group of adjacent frames, executing the following: selecting at least one video frame from preceding N video frames of the current video frame, wherein N is a positive integer; determining, according to a small-object region determined in the preceding N video frames of the current video frame, whether a reference picture block that is in the selected at least one video frame and corresponds to the to-be-processed picture block is a picture block comprised in the small-object region; determining that the to-be-processed picture block is a first-type to-be-processed picture block when the reference picture block is a picture block comprised in the small-object region; and determining that the to-be-processed picture block is a second-type to-be-processed picture block when the reference picture block is not a picture block comprised in the small-object region, wherein determining the interframe motion vector of each group of adjacent frames comprises: determining an interframe motion vector of each first-type to-be-processed picture block and an interframe motion vector of each second-type to-be-processed picture block; and using the determined interframe motion vector of each first-type to-be-processed picture block and the determined interframe motion vector of each second-type to-be-processed picture block comprised in each video frame in each group of adjacent frames as the interframe motion vector of that group of adjacent frames, and wherein determining the interframe motion vector of each first-type to-be-processed picture block comprises: determining a value of a dissimilarity between each candidate motion vector corresponding to the first-type to-be-processed picture block and a motion vector of the video frame in which the first-type to-be-processed picture block is located; assigning a corresponding weight to each candidate motion vector according to the determined value of the dissimilarity corresponding to that candidate motion vector, by using a rule that a smaller weight is assigned to a larger value of the dissimilarity; and determining the interframe motion vector of the first-type to-be-processed picture block according to the weight assigned to each candidate motion vector and a Sum of Absolute Differences (SAD) value between pixels of a picture block pointed to by each candidate motion vector and pixels of the first-type to-be-processed picture block.
2. The method according to claim 1, wherein selecting the at least one video frame from the preceding N video frames of the current video frame comprises selecting at least one preceding continuous video frame of the current video frame.
3. The method according to claim 1, wherein determining the interframe motion vector of the first-type to-be-processed picture block comprises, for each candidate motion vector: determining a product of the weight assigned to the candidate motion vector and the SAD value between pixels of the picture block pointed to by the candidate motion vector and pixels of the first-type to-be-processed picture block; and using the candidate motion vector with a smallest product as the interframe motion vector of the first-type to-be-processed picture block.
4. A method for determining a small-object region in a video frame, comprising: dividing a current video frame into at least two regions; determining a motion vector corresponding to each region; determining an interframe motion vector of each group of two adjacent frames that comprise the current video frame and reference frames of the current video frame; determining a candidate small-object region in the current video frame according to the determined interframe motion vector of each group of two adjacent frames and the determined motion vector corresponding to each region; and performing filtering on the candidate small-object region in the current video frame to determine a region obtained after the filtering as a small-object region in the current video frame, wherein the reference frames of the current video frame comprise one or more of preceding continuous video frames of the current video frame and following continuous video frames of the current video frame, wherein determining the candidate small-object region in the current video frame according to the determined interframe motion vector of each group of adjacent frames and the determined motion vector corresponding to each region comprises: determining, in each reference frame of the current video frame and according to the interframe motion vector of each group of adjacent frames, a matching block corresponding to each picture block comprised in the current video frame; determining, in each reference frame, a nearby block near the matching block; determining an interframe motion vector of each nearby block determined in each reference frame; determining, for each picture block comprised in the current video frame, a value of a similarity between the interframe motion vector of each nearby block determined for the picture block and an interframe motion vector of the picture block; determining, for each picture block comprised in the current video frame, a value of a dissimilarity between the interframe motion vector and a motion vector of each nearby block; and determining, according to the determined value of the similarity and the determined value of the dissimilarity, a picture block comprised in the candidate small-object region in the current video frame, wherein each picture block comprised in the candidate small-object region meets the following: among multiple nearby blocks that are determined for the picture block and comprised in each reference frame corresponding to the current video frame, there are a first set quantity of nearby blocks whose values of similarities are all greater than or equal to a first threshold and a second set quantity of nearby blocks whose values of dissimilarities are all greater than or equal to a second threshold.
5. A method for determining a small-object region in a video frame, comprising: dividing a current video frame into at least two regions; determining a motion vector corresponding to each region; determining an interframe motion vector of each group of two adjacent frames that comprise the current video frame and reference frames of the current video frame; determining a candidate small-object region in the current video frame according to the determined interframe motion vector of each group of two adjacent frames and the determined motion vector corresponding to each region; and performing filtering on the candidate small-object region in the current video frame to determine a region obtained after the filtering as a small-object region in the current video frame, wherein the reference frames of the current video frame comprise one or more of preceding continuous video frames of the current video frame and following continuous video frames of the current video frame, wherein before performing filtering on the candidate small-object region in the current video frame, the method further comprises marking a specific marker on each picture block comprised in the candidate small-object region, and wherein performing filtering on the candidate small-object region in the current video frame and determining the region obtained after the filtering as the small-object region in the current video frame comprises: determining, for each picture block comprised in the candidate small-object region in the current video frame, a value of a first quantity of picture blocks that are marked with the specific marker and are in a first set range in a horizontal direction of the picture block, and a value of a second quantity of picture blocks that are marked with the specific marker and are in a second set range in a vertical direction of the picture block; removing the specific marker of the picture block when the determined value of the first quantity or the determined value of the second quantity is greater than a third threshold; determining a value of a third quantity of picture blocks that are marked with the specific marker and are in a third set range around the picture block; removing the specific marker of the picture block when the determined value of the third quantity is less than a fourth threshold; and determining the picture block that is marked with the specific marker in the current video frame as the small-object region in the current video frame, wherein the third set range is smaller than the first set range and the second set range, and wherein the fourth threshold is less than the third threshold.
6. An apparatus for determining a small-object region in a video frame, comprising a computer processor configured to: divide a current video frame into at least two regions; determine a global motion vector corresponding to each region; transmit the determined global motion vector; determine an interframe motion vector of each group of adjacent frames in multiple video frames that comprise the current video frame and a reference frame of the current video frame; transmit the determined interframe motion vector; determine information about a candidate small-object region in the current video frame according to the interframe motion vector of each group of adjacent frames and according to the global motion vector corresponding to each region; transmit the information about the determined candidate small-object region in the current video frame; determine the candidate small-object region in the current video frame according to the information about the candidate small-object region in the current video frame; perform filtering on the candidate small-object region in the current video frame; and determine a region obtained after the filtering as a small-object region in the current video frame, wherein the reference frame of the current video frame comprises one or more of preceding continuous video frames of the current video frame and following continuous video frames of the current video frame, wherein the computer processor is further configured to execute, before the interframe motion vector of each group of adjacent frames is determined, the following for a to-be-processed picture block comprised in each video frame in each group of adjacent frames: selecting at least one video frame from preceding N video frames of the current video frame; determining, according to a small-object region determined in the preceding N video frames of the current video frame, whether a reference picture block that is in the selected at least one video frame and corresponds to the to-be-processed picture block is a picture block comprised in the small-object region; determining that the to-be-processed picture block is a first-type to-be-processed picture block when the reference picture block is a picture block comprised in the small-object region; determining that the to-be-processed picture block is a second-type to-be-processed picture block when the reference picture block is not a picture block comprised in the small-object region; separately determining an interframe motion vector of each first-type to-be-processed picture block and an interframe motion vector of each second-type to-be-processed picture block; and using the determined interframe motion vector of each first-type to-be-processed picture block and the determined interframe motion vector of each second-type to-be-processed picture block comprised in each video frame in each group of adjacent frames as the interframe motion vector of that group of adjacent frames, and wherein the computer processor is further configured to: determine a value of a dissimilarity between each candidate motion vector corresponding to the first-type to-be-processed picture block and a background motion vector of the video frame in which the first-type to-be-processed picture block is located; assign a corresponding weight to each candidate motion vector according to the determined value of the dissimilarity corresponding to that candidate motion vector, by using a rule that a smaller weight is assigned to a larger value of the dissimilarity; and determine the interframe motion vector of the first-type to-be-processed picture block according to the weight assigned to each candidate motion vector and a Sum of Absolute Differences (SAD) value between pixels of a picture block pointed to by each candidate motion vector and pixels of the first-type to-be-processed picture block.
7. An apparatus for determining a small-object region in a video frame, comprising a computer processor configured to: divide a current video frame into at least two regions; determine a global motion vector corresponding to each region; transmit the determined global motion vector; determine an interframe motion vector of each group of adjacent frames in multiple video frames that comprise the current video frame and a reference frame of the current video frame; transmit the determined interframe motion vector; determine information about a candidate small-object region in the current video frame according to the interframe motion vector of each group of adjacent frames and according to the global motion vector corresponding to each region; transmit the information about the determined candidate small-object region in the current video frame; determine the candidate small-object region in the current video frame according to the information about the candidate small-object region in the current video frame; perform filtering on the candidate small-object region in the current video frame; and determine a region obtained after the filtering as a small-object region in the current video frame, wherein the reference frame of the current video frame comprises one or more of preceding continuous video frames of the current video frame and following continuous video frames of the current video frame, and wherein the computer processor is further configured to: determine, in each reference frame of the current video frame and according to the interframe motion vector of each group of adjacent frames, a matching block corresponding to each picture block comprised in the current video frame; determine, in each reference frame, a nearby block near the matching block, and determine an interframe motion vector of each nearby block determined in each reference frame; determine, for each picture block comprised in the current video frame, a value of a similarity between the interframe motion vector of each nearby block determined for the picture block and an interframe motion vector of the picture block; determine, for each picture block comprised in the current video frame, a value of a dissimilarity between the interframe motion vector and a global motion vector of each nearby block; and determine, according to the determined value of the similarity and the determined value of the dissimilarity, a picture block comprised in the candidate small-object region in the current video frame, wherein each picture block comprised in the candidate small-object region meets the following: among multiple nearby blocks that are determined for the picture block and comprised in each reference frame corresponding to the current video frame, there are a first set quantity of nearby blocks whose values of similarities are all greater than or equal to a first threshold and a second set quantity of nearby blocks whose values of dissimilarities are all greater than or equal to a second threshold.
8. An apparatus for determining a small-object region in a video frame, comprising a computer processor configured to: divide a current video frame into at least two regions; determine a global motion vector corresponding to each region; transmit the determined global motion vector; determine an interframe motion vector of each group of adjacent frames in multiple video frames that comprise the current video frame and a reference frame of the current video frame; transmit the determined interframe motion vector; determine information about a candidate small-object region in the current video frame according to the interframe motion vector of each group of adjacent frames and according to the global motion vector corresponding to each region; transmit the information about the determined candidate small-object region in the current video frame; determine the candidate small-object region in the current video frame according to the information about the candidate small-object region in the current video frame; perform filtering on the candidate small-object region in the current video frame; and determine a region obtained after the filtering as a small-object region in the current video frame, wherein the reference frame of the current video frame comprises one or more of preceding continuous video frames of the current video frame and following continuous video frames of the current video frame, and wherein the computer processor is further configured to: mark, before filtering is performed on the candidate small-object region in the current video frame, a specific marker on each picture block comprised in the candidate small-object region; determine, for each picture block comprised in the candidate small-object region in the current video frame, a value of a first quantity of picture blocks that are marked with the specific marker and are in a first set range in a horizontal direction of the picture block, and a value of a second quantity of picture blocks that are marked with the specific marker and are in a second set range in a vertical direction of the picture block; remove the specific marker of the picture block when the determined value of the first quantity or the determined value of the second quantity is greater than a third threshold; determine a value of a third quantity of picture blocks that are marked with the specific marker and are in a third set range around the picture block; remove the specific marker of the picture block when the determined value of the third quantity is less than a fourth threshold; and determine the picture block that is marked with the specific marker in the current video frame as the small-object region in the current video frame, wherein the third set range is smaller than the first set range and the second set range, and the fourth threshold is less than the third threshold.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(18) In embodiments of the present disclosure, a current video frame is divided into at least two regions, and a global motion vector corresponding to each region is determined; an interframe motion vector of each group of adjacent frames in multiple video frames that include the current video frame and a reference frame of the current video frame is determined; a candidate small-object region in the current video frame is determined according to the determined interframe motion vector of each group of adjacent frames and the determined global motion vector corresponding to each region; and filtering is performed on the candidate small-object region in the current video frame, and a region obtained after the filtering is determined as a small-object region in the current video frame, where the reference frame of the current video frame includes one or more of preceding continuous video frames of the current video frame and following continuous video frames of the current video frame. Because the candidate small-object region in a video frame is determined according to an interframe motion vector of the video frames and a global motion vector, and filtering is then performed on the candidate region, the small-object region included in the video frame can be determined relatively accurately.
(19) In the embodiments of the present disclosure, a small-object region in a former video frame and a small-object region in a latter video frame are determined, where the former video frame and the latter video frame are two consecutively adjacent video frames; smooth filtering is performed on an interframe motion vector corresponding to a region other than the small-object region in the former video frame and the small-object region in the latter video frame; and a frame is interpolated between the two consecutively adjacent video frames according to an interframe motion vector corresponding to the small-object region in the former video frame, an interframe motion vector corresponding to the small-object region in the latter video frame, and an interframe motion vector obtained after the smooth filtering. Because smooth filtering is performed only on interframe motion vectors outside the small-object region in each of the two adjacent video frames, the interframe motion vector of a small-object region is no longer replaced by the background motion vector of its frame, thereby improving playback quality of a high-definition or ultra-high-definition video.
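The selective smoothing described above can be sketched as follows. This is a minimal illustration, not the patented filter: it assumes a one-dimensional motion-vector field, uses a simple three-tap neighbourhood average as the "smooth filtering", and additionally excludes small-object vectors from the averaging support so that they neither change nor bleed into the background; all names are hypothetical.

```python
def selectively_smooth(mv_field, small_object_mask):
    """Average each motion vector with its immediate neighbours, but skip
    vectors that belong to the small-object region: they are kept verbatim
    and are also left out of the averaging support."""
    smoothed = []
    for i, mv in enumerate(mv_field):
        if small_object_mask[i]:
            smoothed.append(mv)  # small-object MV is preserved unchanged
            continue
        support = [mv_field[j]
                   for j in (i - 1, i, i + 1)
                   if 0 <= j < len(mv_field) and not small_object_mask[j]]
        n = len(support)
        smoothed.append((sum(v[0] for v in support) / n,
                         sum(v[1] for v in support) / n))
    return smoothed
```

The preserved small-object vectors and the smoothed background vectors are then used together when generating the interpolated frame.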
(20) In the embodiments of the present disclosure, according to a to-be-processed picture block included in a small-object region in preceding N video frames of two consecutively adjacent video frames, the to-be-processed picture blocks included in each video frame of the two consecutively adjacent video frames are classified into first-type to-be-processed picture blocks and second-type to-be-processed picture blocks; an interframe motion vector of each first-type to-be-processed picture block and an interframe motion vector of each second-type to-be-processed picture block are separately determined, and the determined interframe motion vectors of the first-type and second-type to-be-processed picture blocks included in each video frame of the two consecutively adjacent video frames are used as an interframe motion vector between the two consecutively adjacent video frames; and a frame is interpolated between the two consecutively adjacent video frames according to the obtained interframe motion vector. Because smooth filtering is not performed on the interframe motion vector between the two consecutively adjacent video frames, and a frame is interpolated directly according to the obtained interframe motion vector, absence of pixels of the small object in the generated interpolated frame is avoided, thereby improving playback quality of a high-definition or ultra-high-definition video.
(21) It should be noted that, in the embodiments of the present disclosure, a video frame except an interpolated frame is a received original frame.
(22) The following further describes the embodiments of the present disclosure in detail with reference to the accompanying drawings of the specification.
(23) It should be noted that the embodiments of the present disclosure may be performed by any apparatus or system capable of playing a video.
(24) As shown in
(25) Step 101: Divide a current video frame into at least two regions, and determine a global motion vector corresponding to each region.
(26) Step 102: Determine an interframe motion vector of each group of adjacent frames in multiple video frames that include the current video frame and a reference frame of the current video frame.
(27) Step 103: Determine a candidate small-object region in the current video frame according to the determined interframe motion vector of the each group of adjacent frames in the multiple video frames that include the current video frame and the reference frame of the current video frame and the determined global motion vector corresponding to each region.
(28) Step 104: Perform filtering on the candidate small-object region in the current video frame, and determine a region obtained after the filtering as a small-object region in the current video frame.
(29) The reference frame of the current video frame includes one or more of preceding continuous video frames of the current video frame and following continuous video frames of the current video frame.
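Steps 101 through 103 can be illustrated with the following sketch. It is a deliberately simplified, hypothetical rendering of the idea, not the claimed method: the frame is split into a 2x2 grid, each region's global motion vector is taken as the average of its block motion vectors, and a block becomes a candidate small-object block when its interframe motion vector deviates strongly (L1 distance) from the region's global motion vector. All function names and the threshold value are illustrative assumptions.

```python
def divide_into_regions(width, height, rows=2, cols=2):
    """Step 101: split the frame into a grid of (x, y, w, h) regions."""
    rw, rh = width // cols, height // rows
    return [(c * rw, r * rh, rw, rh) for r in range(rows) for c in range(cols)]

def region_global_mv(block_mvs):
    """One global motion vector per region, taken here as the
    component-wise average of the block motion vectors in the region."""
    n = len(block_mvs)
    return (sum(v[0] for v in block_mvs) / n,
            sum(v[1] for v in block_mvs) / n)

def candidate_small_object_blocks(block_mvs, global_mv, threshold=4.0):
    """Step 103 (simplified): a block whose interframe motion vector
    deviates strongly from the region's global (background) motion vector
    is treated as a candidate small-object block."""
    return [pos for pos, mv in block_mvs.items()
            if abs(mv[0] - global_mv[0]) + abs(mv[1] - global_mv[1]) >= threshold]
```

Step 104 would then filter the resulting candidate set, for example with the marker-based cleanup described in the claims.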
(30) Preferably, in step 101, the number of regions into which the current video frame is divided may be set as needed or empirically; for example, the current video frame may be empirically divided into four regions.
(31) During specific implementation, any method for determining a global motion vector in the prior art may be used to determine the global motion vector corresponding to each region in this embodiment of the present disclosure.
(32) During implementation, when the background occupies a relatively large proportion of a video frame and the video frame is divided into a relatively small quantity of regions, the global motion vector corresponding to each region of the video frame approximately equals the background motion vector corresponding to that region, and may therefore be used to represent the background motion vector of that region.
(33) Preferably, in step 102, a quantity of reference frames of the current video frame may be set as needed or empirically.
(34) For example, as shown in
(35) f_{n−1}, f_{n+1}, and f_{n+2} may be set as reference frames of f_n; f_{n+1} may be set as a reference frame of f_n; or f_{n−1} may be set as a reference frame of f_n.
(36) During specific implementation, in step 102, for the multiple video frames that include the current video frame and the reference frame of the current video frame, when a small-object region in a video frame before the current video frame is determined, an interframe motion vector of each group of adjacent frames in some of the multiple video frames has already been determined.
(37) For example, as shown in
(38) When a small-object region in f_{n−1} is determined, reference frames of f_{n−1} are f_{n−2}, f_n, and f_{n+1}, and an interframe motion vector between f_{n−1} and f_n and an interframe motion vector between f_n and f_{n+1} need to be determined.
(39) When a small-object region in f_n is determined, reference frames of f_n are f_{n−1}, f_{n+1}, and f_{n+2}, and an interframe motion vector between f_{n−1} and f_n, an interframe motion vector between f_n and f_{n+1}, and an interframe motion vector between f_{n+1} and f_{n+2} need to be determined; however, the interframe motion vector between f_{n−1} and f_n and the interframe motion vector between f_n and f_{n+1} have already been determined.
(40) During specific implementation, in step 102, in a case where an interframe motion vector of a group of adjacent frames in the multiple video frames that include the current video frame and the reference frame of the current video frame is known (that is, a determined interframe motion vector), the known interframe motion vector may be used as an interframe motion vector that is of the group of adjacent frames and that needs to be determined when the small-object region in the current video frame is determined.
(41) For example, as shown in
(42) When a small-object region in f.sub.n−1 is determined, reference frames of f.sub.n−1 are f.sub.n−2, f.sub.n, and f.sub.n+1, and an interframe motion vector between f.sub.n−1 and f.sub.n and an interframe motion vector between f.sub.n and f.sub.n+1 are determined.
(43) When a small-object region in f.sub.n is determined, reference frames of f.sub.n are f.sub.n−1, f.sub.n+1, and f.sub.n+2, and the interframe motion vector between f.sub.n−1 and f.sub.n and the interframe motion vector between f.sub.n and f.sub.n+1 are known. Therefore, the interframe motion vector that is between f.sub.n−1 and f.sub.n and that is determined when the small-object region in f.sub.n−1 is determined may be used as the interframe motion vector that is between f.sub.n−1 and f.sub.n and that needs to be determined when the small-object region in f.sub.n is determined, and the interframe motion vector that is between f.sub.n and f.sub.n+1 and that is determined when the small-object region in f.sub.n−1 is determined may be used as the interframe motion vector that is between f.sub.n and f.sub.n+1 and that needs to be determined when the small-object region in f.sub.n is determined.
(44) During implementation, in a case in which an interframe motion vector of a group of adjacent frames in the multiple video frames that include the current video frame and the reference frame of the current video frame is known, the known interframe motion vector is used as an interframe motion vector that is of the group of adjacent frames and that needs to be determined when the small-object region in the current video frame is determined, so that complexity of determining the small-object region in the current video frame can be reduced.
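The reuse of known interframe motion vectors described above can be sketched as a simple cache keyed by the frame pair. The names (MVCache, compute_mv) and the placeholder motion estimation are illustrative assumptions, not part of the disclosure:

```python
def compute_mv(idx_a, idx_b):
    # Placeholder motion estimation between frames idx_a and idx_b; a real
    # implementation would run block matching on the pixel data.
    return (idx_b - idx_a, 0)

class MVCache:
    def __init__(self):
        self._cache = {}

    def get_interframe_mv(self, idx_a, idx_b):
        key = (idx_a, idx_b)
        if key not in self._cache:        # estimate only when not yet known
            self._cache[key] = compute_mv(idx_a, idx_b)
        return self._cache[key]
```

For instance, determining the small-object region in f.sub.n−1 would populate the cache for the pairs (n−1, n) and (n, n+1); when f.sub.n is processed, only the pair (n+1, n+2) triggers new motion estimation, reducing complexity as described.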
(45) Preferably, in step 102, the determining an interframe motion vector of each group of adjacent frames in multiple video frames that include the current video frame and a reference frame of the current video frame includes performing motion estimation on the multiple video frames that include the current video frame and the reference frame of the current video frame, and determining the interframe motion vector of the each group of adjacent frames in the multiple video frames that include the current video frame and the reference frame of the current video frame.
(46) During specific implementation, any method for performing motion estimation on a video frame in the prior art may be used to perform the motion estimation on a video frame in this embodiment of the present disclosure.
(47) During implementation, if motion estimation is performed on a video frame corresponding to a known interframe motion vector, a newly obtained interframe motion vector of the video frame has relatively higher accuracy.
(48) During specific implementation, the interframe motion vector of the each group of adjacent frames in the multiple video frames that include the current video frame and the reference frame of the current video frame is an interframe motion vector of a to-be-processed picture block included in each video frame of the each group of adjacent frames, where the performing motion estimation on the multiple video frames that include the current video frame and the reference frame of the current video frame is performing the motion estimation on the to-be-processed picture block included in each video frame in the each group of adjacent frames in the multiple video frames. During specific implementation, to-be-processed picture blocks included in each video frame in the each group of adjacent frames in the multiple video frames that include the current video frame and the reference frame of the current video frame may be classified according to the determined small-object region in the video frame before the current video frame, and for to-be-processed picture blocks of different types, different methods are used to perform the motion estimation on the to-be-processed picture blocks, so as to determine interframe motion vectors of the to-be-processed picture blocks.
(49) Preferably, before the determining an interframe motion vector of each group of adjacent frames in multiple video frames that include the current video frame and a reference frame of the current video frame, the method further includes, for the to-be-processed picture block included in each video frame in the each group of adjacent frames in the multiple video frames that include the current video frame and the reference frame of the current video frame, executing the following: selecting at least one video frame from preceding N video frames of the current video frame, where N is a positive integer; determining, according to a small-object region determined in the preceding N video frames of the current video frame, whether a reference picture block that is in the selected at least one video frame and that is corresponding to the to-be-processed picture block is a picture block included in the small-object region; and if yes, determining that the to-be-processed picture block is a first-type to-be-processed picture block; otherwise, determining that the to-be-processed picture block is a second-type to-be-processed picture block.
(50) Preferably, in step 102, the determining an interframe motion vector of each group of adjacent frames in multiple video frames that include the current video frame and a reference frame of the current video frame includes separately determining an interframe motion vector of each first-type to-be-processed picture block and an interframe motion vector of each second-type to-be-processed picture block; and using the determined interframe motion vector of each first-type to-be-processed picture block included in each video frame in the each group of adjacent frames and the determined interframe motion vector of each second-type to-be-processed picture block included in each video frame in the each group of adjacent frames as the interframe motion vector of the each group of adjacent frames.
(51) During specific implementation, the at least one video frame may be selected from the preceding N video frames of the current video frame as needed or empirically.
(52) For example, as shown in
(53) Preferably, the selecting at least one video frame from preceding N video frames of the current video frame includes selecting at least one preceding continuous video frame of the current video frame.
(54) Preferably, determining a reference picture block that is in a selected video frame and that is corresponding to the to-be-processed picture block includes determining a location of the to-be-processed picture block in a video frame that includes the to-be-processed picture block; and using a picture block in a corresponding location in the selected video frame as the reference picture block that is in the selected video frame and that is corresponding to the to-be-processed picture block.
(55) During implementation, the to-be-processed picture blocks included in each video frame in the each group of adjacent frames in the multiple video frames are classified according to the small-object region determined in the preceding N video frames of the current video frame, and for to-be-processed picture blocks of different types, different methods are used to perform the motion estimation on the to-be-processed picture blocks, so as to determine interframe motion vectors of the to-be-processed picture blocks, thereby reducing difficulty of performing the motion estimation on the current video frame and the reference frame of the current video frame, and improving precision of an interframe motion vector obtained by means of motion estimation.
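The classification described above can be sketched as follows, under the assumption that a co-located reference picture block shares the (row, column) position of the to-be-processed picture block, and that each selected preceding frame's small-object region is given as a set of block positions. The function name is illustrative:

```python
def classify_block(block_pos, selected_prev_small_regions):
    """block_pos: (row, col) of the to-be-processed picture block.
    selected_prev_small_regions: one set of (row, col) positions per
    selected preceding video frame's small-object region."""
    for region in selected_prev_small_regions:
        # The co-located reference block lay inside a small-object region
        if block_pos in region:
            return "first-type"
    return "second-type"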
(56) During specific implementation, N may be set as needed or empirically, for example, N may be set to 1 or 2.
(57) Preferably, the determining an interframe motion vector of each first-type to-be-processed picture block includes the following steps.
(58) Step A1: Determine a value of a dissimilarity between each candidate motion vector corresponding to the first-type to-be-processed picture block and a background motion vector of a video frame in which the first-type to-be-processed picture block is located.
(59) Step A2: Assign a corresponding weight to each candidate motion vector according to the determined value of the dissimilarity corresponding to each candidate motion vector by using a rule that a smaller weight is assigned to a larger value of the dissimilarity.
(60) Step A3: Determine the interframe motion vector of the first-type to-be-processed picture block according to the weight assigned to each candidate motion vector and a sum of absolute differences (SAD) value of pixels of a picture block pointed to by each candidate motion vector and pixels of the first-type to-be-processed picture block.
(61) It should be noted that, the interframe motion vector of the to-be-processed picture block included in each video frame in the each group of adjacent frames is directional, that is, an interframe motion vector of a to-be-processed picture block included in a former video frame in the each group of adjacent frames is a forward motion vector between the each group of adjacent frames, and an interframe motion vector of a to-be-processed picture block included in a latter video frame in the each group of adjacent frames is a backward motion vector between the each group of adjacent frames.
(62) For example, as shown in
(63) Preferably, an implementation manner for determining each candidate motion vector corresponding to the first-type to-be-processed picture block in step A1 is similar to an implementation manner for determining each candidate motion vector corresponding to a to-be-processed picture block in the prior art, for example, a time-domain candidate motion vector or a space-domain candidate motion vector corresponding to the to-be-processed picture block is determined.
(64) Preferably, in step A1, the value of the dissimilarity between each candidate motion vector corresponding to the first-type to-be-processed picture block and the background motion vector of the video frame in which the first-type to-be-processed picture block is located is a value used to represent a dissimilarity between each candidate motion vector and the background motion vector.
(65) During specific implementation, any value of dissimilarity that can represent a dissimilarity between each candidate motion vector and the background motion vector is applicable to the present disclosure, such as an absolute value of a difference between each candidate motion vector and the background motion vector, a difference between an absolute value of each candidate motion vector and an absolute value of the background motion vector, a sum of differences between each candidate motion vector and the background motion vector in different dimensions (for example, a dimension X and a dimension Y), or a square root value of a sum of squares of differences between each candidate motion vector and the background motion vector in different dimensions.
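The dissimilarity measures enumerated above can be illustrated for 2-D motion vectors (x, y). These are minimal sketches; the per-dimension differences are taken as absolute values here, which is one common variant:

```python
import math

def dissim_l1(mv, bg):
    # sum of per-dimension absolute differences between the two vectors
    return abs(mv[0] - bg[0]) + abs(mv[1] - bg[1])

def dissim_l2(mv, bg):
    # square root of the sum of squares of per-dimension differences
    return math.hypot(mv[0] - bg[0], mv[1] - bg[1])

def dissim_magnitude(mv, bg):
    # difference between the magnitudes (absolute values) of the two vectors
    return abs(math.hypot(mv[0], mv[1]) - math.hypot(bg[0], bg[1]))
```

Note that the magnitude-based measure can be zero for vectors that point in different directions but have equal length, so the per-dimension measures are usually more discriminative.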
(66) During specific implementation, an implementation manner for determining the background motion vector of the video frame is similar to an implementation manner for determining a background motion vector of a video frame in the prior art, and details are not repeatedly described herein.
(67) Preferably, in step A3, an implementation manner for determining the SAD value of the pixels of the picture block pointed to by each candidate motion vector and the pixels of the first-type to-be-processed picture block is similar to an implementation manner for determining a SAD value of pixels of a picture block pointed to by a candidate motion vector and pixels of a to-be-processed picture block in the prior art.
(68) For example, for a candidate motion vector, the picture block pointed to by the candidate motion vector is determined; an absolute value of a difference between each pixel included in that picture block and the corresponding pixel included in the first-type to-be-processed picture block is determined; the absolute values of the differences are summed; and the sum is divided by the quantity of pixels in the picture block. The resulting value is used as the SAD value of the pixels of the picture block pointed to by the candidate motion vector and the pixels of the first-type to-be-processed picture block.
(69) Preferably, in step A3, the determining the interframe motion vector of the first-type to-be-processed picture block includes, for each candidate motion vector, determining a product of the weight assigned to the candidate motion vector and the SAD value of the pixels of the picture block pointed to by the candidate motion vector and the pixels of the first-type to-be-processed picture block, and using a candidate motion vector with a smallest product as the interframe motion vector of the first-type to-be-processed picture block.
(70) It should be noted that, there are multiple implementation manners for determining the interframe motion vector of the first-type to-be-processed picture block according to the weight corresponding to each candidate motion vector of the first-type to-be-processed picture block and the SAD value of the pixels of the picture block pointed to by the candidate motion vector and pixels of the first-type to-be-processed picture block, for example, for each candidate motion vector corresponding to the first-type to-be-processed picture block, a sum of the weight assigned to the candidate motion vector and the SAD value of the pixels of the picture block pointed to by the candidate motion vector and pixels of the first-type to-be-processed picture block is determined, and a candidate motion vector with a smallest sum is used as the interframe motion vector of the first-type to-be-processed picture block. Implementation manners enumerated in this embodiment of the present disclosure are merely exemplary implementation manners.
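Steps A1 through A3 can be sketched as follows. The L1 dissimilarity and the inverse weight mapping (smaller weight for larger dissimilarity) are illustrative assumptions; the SAD helper follows the mean-of-absolute-differences form described in paragraph (68):

```python
def mean_sad(pixels_a, pixels_b):
    # sum of per-pixel absolute differences, divided by the pixel count
    return sum(abs(a - b) for a, b in zip(pixels_a, pixels_b)) / len(pixels_a)

def select_first_type_mv(candidates, bg_mv, sad_of):
    # A1: dissimilarity to the background motion vector (L1, an assumption)
    def dissim(mv):
        return abs(mv[0] - bg_mv[0]) + abs(mv[1] - bg_mv[1])
    # A2: a smaller weight is assigned to a larger dissimilarity value
    def weight(mv):
        return 1.0 / (1.0 + dissim(mv))
    # A3: the candidate with the smallest product of weight and SAD wins
    return min(candidates, key=lambda mv: weight(mv) * sad_of(mv))
```

Because small objects typically move against the background, down-weighting candidates that resemble the background motion vector biases the selection toward the object's true motion even when its SAD is slightly larger.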
(71) Preferably, an implementation manner for determining the interframe motion vector of each second-type to-be-processed picture block is similar to an implementation manner for determining an interframe motion vector of a to-be-processed picture block in the prior art, for example, a candidate motion vector is randomly selected from multiple candidate motion vectors corresponding to the second-type to-be-processed picture block and is used as the interframe motion vector of the second-type to-be-processed picture block.
(72) Preferably, in step 103, the determining a candidate small-object region in the current video frame includes the following steps.
(73) Step B1: Determine, in each reference frame of the current video frame and according to the interframe motion vector of the each group of adjacent frames in the multiple video frames that include the current video frame and the reference frame of the current video frame, a matching block corresponding to each picture block included in the current video frame.
(74) During specific implementation, for each picture block included in the current video frame, implementation manners for determining matching blocks corresponding to the picture blocks are similar, and the following are detailed descriptions about an implementation manner for determining a matching block corresponding to a picture block included in the current video frame.
(75) As shown in
(76) For the reference frame f.sub.n+1 of f.sub.n, a matching block B that is in f.sub.n+1 and that is of the picture block A is determined according to an interframe motion vector mv.sub.oldF (an interframe motion vector between f.sub.n and f.sub.n+1) of the picture block A.
(77) For the reference frame f.sub.n+2 of f.sub.n, a matching block B that is in f.sub.n+1 and that is of the picture block A is determined according to the interframe motion vector mv.sub.oldF (an interframe motion vector between f.sub.n and f.sub.n+1) of the picture block A, and a matching block C that is in f.sub.n+2 and that is of the picture block A is determined according to an interframe motion vector mv.sub.newF (an interframe motion vector between f.sub.n+1 and f.sub.n+2) of the matching block B.
(78) Preferably, in step B1, when reference frames of the current video frame are different, interframe motion vectors of picture blocks included in the current video frame may be different or may be the same.
(79) For example, as shown in
(80) For the reference frame f.sub.n−1 of f.sub.n, an interframe motion vector of each picture block in f.sub.n is a backward motion vector that is of the picture block in f.sub.n and that is between f.sub.n−1 and f.sub.n.
(81) For the reference frame f.sub.n+1 of f.sub.n, an interframe motion vector of each picture block in f.sub.n is a forward motion vector that is of the picture block in f.sub.n and that is between f.sub.n and f.sub.n+1.
(82) For the reference frame f.sub.n+2 of f.sub.n, an interframe motion vector of each picture block in f.sub.n is a forward motion vector that is of the picture block in f.sub.n and that is between f.sub.n and f.sub.n+2.
(83) Step B2: Determine, in each reference frame, a nearby block near the matching block, and determine an interframe motion vector of each nearby block determined in each reference frame.
(84) Preferably, the nearby block near the matching block is a picture block located within a specific range around the matching block.
(85) During specific implementation, the specific range may be determined as needed or empirically.
(86) Step B3: For each picture block included in the current video frame, determine a value of a similarity between the interframe motion vector of each nearby block determined for the picture block and an interframe motion vector of the picture block, and determine a value of a dissimilarity between the interframe motion vector of each nearby block and a global motion vector of the nearby block.
(87) Preferably, for a nearby block determined for the picture block, the determining a value of a similarity between the interframe motion vector of the nearby block and an interframe motion vector of the picture block includes the following: when the interframe motion vector of the nearby block and the interframe motion vector of the picture block are both forward motion vectors or are both backward motion vectors, determining the value of the similarity according to a difference between the interframe motion vector of the nearby block and the interframe motion vector of the picture block; or when one of the two interframe motion vectors is a forward motion vector and the other is a backward motion vector, determining the value of the similarity according to a sum of the interframe motion vector of the nearby block and the interframe motion vector of the picture block.
(88) During implementation, a smaller sum/difference in the foregoing indicates a larger value of the similarity between the interframe motion vector of the nearby block and the interframe motion vector of the picture block.
(89) Preferably, for a nearby block determined for the picture block, the determining a value of a dissimilarity between the interframe motion vector and a global motion vector that are of the nearby block includes determining the global motion vector of the nearby block according to a region in at least two regions corresponding to the nearby block; and determining the value of the dissimilarity between the interframe motion vector of the nearby block and the global motion vector of the nearby block according to a difference between the interframe motion vector of the nearby block and the global motion vector of the nearby block.
(90) During specific implementation, the region in the at least two regions corresponding to the nearby block may be determined according to a region that is in the current video frame and in which the matching block, of the nearby block, in the current video frame is located; or the region in the at least two regions corresponding to the nearby block may be determined according to a region that is in a reference frame including the nearby block and in which the nearby block is located, which may be set as needed.
(91) During implementation, a larger difference between the interframe motion vector of the nearby block and the global motion vector of the nearby block indicates a larger value of a dissimilarity between the interframe motion vector of the nearby block and the global motion vector of the nearby block and a larger value of a dissimilarity between the interframe motion vector of the nearby block and a background motion vector of the reference frame including the nearby block.
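The similarity and dissimilarity values of step B3 can be sketched as follows. The inverse mapping from residual to similarity and the L1 distances are illustrative assumptions; what matters is the direction handling, where a forward and a backward vector describing the same motion cancel when summed:

```python
def similarity_value(nb_mv, nb_is_forward, blk_mv, blk_is_forward):
    # Same direction: residual is the difference; opposite directions:
    # residual is the sum (consistent motion makes the sum near zero).
    if nb_is_forward == blk_is_forward:
        rx, ry = nb_mv[0] - blk_mv[0], nb_mv[1] - blk_mv[1]
    else:
        rx, ry = nb_mv[0] + blk_mv[0], nb_mv[1] + blk_mv[1]
    # smaller residual -> larger similarity value (assumed mapping)
    return 1.0 / (1.0 + abs(rx) + abs(ry))

def dissimilarity_value(nb_mv, global_mv):
    # difference from the global motion vector of the nearby block's region
    return abs(nb_mv[0] - global_mv[0]) + abs(nb_mv[1] - global_mv[1])
```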
(92) Step B4: Determine, according to the determined value of the similarity and the determined value of the dissimilarity, a picture block included in the candidate small-object region in the current video frame, where each picture block included in the candidate small-object region meets the following: in multiple nearby blocks that are determined for the picture block and that are in each reference frame corresponding to the current video frame, there are a first set quantity of nearby blocks whose values of similarities are all greater than or equal to a first threshold and there are a second set quantity of nearby blocks whose values of dissimilarities are all greater than or equal to a second threshold.
(93) Preferably, in step B4, the first set quantity, the second set quantity, the first threshold, and the second threshold may be set as needed or empirically, for example, when each picture block includes 8 pixels*8 pixels, the first set quantity may be 10, the second set quantity is 10, the first threshold is 16, and the second threshold is 16.
(94) During specific implementation, the candidate small-object region in the current video frame is an object region that is different from a background in the current video frame.
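The B4 criterion reduces to two counts over the nearby blocks gathered for a picture block. A minimal sketch, with the greater-than-or-equal comparisons taken directly from step B4 and all names illustrative:

```python
def is_candidate_small_object(neighbor_stats, sim_thresh, dissim_thresh,
                              first_quantity, second_quantity):
    """neighbor_stats: (similarity, dissimilarity) value pairs for the
    nearby blocks across all reference frames of the current frame."""
    n_sim = sum(1 for s, _ in neighbor_stats if s >= sim_thresh)
    n_dis = sum(1 for _, d in neighbor_stats if d >= dissim_thresh)
    # the block joins the candidate region only when both counts suffice
    return n_sim >= first_quantity and n_dis >= second_quantity
```

With the example values from paragraph (93) (thresholds 16, quantities 10), a block qualifies only when at least ten nearby blocks move consistently with it and at least ten move differently from the background.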
(95) Preferably, before step 104, a specific marker may be further marked on each picture block included in the candidate small-object region.
(96) Preferably, the specific marker may be set as needed or empirically, and may be set as a letter, a number, or a symbol, for example, the specific marker may be set to 1 or 0.
(97) Preferably, after the specific marker is marked on each picture block included in the candidate small-object region, the following is further included: saving the picture block marked with the specific marker.
(98) Preferably, in step 104, the performing filtering on the candidate small-object region in the current video frame, and determining a region obtained after the filtering as a small-object region in the current video frame includes the following steps.
(99) Step C1: For each picture block included in the candidate small-object region in the current video frame, determine a value of a first quantity of picture blocks that are marked with the specific marker and that are in a first set range in a horizontal direction of the picture block, and a value of a second quantity of picture blocks that are marked with the specific marker and that are in a second set range in a vertical direction of the picture block.
(100) Step C2: Remove the specific marker of the picture block when the determined value of the first quantity or the determined value of the second quantity is greater than a third threshold.
(101) Step C3: Determine a value of a third quantity of picture blocks that are marked with the specific marker and that are in a third set range around the picture block.
(102) Step C4: Remove the specific marker of the picture block when the determined value of the third quantity is less than a fourth threshold.
(103) Step C5: Determine the picture block that is marked with the specific marker and that is in the current video frame as the small-object region in the current video frame.
(104) The third set range is smaller than the first set range and the second set range, and the fourth threshold is less than the third threshold.
(105) Preferably, in step C1, the first set range and the second set range may be set as needed or empirically, where the first set range may be larger than the second set range, the first set range may be smaller than the second set range, or the first set range may be equal to the second set range.
(106) During specific implementation, in step C2, the third threshold may be set as needed or empirically.
(107) Preferably, the third threshold may be determined according to a definition of the small-object region, for example, when a proportion of the small-object region to the background of the current video frame is less than a specific value, the third threshold may be determined according to the specific value.
(108) For example, assuming that the specific marker of each picture block included in the candidate small-object region in the current video frame is 1 and the third threshold is 6, as shown in
(109) During implementation, a region corresponding to an object other than a small object in the candidate small-object region included in the current video frame is filtered out by executing step C1 and step C2.
(110) Preferably, in step C3, the third set range may be set as needed or empirically.
(111) Preferably, in step C4, the fourth threshold may be set as needed or empirically.
(112) For example, assuming that the specific marker of each picture block included in the candidate small-object region in the current video frame is 1 and the fourth threshold is 2, as shown in
(113) During implementation, a region corresponding to noise in the candidate small-object region included in the current video frame is filtered out by executing step C3 and step C4.
(114) During specific implementation, step C1 may be executed before step C3; step C3 may be executed before step C1; or step C1 and step C3 may be executed at a same time.
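Steps C1 through C5 can be sketched on a set of marked block positions. This sketch runs C1/C2 before C3/C4 (one of the permitted orders); the range and threshold names are illustrative:

```python
def filter_small_object(marks, r1, r2, r3, t3, t4):
    # C1/C2: remove the marker when too many marked blocks lie within the
    # horizontal range r1 or vertical range r2 -- a sign of a large object.
    survivors = set()
    for (r, c) in marks:
        h = sum((r, c + d) in marks for d in range(-r1, r1 + 1) if d != 0)
        v = sum((r + d, c) in marks for d in range(-r2, r2 + 1) if d != 0)
        if h <= t3 and v <= t3:
            survivors.add((r, c))
    # C3/C4: remove the marker when too few marked blocks remain within
    # the smaller surrounding range r3 -- a sign of isolated noise.
    result = set()
    for (r, c) in survivors:
        near = sum((r + dr, c + dc) in survivors
                   for dr in range(-r3, r3 + 1)
                   for dc in range(-r3, r3 + 1)
                   if (dr, dc) != (0, 0))
        if near >= t4:
            result.add((r, c))
    return result  # C5: the blocks still marked form the small-object region
```

A compact 3x3 cluster of marked blocks survives both passes, while a long marked strip is removed as a large object and a lone marked block is removed as noise, matching the intent of paragraphs (109) and (113).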
(115) With reference to
(116) As shown in
(117) Step 601: According to a to-be-processed picture block included in a small-object region determined in f.sub.n−1, classify to-be-processed picture blocks included in each video frame of f.sub.n−1, f.sub.n, f.sub.n+1, and f.sub.n+2 into a first-type to-be-processed picture block and a second-type to-be-processed picture block, determine an interframe motion vector of each first-type to-be-processed picture block and an interframe motion vector of each second-type to-be-processed picture block, and obtain an interframe motion vector between f.sub.n−1 and f.sub.n, an interframe motion vector between f.sub.n and f.sub.n+1, and an interframe motion vector between f.sub.n+1 and f.sub.n+2.
(118) Step 602: Divide f.sub.n into four parts, and collect statistics on a global motion vector of each part to obtain gVec[i], where i={0, 1, 2, 3}.
(119) Step 603: For a picture block included in f.sub.n, determine, according to an interframe motion vector mv.sub.oldF of the picture block, a matching block that is in f.sub.n+1 and that is of the picture block, and search for each nearby block corresponding to the matching block.
(120) Step 604: For a nearby block corresponding to the matching block in f.sub.n+1, determine an interframe motion vector mv.sub.newB of the nearby block, determine a sum of mv.sub.newB and mv.sub.oldF, and add 1 to a variable Ldiff0 when the sum of mv.sub.newB and mv.sub.oldF is less than a threshold objLocal.
(121) During specific implementation, an initial value of Ldiff0 may be set to 0.
(122) Step 605: For a nearby block corresponding to the matching block in f.sub.n+1, determine, according to a location of the nearby block in f.sub.n+1, that a global motion vector corresponding to the nearby block is gVec[1], determine a difference between mv.sub.newB and gVec[1], and add 1 to a variable Gdiff0 when the difference between mv.sub.newB and gVec[1] is greater than a threshold objGlobal.
(123) During specific implementation, an initial value of Gdiff0 may be set to 0.
(124) Step 606: Obtain Ldiff0 and Gdiff0 after each nearby block corresponding to the matching block in f.sub.n+1 is traversed.
(125) Step 607: For a picture block included in f.sub.n, determine, according to an interframe motion vector mv.sub.oldF of the picture block, a matching block that is in f.sub.n+1 and that is of the picture block, determine, according to an interframe motion vector mv.sub.newF of the matching block, a matching block that is in f.sub.n+2 and that is of the picture block, and search for each nearby block corresponding to the matching block.
(126) Step 608: For a nearby block corresponding to the matching block in f.sub.n+2, determine an interframe motion vector mv.sub.refB of the nearby block, determine a sum of mv.sub.refB and mv.sub.oldF, and add 1 to a variable Ldiff1 when the sum of mv.sub.refB and mv.sub.oldF is less than the threshold objLocal.
(127) During specific implementation, an initial value of Ldiff1 may be set to 0.
(128) Step 609: For a nearby block corresponding to the matching block in f.sub.n+2, determine, according to a location of the nearby block in f.sub.n+2, that a global motion vector corresponding to the nearby block is gVec[1], determine a difference between mv.sub.refB and gVec[1], and add 1 to a variable Gdiff1 when the difference between mv.sub.refB and gVec[1] is greater than the threshold objGlobal.
(129) During specific implementation, an initial value of Gdiff1 may be set to 0.
(130) Step 610: Obtain Ldiff1 and Gdiff1 after each nearby block corresponding to the matching block in f.sub.n+2 is traversed.
(131) Step 611: For a picture block included in f.sub.n, determine, according to an interframe motion vector mv.sub.oldB of the picture block, a matching block that is in f.sub.n−1 and that is of the picture block, and search for each nearby block corresponding to the matching block.
(132) Step 612: For a nearby block corresponding to the matching block in f.sub.n−1, determine an interframe motion vector mv.sub.refF of the nearby block, determine a difference between mv.sub.refF and mv.sub.oldF, and add 1 to a variable Ldiff2 when the difference between mv.sub.refF and mv.sub.oldF is less than the threshold objLocal.
(133) During specific implementation, an initial value of Ldiff2 may be set to 0.
(134) Step 613: For a nearby block corresponding to the matching block in f.sub.n−1, determine, according to a location of the nearby block in f.sub.n−1, that a global motion vector corresponding to the nearby block is gVec[1], determine a difference between mv.sub.refF and gVec[1], and add 1 to a variable Gdiff2 when the difference between mv.sub.refF and gVec[1] is greater than the threshold objGlobal.
(135) During specific implementation, an initial value of Gdiff2 may be set to 0.
(136) Step 614: Obtain Ldiff2 and Gdiff2 after each nearby block corresponding to the matching block in f.sub.n−1 is traversed.
(137) Step 615: When Gdiff0>A, Gdiff1>A, Gdiff2>A, Ldiff0>B, Ldiff1>B, and Ldiff2>B, determine that a candidate small-object region in f.sub.n includes the picture block, and buffer the picture block in markFlagBuf0.sub.ij.
(138) During specific implementation, A and B are thresholds, and markFlagBuf0.sub.ij may be a memory that stores the picture block in a matrix form.
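The decision in step 615 reduces to a conjunction of threshold tests over the six counters. A minimal sketch, with the thresholds A and B passed in as parameters:

```python
def is_candidate_small_object(gdiffs, ldiffs, A, B):
    """Step 615: a picture block belongs to the candidate small-object region
    only when every Gdiff counter exceeds A (its neighbors' motion deviates
    from the global motion) and every Ldiff counter exceeds B (its neighbors'
    motion agrees with the block's own motion)."""
    return all(g > A for g in gdiffs) and all(l > B for l in ldiffs)
```

A block satisfying both conditions moves coherently with its neighborhood yet differently from the background, which is the signature of a small moving object.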
(139) Step 616: Determine the candidate small-object region in f.sub.n by traversing each picture block included in f.sub.n, and mark each picture block included in the candidate small-object region with 1.
(140) Step 617: For each picture block that is marked with 1 and that is in f.sub.n, determine a value of a first quantity of picture blocks that are marked with 1 and that are in a first set range in a horizontal direction of the picture block and a value of a second quantity of picture blocks that are marked with 1 and that are in a second set range in a vertical direction of the picture block, and alter the marker of the picture block to 0 when the value of the first quantity or the value of the second quantity is greater than a threshold C.
(141) Step 618: For each picture block that is marked with 1 and that is in f.sub.n, determine a value of a third quantity of picture blocks that are marked with 1 and that are in a third set range around the picture block, and alter the marker of the picture block to 0 when the value of the third quantity is less than a threshold D.
(142) Step 619: Determine the picture block that is marked with 1 and that is in f.sub.n as the small-object region in f.sub.n.
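The marker filtering of steps 616 to 619 can be sketched as follows, assuming the markers are held in a 2-D grid of 0/1 flags (markFlagBuf0) and the set ranges are given as half-widths r1, r2, r3; these parameter names and the sequential application of the two passes are illustrative choices, not mandated by the text.

```python
def filter_candidate_markers(mark, C, D, r1, r2, r3):
    """Step 617 clears blocks with too many marked blocks in a horizontal or
    vertical range (count > C, which removes elongated structures that are
    not small objects); step 618 then clears isolated blocks with too few
    marked neighbors in a small surrounding range (count < D). The text
    requires r3 < r1, r2 and D < C."""
    h, w = len(mark), len(mark[0])

    def count_line(i, j, di, dj, r):
        # Marked blocks within +/- r steps of (i, j) along direction (di, dj).
        total = 0
        for k in range(-r, r + 1):
            if k == 0:
                continue
            y, x = i + k * di, j + k * dj
            if 0 <= y < h and 0 <= x < w and mark[y][x]:
                total += 1
        return total

    # Step 617: remove line-like runs of markers.
    step1 = [row[:] for row in mark]
    for i in range(h):
        for j in range(w):
            if mark[i][j] and (count_line(i, j, 0, 1, r1) > C or
                               count_line(i, j, 1, 0, r2) > C):
                step1[i][j] = 0

    # Step 618: remove isolated markers using a square surrounding range.
    result = [row[:] for row in step1]
    for i in range(h):
        for j in range(w):
            if step1[i][j]:
                n = sum(step1[y][x]
                        for y in range(max(0, i - r3), min(h, i + r3 + 1))
                        for x in range(max(0, j - r3), min(w, j + r3 + 1))
                        if (y, x) != (i, j))
                if n < D:
                    result[i][j] = 0
    return result  # step 619: the remaining 1s form the small-object region
```

The first pass suppresses false candidates along edges and textured lines; the second suppresses isolated noise blocks, leaving compact clusters as the small-object region.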
(143) As shown in
(144) Step 701: Determine a small-object region in a former video frame and a small-object region in a latter video frame, where the former video frame and the latter video frame are two consecutively adjacent video frames.
(145) Step 702: Perform smooth filtering on an interframe motion vector corresponding to a region except the small-object region in the former video frame and the small-object region in the latter video frame.
(146) Step 703: Interpolate a frame between the two consecutively adjacent video frames according to an interframe motion vector corresponding to the small-object region in the former video frame, an interframe motion vector corresponding to the small-object region in the latter video frame, and an interframe motion vector obtained after the smooth filtering.
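The essential point of steps 701 to 703 is that smoothing skips the small-object regions. A minimal sketch of step 702, assuming a block-level motion field stored as a 2-D grid of (x, y) vectors and a simple box average as the smoothing filter (the description leaves the filter to the prior art):

```python
def smooth_motion_field(mv_field, small_object, radius=1):
    """Average each block's interframe motion vector with its neighbors, but
    leave blocks inside a small-object region untouched so that the small
    object's distinctive motion is not smeared into the background motion."""
    h, w = len(mv_field), len(mv_field[0])
    out = [row[:] for row in mv_field]
    for i in range(h):
        for j in range(w):
            if small_object[i][j]:
                continue  # step 702: do not smooth small-object motion vectors
            sx = sy = n = 0
            for y in range(max(0, i - radius), min(h, i + radius + 1)):
                for x in range(max(0, j - radius), min(w, j + radius + 1)):
                    sx += mv_field[y][x][0]
                    sy += mv_field[y][x][1]
                    n += 1
            out[i][j] = (sx / n, sy / n)
    return out
```

If the small object's vectors were averaged with the surrounding background vectors, the interpolated frame would place the object's pixels at a compromise position, producing the pixel absence the method is designed to avoid.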
(147) Preferably, in step 701, the method for determining a small-object region in a current video frame shown in
(148) During specific implementation, an interframe motion vector between the two consecutively adjacent video frames may be determined according to an interframe motion vector between the two consecutively adjacent video frames that is determined when the small-object region in the former video frame and the small-object region in the latter video frame are determined.
(149) Preferably, in step 702, an implementation manner for performing the smooth filtering on the interframe motion vector corresponding to the region except the small-object region in the former video frame and the small-object region in the latter video frame is similar to an implementation manner for performing smooth filtering on an interframe motion vector between two adjacent video frames in the prior art. However, in the present disclosure, the smooth filtering is not performed on interframe motion vectors corresponding to small-object regions in two adjacent video frames, and the smooth filtering is performed only on an interframe motion vector corresponding to a region except the small-object regions in the two adjacent video frames.
(150) Preferably, in step 703, an implementation manner for interpolating the frame between the two consecutively adjacent video frames according to the interframe motion vector corresponding to the small-object region in the former video frame, the interframe motion vector corresponding to the small-object region in the latter video frame, and the interframe motion vector obtained after the smooth filtering is similar to an implementation manner for interpolating a frame between two adjacent video frames according to an interframe motion vector between the two adjacent video frames in the prior art, and details are not repeatedly described herein.
(151) During implementation, a small-object region in each video frame of two consecutively adjacent video frames is determined, and smooth filtering is not performed on the interframe motion vectors corresponding to the small-object regions determined in the two consecutively adjacent video frames, so that absence of pixels of the small object in the generated interpolated frame is avoided, thereby improving display quality of a video.
(152) As shown in
(153) Step 801: Classify, according to a to-be-processed picture block included in a small-object region in preceding N video frames of the two consecutively adjacent video frames, to-be-processed picture blocks included in each video frame of the two consecutively adjacent video frames into a first-type to-be-processed picture block and a second-type to-be-processed picture block.
(154) Step 802: Separately determine an interframe motion vector of each first-type to-be-processed picture block and an interframe motion vector of each second-type to-be-processed picture block, and use the determined interframe motion vector of each first-type to-be-processed picture block included in each video frame of the two consecutively adjacent video frames and the determined interframe motion vector of each second-type to-be-processed picture block included in each video frame of the two consecutively adjacent video frames as an interframe motion vector between the two consecutively adjacent video frames.
(155) Step 803: Interpolate a frame between the two consecutively adjacent video frames according to the obtained interframe motion vector between the two consecutively adjacent video frames.
(156) N is a positive integer.
(157) Preferably, N may be set as needed or empirically.
(158) Preferably, in step 801, the method for determining a small-object region in a current video frame shown in
(159) Preferably, in step 801, the classifying, according to a to-be-processed picture block included in a small-object region in preceding N video frames of the two consecutively adjacent video frames, to-be-processed picture blocks included in each video frame of the two consecutively adjacent video frames into a first-type to-be-processed picture block and a second-type to-be-processed picture block includes, for the to-be-processed picture block included in each video frame of the two consecutively adjacent video frames, executing the following: selecting at least one video frame from the preceding N video frames of the two consecutively adjacent video frames; determining, according to the small-object region in the preceding N video frames of the two consecutively adjacent video frames, whether a reference picture block that is in the selected at least one video frame and that is corresponding to the to-be-processed picture block is a picture block included in the small-object region; and if yes, determining that the to-be-processed picture block is a first-type to-be-processed picture block; otherwise, determining that the to-be-processed picture block is a second-type to-be-processed picture block.
(160) Preferably, the selecting at least one video frame from the preceding N video frames includes selecting at least one preceding continuous video frame of the two consecutively adjacent video frames.
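The classification rule of step 801 can be sketched as follows. The sketch assumes picture blocks are identified by (row, column) coordinates and each small-object region is given as a set of such coordinates; the reference-block lookup (e.g. by following a motion vector into the preceding frame) is left to the caller.

```python
def classify_block(ref_blocks, small_regions):
    """ref_blocks: the reference picture block of the to-be-processed block
    in each selected preceding video frame; small_regions: the set of blocks
    forming the small-object region of each corresponding frame. The block
    is first-type when any reference block lies in a small-object region,
    and second-type otherwise."""
    for blk, region in zip(ref_blocks, small_regions):
        if blk in region:
            return "first-type"
    return "second-type"
```

First-type blocks are those likely to carry the small object, so their interframe motion vectors are determined with the background-aware weighting of steps D1 to D3 rather than by plain block matching.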
(161) Preferably, in step 802, the determining an interframe motion vector of each first-type to-be-processed picture block includes the following steps.
(162) Step D1: Determine a value of a dissimilarity between each candidate motion vector corresponding to the first-type to-be-processed picture block and a background motion vector of a video frame in which the first-type to-be-processed picture block is located.
(163) During specific implementation, an implementation manner of step D1 is similar to the implementation manner of step A1 in the embodiments of the present disclosure, and details are not repeatedly described herein.
(164) Step D2: Assign a corresponding weight to each candidate motion vector according to the determined value of the dissimilarity corresponding to each candidate motion vector by using a rule that a smaller weight is assigned to a larger value of the dissimilarity.
(165) During specific implementation, an implementation manner of step D2 is similar to the implementation manner of step A2 in the embodiments of the present disclosure, and details are not repeatedly described herein.
(166) Step D3: Determine the interframe motion vector of the first-type to-be-processed picture block according to the weight assigned to each candidate motion vector and a SAD value of pixels of a picture block pointed to by each candidate motion vector and pixels of the first-type to-be-processed picture block.
(167) During specific implementation, an implementation manner of step D3 is similar to the implementation manner of step A3 in the embodiments of the present disclosure, and details are not repeatedly described herein.
(168) Preferably, in step D3, the determining the interframe motion vector of the first-type to-be-processed picture block includes, for each candidate motion vector, determining a product of the weight assigned to the candidate motion vector and the SAD value of the pixels of the picture block pointed to by the candidate motion vector and the pixels of the first-type to-be-processed picture block, and using a candidate motion vector with a smallest product as the interframe motion vector of the first-type to-be-processed picture block.
(169) It should be noted that, there are multiple implementation manners for determining the interframe motion vector of the first-type to-be-processed picture block according to the weight corresponding to each candidate motion vector of the first-type to-be-processed picture block and the SAD value of the pixels of the picture block pointed to by the candidate motion vector and pixels of the first-type to-be-processed picture block, for example, for each candidate motion vector corresponding to the first-type to-be-processed picture block, a sum of the weight assigned to the candidate motion vector and the SAD value of the pixels of the picture block pointed to by the candidate motion vector and pixels of the first-type to-be-processed picture block is determined, and a candidate motion vector with a smallest sum is used as the interframe motion vector of the first-type to-be-processed picture block. Implementation manners enumerated in this embodiment of the present disclosure are merely exemplary implementation manners.
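Steps D1 to D3 with the product rule of step D3 can be sketched as follows. The 1/(1 + d) weighting is an assumed form; the description only requires that a smaller weight be assigned to a larger dissimilarity. SAD computation is abstracted behind a callable.

```python
def select_interframe_mv(candidates, background_mv, sad_of):
    """For a first-type to-be-processed picture block, score every candidate
    motion vector by (weight x SAD) and keep the candidate with the smallest
    product. Because the weight shrinks as the candidate's dissimilarity to
    the background motion vector grows, candidates whose motion differs from
    the background (the small object's own motion) are favored."""
    best, best_score = None, float("inf")
    for mv in candidates:
        d = abs(mv[0] - background_mv[0]) + abs(mv[1] - background_mv[1])  # step D1
        w = 1.0 / (1.0 + d)                                                # step D2
        score = w * sad_of(mv)                                             # step D3
        if score < best_score:
            best, best_score = mv, score
    return best
```

With equal SAD values, the candidate farthest from the background motion wins, which prevents the small object's true motion vector from being outvoted by the dominant background match.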
(170) Preferably, in step 803, an implementation manner for interpolating the frame between the two consecutively adjacent video frames according to the obtained interframe motion vector between the two consecutively adjacent video frames is similar to an implementation manner for interpolating a frame between two adjacent video frames according to an interframe motion vector between the two adjacent video frames in the prior art, and details are not repeatedly described herein.
(171) During implementation, smooth filtering is not performed on an interframe motion vector between two consecutively adjacent video frames, and a frame is interpolated between the two adjacent video frames directly according to an obtained interframe motion vector between the two adjacent video frames, so that absence of pixels of the small object in the generated interpolated frame is avoided, thereby improving playback quality of a high-definition video or an ultra-high-definition video.
(172) It should be noted that, there are multiple implementation manners of the method for interpolating a frame between two adjacent video frames in the present disclosure, and the following describes, by using interpolation of a frame between f.sub.n and f.sub.n+1 as an example, in detail three exemplary implementation manners for interpolating a frame between two adjacent video frames according to this embodiment of the present disclosure.
(173) It is assumed that the reference frames of f.sub.n are f.sub.n−1 and f.sub.n+1, and that the reference frames of f.sub.n+1 are f.sub.n and f.sub.n+2.
Implementation Manner 1
(174) As shown in
(175) Step 901: Perform motion estimation on f.sub.n−1, f.sub.n, and f.sub.n+1, and determine an interframe motion vector of any two adjacent frames in multiple video frames that include f.sub.n−1, f.sub.n and f.sub.n+1.
(176) Step 902: Divide f.sub.n into four regions, and determine a global motion vector corresponding to each region.
(177) Step 903: Determine a candidate small-object region in f.sub.n according to the interframe motion vector of the any two adjacent frames in the multiple video frames that include f.sub.n−1, f.sub.n, and f.sub.n+1 and the determined global motion vector corresponding to each region.
(178) Step 904: Perform filtering on the candidate small-object region in f.sub.n, and determine a region obtained after the filtering as a small-object region in f.sub.n.
(179) Step 905: Perform motion estimation on f.sub.n, f.sub.n+1, and f.sub.n+2, and determine an interframe motion vector of any two adjacent frames in multiple video frames that include f.sub.n, f.sub.n+1, and f.sub.n+2.
(180) During specific implementation, because the motion estimation is already performed on f.sub.n and f.sub.n+1 in step 901, the motion estimation may be performed only on f.sub.n+1 and f.sub.n+2 in step 905 to obtain an interframe motion vector between f.sub.n+1 and f.sub.n+2.
(181) Step 906: Divide f.sub.n+1 into four regions, and determine a global motion vector corresponding to each region.
(182) Step 907: Determine a candidate small-object region in f.sub.n+1 according to the interframe motion vector of the any two adjacent frames in the multiple video frames that include f.sub.n, f.sub.n+1, and f.sub.n+2 and the determined global motion vector corresponding to each region.
(183) Step 908: Perform filtering on the candidate small-object region in f.sub.n+1, and determine a region obtained after the filtering as a small-object region in f.sub.n+1.
(184) Step 909: Perform smooth filtering on an interframe motion vector corresponding to a region except the small-object region in f.sub.n and the small-object region in f.sub.n+1.
(185) It should be noted that a forward motion vector and a backward motion vector between f.sub.n and f.sub.n+1 are determined in both step 901 and step 905; the interframe motion vectors of f.sub.n and f.sub.n+1 in step 909 are the forward motion vector and the backward motion vector that are between f.sub.n and f.sub.n+1 and that are determined in step 905.
(186) It should be noted that step 901 to step 909 of this embodiment of the present disclosure describe determining the small-object regions in f.sub.n and f.sub.n+1 at different times; in a specific application, however, the small-object regions in f.sub.n and f.sub.n+1 may be determined at the same time, and when the small-object regions in f.sub.n and f.sub.n+1 are determined at the same time, motion estimation needs to be performed on f.sub.n−1, f.sub.n, f.sub.n+1, and f.sub.n+2 at the same time. The steps of determining the small-object regions in f.sub.n and f.sub.n+1 at the same time are as follows.
(187) Step 1: Perform motion estimation on f.sub.n−1, f.sub.n, f.sub.n+1, and f.sub.n+2, and determine an interframe motion vector of any two adjacent frames in multiple video frames that include f.sub.n−1, f.sub.n, f.sub.n+1, and f.sub.n+2.
(188) Step 2: Separately divide f.sub.n and f.sub.n+1 into four regions, and determine a global motion vector corresponding to each region.
(189) Step 3: Determine a candidate small-object region in f.sub.n according to an interframe motion vector of any two adjacent frames in multiple video frames that include f.sub.n−1, f.sub.n, and f.sub.n+1 and the determined global motion vector corresponding to each region in f.sub.n; and determine a candidate small-object region in f.sub.n+1 according to an interframe motion vector of any two adjacent frames in multiple video frames that include f.sub.n, f.sub.n+1, and f.sub.n+2 and the determined global motion vector corresponding to each region in f.sub.n+1.
(190) Step 4: Perform filtering on the candidate small-object region in f.sub.n, and determine a region obtained after the filtering as the small-object region in f.sub.n; and perform filtering on the candidate small-object region in f.sub.n+1, and determine a region obtained after the filtering as the small-object region in f.sub.n+1.
(191) Step 910: Interpolate a frame between f.sub.n and f.sub.n+1 according to an interframe motion vector corresponding to the small-object region in f.sub.n, an interframe motion vector corresponding to the small-object region in f.sub.n+1, and an interframe motion vector obtained after the smooth filtering.
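The description leaves the interpolation of step 910 to the prior art; one common realization is bilateral motion compensation, sketched below for a single block. Frames are modeled as 2-D lists of luma samples and the motion vector is in whole samples; the halving and the simple average are assumptions of this sketch, not requirements of the text.

```python
def interpolate_block(prev_frame, next_frame, mv, block_tl, bs):
    """Build a bs x bs block of the interpolated frame at top-left block_tl
    by averaging the block displaced by -mv/2 in f_n (prev_frame) with the
    block displaced by +mv/2 in f_{n+1} (next_frame), i.e. sampling both
    frames halfway along the interframe motion vector."""
    y0, x0 = block_tl
    dx, dy = mv[0] // 2, mv[1] // 2
    out = [[0] * bs for _ in range(bs)]
    for y in range(bs):
        for x in range(bs):
            a = prev_frame[y0 + y - dy][x0 + x - dx]
            b = next_frame[y0 + y + dy][x0 + x + dx]
            out[y][x] = (a + b) // 2
    return out
```

For small-object blocks the unsmoothed vector corresponding to the small-object region is used here, while the remaining blocks use the vectors obtained after the smooth filtering of step 909.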
(192) During specific implementation, to-be-processed picture blocks included in each video frame in two consecutively adjacent video frames may be classified according to a small-object region determined in a video frame before the two consecutively adjacent video frames, which is described in the following Implementation Manner 2.
Implementation Manner 2
(193) As shown in
(194) Step 1001: Classify, according to a to-be-processed picture block included in a small-object region determined in f.sub.n−1, to-be-processed picture blocks included in each video frame in multiple video frames that include f.sub.n−1, f.sub.n, and f.sub.n+1 into a first-type to-be-processed picture block and a second-type to-be-processed picture block, separately determine an interframe motion vector of each first-type to-be-processed picture block and an interframe motion vector of each second-type to-be-processed picture block, and obtain an interframe motion vector of any two adjacent frames in the multiple video frames that include f.sub.n−1, f.sub.n, and f.sub.n+1.
(195) Step 1002: Divide f.sub.n into four regions, and determine a global motion vector corresponding to each region.
(196) Step 1003: Determine a candidate small-object region in f.sub.n according to the interframe motion vector of the any two adjacent frames in the multiple video frames that include f.sub.n−1, f.sub.n, and f.sub.n+1 and the determined global motion vector corresponding to each region.
(197) Step 1004: Perform filtering on the candidate small-object region in f.sub.n, and determine a region obtained after the filtering as a small-object region in f.sub.n.
(198) Step 1005: Classify, according to to-be-processed picture blocks included in the small-object regions determined in f.sub.n−1 and f.sub.n, to-be-processed picture blocks included in each video frame in multiple video frames that include f.sub.n, f.sub.n+1, and f.sub.n+2 into a first-type to-be-processed picture block and a second-type to-be-processed picture block, separately determine an interframe motion vector of each first-type to-be-processed picture block and an interframe motion vector of each second-type to-be-processed picture block, and obtain an interframe motion vector of any two adjacent frames in the multiple video frames that include f.sub.n, f.sub.n+1, and f.sub.n+2.
(199) Step 1006: Divide f.sub.n+1 into four regions, and determine a global motion vector corresponding to each region.
(200) Step 1007: Determine a candidate small-object region in f.sub.n+1 according to the interframe motion vector of the any two adjacent frames in the multiple video frames that include f.sub.n, f.sub.n+1, and f.sub.n+2 and the determined global motion vector corresponding to each region.
(201) Step 1008: Perform filtering on the candidate small-object region in f.sub.n+1, and determine a region obtained after the filtering as a small-object region in f.sub.n+1.
(202) Step 1009: Perform smooth filtering on an interframe motion vector corresponding to a region except the small-object region in f.sub.n and the small-object region in f.sub.n+1.
(203) Step 1010: Interpolate a frame between f.sub.n and f.sub.n+1 according to an interframe motion vector corresponding to the small-object region in f.sub.n, an interframe motion vector corresponding to the small-object region in f.sub.n+1, and an interframe motion vector obtained after the smooth filtering.
(204) During specific implementation, smooth filtering may not be performed on an interframe motion vector between f.sub.n and f.sub.n+1, and a frame is interpolated between f.sub.n and f.sub.n+1 directly according to the interframe motion vector between f.sub.n and f.sub.n+1, which is described in the following Implementation Manner 3.
Implementation Manner 3
(205) As shown in
(206) Step 1101: Determine a small-object region in f.sub.n−1.
(207) Step 1102: Classify, according to a to-be-processed picture block included in the small-object region determined in f.sub.n−1, to-be-processed picture blocks included in f.sub.n and f.sub.n+1 into a first-type to-be-processed picture block and a second-type to-be-processed picture block, separately determine an interframe motion vector of each first-type to-be-processed picture block and an interframe motion vector of each second-type to-be-processed picture block, and obtain an interframe motion vector between f.sub.n and f.sub.n+1.
(208) Step 1103: Interpolate a frame between f.sub.n and f.sub.n+1 according to the interframe motion vector between f.sub.n and f.sub.n+1.
(209) Based on the same disclosure concept, embodiments of the present disclosure further provide an apparatus for determining a small-object region in a video frame and an apparatus for interpolating a frame between two adjacent video frames, of which principles are similar to those of the method for determining a small-object region in a video frame and those of the method for interpolating a frame between two adjacent video frames; therefore, during implementation, reference may be made to the methods, and details are not repeatedly described.
(211) The reference frame of the current video frame includes one or more of preceding continuous video frames of the current video frame and following continuous video frames of the current video frame.
(212) Preferably, the apparatus further includes a classification unit 1205 configured to execute, before the interframe motion vector of the each group of adjacent frames in the multiple video frames that include the current video frame and the reference frame of the current video frame is determined, the following for a to-be-processed picture block included in each video frame in the each group of adjacent frames in the multiple video frames that include the current video frame and the reference frame of the current video frame: selecting at least one video frame from preceding N video frames of the current video frame; determining, according to a small-object region determined in the preceding N video frames of the current video frame, whether a reference picture block that is in the selected at least one video frame and that is corresponding to the to-be-processed picture block is a picture block included in the small-object region; and if yes, determining that the to-be-processed picture block is a first-type to-be-processed picture block; otherwise, determining that the to-be-processed picture block is a second-type to-be-processed picture block.
(213) The interframe motion vector determining unit 1202 is configured to separately determine an interframe motion vector of each first-type to-be-processed picture block and an interframe motion vector of each second-type to-be-processed picture block; and use the determined interframe motion vector of each first-type to-be-processed picture block included in each video frame in the each group of adjacent frames and the determined interframe motion vector of each second-type to-be-processed picture block included in each video frame in the each group of adjacent frames as the interframe motion vector of the each group of adjacent frames.
(214) Preferably, the interframe motion vector determining unit 1202 is configured to determine a value of a dissimilarity between each candidate motion vector corresponding to the first-type to-be-processed picture block and a background motion vector of a video frame in which the first-type to-be-processed picture block is located; assign a corresponding weight to each candidate motion vector according to the determined value of the dissimilarity corresponding to each candidate motion vector by using a rule that a smaller weight is assigned to a larger value of the dissimilarity; and determine the interframe motion vector of the first-type to-be-processed picture block according to the weight assigned to each candidate motion vector and a SAD value of pixels of a picture block pointed to by each candidate motion vector and pixels of the first-type to-be-processed picture block.
(215) Preferably, the classification unit 1205 is configured to select at least one preceding continuous video frame of the current video frame.
(216) Preferably, the interframe motion vector determining unit 1202 is configured to, for each candidate motion vector, determine a product of the weight assigned to the candidate motion vector and the SAD value of the pixels of the picture block pointed to by the candidate motion vector and the pixels of the first-type to-be-processed picture block, and use a candidate motion vector with a smallest product as the interframe motion vector of the first-type to-be-processed picture block.
(217) Preferably, the region determining unit 1203 is configured to determine, in each reference frame of the current video frame and according to the interframe motion vector of the each group of adjacent frames in the multiple video frames that include the current video frame and the reference frame of the current video frame, a matching block corresponding to each picture block included in the current video frame; determine, in each reference frame, a nearby block near the matching block, and determine an interframe motion vector of each nearby block determined in each reference frame; for each picture block included in the current video frame, determine a value of a similarity between the interframe motion vector of each nearby block determined for the picture block and an interframe motion vector of the picture block, and determine a value of a dissimilarity between the interframe motion vector and a global motion vector that are of each nearby block; and determine, according to the determined value of the similarity and the determined value of the dissimilarity, a picture block included in the candidate small-object region in the current video frame, where each picture block included in the candidate small-object region meets the following: in multiple nearby blocks that are determined for the picture block and that are included in each reference frame corresponding to the current video frame, there are a first set quantity of nearby blocks whose values of similarities are all greater than or equal to a first threshold and there are a second set quantity of nearby blocks whose values of dissimilarities are all greater than or equal to a second threshold.
(218) Preferably, the apparatus further includes a marking unit 1206 configured to mark, before filtering is performed on the candidate small-object region in the current video frame, a specific marker on each picture block included in the candidate small-object region.
(219) The processing unit 1204 is configured to, for each picture block included in the candidate small-object region in the current video frame, determine a value of a first quantity of picture blocks that are marked with the specific marker and that are in a first set range in a horizontal direction of the picture block, and a value of a second quantity of picture blocks that are marked with the specific marker and that are in a second set range in a vertical direction of the picture block; remove the specific marker of the picture block when the determined value of the first quantity or the determined value of the second quantity is greater than a third threshold; determine a value of a third quantity of picture blocks that are marked with the specific marker and that are in a third set range around the picture block; remove the specific marker of the picture block when the determined value of the third quantity is less than a fourth threshold; and determine the picture block that is marked with the specific marker and that is in the current video frame as the small-object region in the current video frame; where the third set range is smaller than the first set range and the second set range, and the fourth threshold is less than the third threshold.
(222) Preferably, the classification unit 1401 is configured to execute the following for the to-be-processed picture block included in each video frame of the two consecutively adjacent video frames: selecting at least one video frame from the preceding N video frames of the two consecutively adjacent video frames; determining, according to the small-object region in the preceding N video frames of the two consecutively adjacent video frames, whether a reference picture block that is in the selected at least one video frame and that is corresponding to the to-be-processed picture block is a picture block included in the small-object region; and if yes, determining that the to-be-processed picture block is a first-type to-be-processed picture block; otherwise, determining that the to-be-processed picture block is a second-type to-be-processed picture block.
(223) Preferably, the processing unit 1402 is configured to determine a value of a dissimilarity between each candidate motion vector corresponding to the first-type to-be-processed picture block and a background motion vector of a video frame in which the first-type to-be-processed picture block is located; assign a corresponding weight to each candidate motion vector according to the determined value of the dissimilarity corresponding to each candidate motion vector by using a rule that a smaller weight is assigned to a larger value of the dissimilarity; and determine the interframe motion vector of the first-type to-be-processed picture block according to the weight assigned to each candidate motion vector and a SAD value of pixels of a picture block pointed to by each candidate motion vector and pixels of the first-type to-be-processed picture block.
(224) Preferably, the classification unit 1401 is configured to select at least one preceding continuous video frame of the two consecutively adjacent video frames.
(225) Preferably, the processing unit 1402 is configured to, for each candidate motion vector, determine a product of the weight assigned to the candidate motion vector and the SAD value of the pixels of the picture block pointed to by the candidate motion vector and the pixels of the first-type to-be-processed picture block, and use a candidate motion vector with a smallest product as the interframe motion vector of the first-type to-be-processed picture block.
(227) The reference frame of the current video frame includes one or more of preceding continuous video frames of the current video frame and following continuous video frames of the current video frame.
(228) During specific implementation, the processor 1501 may directly invoke a required vector or required information from the memory 1502; or the processor 1501 may send a vector or information acquiring instruction to the memory 1502, and the memory 1502 sends, to the processor 1501, a vector or information requested in the instruction sent by the processor 1501.
(229) Preferably, the processor 1501 is further configured to execute, before the interframe motion vector of the each group of adjacent frames in the multiple video frames that include the current video frame and the reference frame of the current video frame is determined, the following for a to-be-processed picture block included in each video frame in the each group of adjacent frames in the multiple video frames that include the current video frame and the reference frame of the current video frame: selecting at least one video frame from preceding N video frames of the current video frame; determining, according to a small-object region determined in the preceding N video frames of the current video frame, whether a reference picture block that is in the selected at least one video frame and that is corresponding to the to-be-processed picture block is a picture block included in the small-object region; and if yes, determining that the to-be-processed picture block is a first-type to-be-processed picture block; otherwise, determining that the to-be-processed picture block is a second-type to-be-processed picture block.
(230) The processor 1501 is configured to separately determine an interframe motion vector of each first-type to-be-processed picture block and an interframe motion vector of each second-type to-be-processed picture block; and use the determined interframe motion vector of each first-type to-be-processed picture block included in each video frame in the each group of adjacent frames and the determined interframe motion vector of each second-type to-be-processed picture block included in each video frame in the each group of adjacent frames as the interframe motion vector of the each group of adjacent frames.
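The classification step described in paragraphs (229)-(230) can be sketched as follows. This is a minimal illustration, not the claimed implementation: the set-of-coordinates representation of a small-object region and all names are assumptions introduced for the example.

```python
def classify_block(block_pos, small_object_blocks):
    """Classify a to-be-processed picture block.

    block_pos: (x, y) coordinate of the block in the selected reference
        video frame (illustrative representation).
    small_object_blocks: set of (x, y) coordinates of picture blocks that
        were determined to belong to a small-object region in one of the
        preceding N video frames.

    Returns 'first-type' if the co-located reference picture block lies in
    a previously determined small-object region, else 'second-type'.
    """
    return "first-type" if block_pos in small_object_blocks else "second-type"
```

The interframe motion vector of each block is then determined by the branch matching its type, and the per-type results together form the interframe motion vector of the group of adjacent frames.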
(231) Preferably, the processor 1501 is configured to determine a value of a dissimilarity between each candidate motion vector corresponding to the first-type to-be-processed picture block and a background motion vector of a video frame in which the first-type to-be-processed picture block is located; assign a corresponding weight to each candidate motion vector according to the determined value of the dissimilarity corresponding to each candidate motion vector by using a rule that a smaller weight is assigned to a larger value of the dissimilarity; and determine the interframe motion vector of the first-type to-be-processed picture block according to the weight assigned to each candidate motion vector and a SAD value of pixels of a picture block pointed to by each candidate motion vector and pixels of the first-type to-be-processed picture block.
(232) Preferably, the processor 1501 is configured to select at least one preceding continuous video frame of the current video frame.
(233) Preferably, the processor 1501 is configured to, for each candidate motion vector, determine a product of the weight assigned to the candidate motion vector and the SAD value of the pixels of the picture block pointed to by the candidate motion vector and the pixels of the first-type to-be-processed picture block, and use a candidate motion vector with a smallest product as the interframe motion vector of the first-type to-be-processed picture block.
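The weighted selection of paragraphs (231) and (233) amounts to choosing the candidate motion vector that minimizes weight × SAD, where the weight shrinks as the candidate's dissimilarity from the background motion vector grows. A minimal sketch follows; the specific weight function and the use of precomputed SAD values are illustrative assumptions, as the text does not prescribe either.

```python
def select_interframe_mv(candidates, sads, background_mv, alpha=1.0):
    """Select the interframe motion vector of a first-type block.

    candidates: list of (dx, dy) candidate motion vectors.
    sads: SAD value of the pixels of the picture block pointed to by each
        candidate against the pixels of the to-be-processed block
        (same order as candidates; assumed precomputed).
    background_mv: background motion vector of the frame containing the block.

    A larger dissimilarity from the background motion vector yields a
    smaller weight, so candidates deviating from the background (as a
    small moving object would) are favored. The candidate with the
    smallest product weight * SAD is returned.
    """
    def weight(mv):
        # Dissimilarity measured here as L1 distance (illustrative choice).
        dissim = abs(mv[0] - background_mv[0]) + abs(mv[1] - background_mv[1])
        return 1.0 / (1.0 + alpha * dissim)

    scores = [weight(mv) * sad for mv, sad in zip(candidates, sads)]
    return candidates[scores.index(min(scores))]
```

With equal SAD values, the candidate farther from the background motion vector wins; a candidate deviating from the background is only rejected when its SAD penalty outweighs its weight advantage.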
(234) Preferably, the processor 1501 is configured to determine, in each reference frame of the current video frame and according to the interframe motion vector of the each group of adjacent frames in the multiple video frames that include the current video frame and the reference frame of the current video frame, a matching block corresponding to each picture block included in the current video frame; determine, in each reference frame, a nearby block near the matching block, and determine an interframe motion vector of each nearby block determined in each reference frame; for each picture block included in the current video frame, determine a value of a similarity between the interframe motion vector of each nearby block determined for the picture block and an interframe motion vector of the picture block, and determine a value of a dissimilarity between the interframe motion vector and a global motion vector that are of each nearby block; and determine, according to the determined value of the similarity and the determined value of the dissimilarity, a picture block included in the candidate small-object region in the current video frame, where each picture block included in the candidate small-object region meets the following: in multiple nearby blocks that are determined for the picture block and that are included in each reference frame corresponding to the current video frame, there are a first set quantity of nearby blocks whose values of similarities are all greater than or equal to a first threshold and there are a second set quantity of nearby blocks whose values of dissimilarities are all greater than or equal to a second threshold.
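The per-block condition in paragraph (234) can be sketched as a simple counting test over the nearby blocks found in the reference frames. The similarity and dissimilarity measures below are illustrative assumptions (the text leaves them unspecified), as are all parameter names.

```python
def is_candidate_small_object_block(nearby_mvs, block_mv, global_mv,
                                    sim_thr, dissim_thr, q1, q2):
    """Test whether a picture block belongs to the candidate small-object region.

    nearby_mvs: interframe motion vectors of the nearby blocks determined
        for this picture block across the reference frames.
    block_mv: interframe motion vector of the picture block itself.
    global_mv: global motion vector of the region containing the nearby blocks.

    The block qualifies when at least q1 nearby blocks are similar to it
    (similarity >= sim_thr, the first threshold) and at least q2 nearby
    blocks differ from the global motion (dissimilarity >= dissim_thr,
    the second threshold).
    """
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    # Similarity decays with distance between motion vectors (illustrative).
    sim_count = sum(1 for mv in nearby_mvs
                    if 1.0 / (1.0 + dist(mv, block_mv)) >= sim_thr)
    dissim_count = sum(1 for mv in nearby_mvs
                       if dist(mv, global_mv) >= dissim_thr)
    return sim_count >= q1 and dissim_count >= q2
```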
(235) Preferably, the processor 1501 is further configured to mark, before filtering is performed on the candidate small-object region in the current video frame, a specific marker on each picture block included in the candidate small-object region.
(236) The processor 1501 is configured to, for each picture block included in the candidate small-object region in the current video frame, determine a value of a first quantity of picture blocks that are marked with the specific marker and that are in a first set range in a horizontal direction of the picture block, and a value of a second quantity of picture blocks that are marked with the specific marker and that are in a second set range in a vertical direction of the picture block; remove the specific marker of the picture block when the determined value of the first quantity or the determined value of the second quantity is greater than a third threshold; determine a value of a third quantity of picture blocks that are marked with the specific marker and that are in a third set range around the picture block; remove the specific marker of the picture block when the determined value of the third quantity is less than a fourth threshold; and determine the picture block that is marked with the specific marker and that is in the current video frame as the small-object region in the current video frame; where the third set range is smaller than the first set range and the second set range, and the fourth threshold is less than the third threshold.
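The filtering of paragraph (236) removes a block's marker in two cases: too many marked blocks along its row or column (the region is too elongated to be a small object), or too few marked blocks in a small surrounding neighborhood (the marker is isolated noise). A minimal sketch, assuming the marked region is represented as a set of block coordinates and that all parameter names are illustrative:

```python
def filter_candidate_region(marked, h_range, v_range, big_thr,
                            small_range, small_thr):
    """Filter the candidate small-object region.

    marked: set of (x, y) coordinates of blocks carrying the specific marker.
    h_range / v_range: first and second set ranges (horizontal / vertical).
    big_thr: third threshold; a marker is removed when more than big_thr
        marked blocks lie within h_range horizontally or v_range vertically.
    small_range / small_thr: third set range and fourth threshold; a marker
        is removed when fewer than small_thr marked blocks lie in the
        small_range neighborhood around the block.

    Returns the set of blocks that keep their marker, i.e. the
    small-object region.
    """
    kept = set()
    for (x, y) in marked:
        n_h = sum(1 for dx in range(-h_range, h_range + 1)
                  if dx != 0 and (x + dx, y) in marked)
        n_v = sum(1 for dy in range(-v_range, v_range + 1)
                  if dy != 0 and (x, y + dy) in marked)
        if n_h > big_thr or n_v > big_thr:
            continue  # too elongated: remove the marker
        n_near = sum(1 for dx in range(-small_range, small_range + 1)
                     for dy in range(-small_range, small_range + 1)
                     if (dx, dy) != (0, 0) and (x + dx, y + dy) in marked)
        if n_near < small_thr:
            continue  # isolated: remove the marker
        kept.add((x, y))
    return kept
```

Consistent with the text, the third set range (small_range) is taken smaller than the first and second set ranges, and the fourth threshold (small_thr) is less than the third (big_thr).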
(238) During specific implementation, the processor 1601 may directly invoke a required vector or required information from the memory 1602; or the processor 1601 may send a vector or information acquiring instruction to the memory 1602, and the memory 1602 sends, to the processor 1601, a vector or information requested in the instruction sent by the processor 1601.
(240) During specific implementation, the processor 1701 may directly invoke a required vector or required information from the memory 1702; or the processor 1701 may send a vector or information acquiring instruction to the memory 1702, and the memory 1702 sends, to the processor 1701, a vector or information requested in the instruction sent by the processor 1701.
(241) Preferably, the processor 1701 is configured to execute the following for the to-be-processed picture block included in each video frame of the two consecutively adjacent video frames: selecting at least one video frame from the preceding N video frames of the two consecutively adjacent video frames; determining, according to the small-object region in the preceding N video frames of the two consecutively adjacent video frames, whether a reference picture block that is in the selected at least one video frame and that corresponds to the to-be-processed picture block is a picture block included in the small-object region; and if yes, determining that the to-be-processed picture block is a first-type to-be-processed picture block; otherwise, determining that the to-be-processed picture block is a second-type to-be-processed picture block.
(242) Preferably, the processor 1701 is configured to determine a value of a dissimilarity between each candidate motion vector corresponding to the first-type to-be-processed picture block and a background motion vector of a video frame in which the first-type to-be-processed picture block is located; assign a corresponding weight to each candidate motion vector according to the determined value of the dissimilarity corresponding to each candidate motion vector by using a rule that a smaller weight is assigned to a larger value of the dissimilarity; and determine the interframe motion vector of the first-type to-be-processed picture block according to the weight assigned to each candidate motion vector and a SAD value of pixels of a picture block pointed to by each candidate motion vector and pixels of the first-type to-be-processed picture block.
(243) Preferably, the processor 1701 is configured to select at least one preceding continuous video frame of the two consecutively adjacent video frames.
(244) Preferably, the processor 1701 is configured to, for each candidate motion vector, determine a product of the weight assigned to the candidate motion vector and the SAD value of the pixels of the picture block pointed to by the candidate motion vector and the pixels of the first-type to-be-processed picture block, and use a candidate motion vector with a smallest product as the interframe motion vector of the first-type to-be-processed picture block.
(245) Persons skilled in the art should understand that the embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Therefore, the present disclosure may use a form of hardware-only embodiments, software-only embodiments, or embodiments combining software and hardware. Moreover, the present disclosure may use a form of a computer program product that is implemented on one or more computer-usable storage media (including but not limited to a disk memory, a compact disc read-only memory (CD-ROM), an optical memory, and the like) that include computer-usable program code.
(246) The present disclosure is described with reference to the flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to the embodiments of the present disclosure. It should be understood that computer program instructions may be used to implement each process and/or each block in the flowcharts and/or the block diagrams and a combination of a process and/or a block in the flowcharts and/or the block diagrams. These computer program instructions may be provided for a general-purpose computer, a dedicated computer, an embedded processor, or a processor of any other programmable data processing device to generate a machine, so that the instructions executed by a computer or a processor of any other programmable data processing device generate an apparatus for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
(247) These computer program instructions may also be stored in a computer readable memory that can instruct the computer or any other programmable data processing device to work in a specific manner, so that the instructions stored in the computer readable memory generate an artifact that includes an instruction apparatus. The instruction apparatus implements a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
(248) These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operations and steps are performed on the computer or the another programmable device, thereby generating computer-implemented processing. Therefore, the instructions executed on the computer or the another programmable device provide steps for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
(249) Although some exemplary embodiments of the present disclosure have been described, persons skilled in the art can make changes and modifications to these embodiments once they learn the basic inventive concept. Therefore, the following claims are intended to be construed to cover the exemplary embodiments and all changes and modifications falling within the scope of the present disclosure.
(250) Obviously, persons skilled in the art can make various modifications and variations to the embodiments of the present disclosure without departing from the spirit and scope of the embodiments of the present disclosure. The present disclosure is intended to cover these modifications and variations provided that they fall within the scope of protection defined by the following claims and their equivalent technologies.