METHODS FOR CONTROLLING VIDEO DECODER TO SELECTIVELY SKIP ONE OR MORE VIDEO FRAMES AND RELATED SIGNAL PROCESSING APPARATUSES THEREOF
20170332079 · 2017-11-16
Assignee
Inventors
CPC classification
H04N21/4424 · H04N19/132 · H04N21/440281 · H04N19/44 · H04N21/44004 (all ELECTRICITY)
International classification
H04N19/132 · H04N19/44 · H04N21/44 (all ELECTRICITY)
Abstract
An exemplary method for processing an input bitstream having a plurality of video frames includes the following steps: deriving an indication data from decoding of a current video frame, and controlling a video decoder to decode or skip a next video frame by referring to at least the indication data and a video decoder capability of the video decoder. A signal processing apparatus for processing an input bitstream including a plurality of video frames includes a video decoder, an indication data estimating unit, and a controller. The video decoder is arranged to decode a current video frame. The indication data estimating unit is for deriving an indication data from decoding of the current video frame. The controller is for controlling the video decoder to decode or skip a next video frame by referring to at least the indication data and a video decoder capability of the video decoder.
Claims
1. A method for processing an input bitstream including a plurality of video frames, the method comprising: deriving an indication data from a bitstream of a current video frame before the current video frame is decoded or skipped; and controlling a video decoder to decode or skip the current video frame by referring to at least the indication data, wherein the indication data is generated based on a value of a bitstream length of the current video frame.
2. The method of claim 1, wherein the indication data include information indicative of complexity of the current video frame relative to previous video frame(s).
3. The method of claim 1, wherein the step of deriving the indication data comprises: reading the bitstream length of the current video frame from a frame header included in the bitstream of the current video frame; and generating the indication data based on the value of the bitstream length of the current video frame.
4. The method of claim 3, wherein the step of generating the indication data comprises: calculating a weighted average value of the bitstream length of the current video frame and a historical average value derived from the previous video frames; and determining the indication data based on the value of the bitstream length of the current video frame and the weighted average value.
5. The method of claim 4, wherein the weighted average value is calculated based on:
L.sub.Tn=α′×L.sub.Tn-1+(1−α′)×L.sub.Fn, wherein L.sub.Tn is the weighted average value, α′ is a weighting factor, L.sub.Tn-1 is the historical average value of bitstream lengths of previous video frames, and L.sub.Fn is the bitstream length of the current video frame.
6. The method of claim 5, wherein the indication data is generated based on:
S2=L.sub.Fn/L.sub.Tn, wherein S2 is the indication data, L.sub.Fn is the bitstream length of the current video frame, and L.sub.Tn is the weighted average value.
7. The method of claim 1, wherein the step of controlling the video decoder to decode or skip the current video frame comprises: controlling the video decoder to decode or skip the current video frame based on the indication data and a video decoder capability of the video decoder.
8. The method of claim 7, wherein the step of controlling the video decoder to decode or skip the current video frame comprises: determining a decision threshold based on at least the video decoder capability of the video decoder; and controlling the video decoder to decode or skip the current video frame based on a comparison result derived from the indication data and the decision threshold.
9. The method of claim 8, wherein the step of determining the decision threshold comprises: setting the decision threshold based on at least a status of a video frame buffer utilized for buffering decoded video frames generated from decoding video frames.
10. The method of claim 9, wherein the step of setting the decision threshold comprises: setting the decision threshold based on the status of the video frame buffer and a frame type of the current video frame.
11. The method of claim 8, wherein the step of determining the decision threshold comprises: setting the decision threshold based on at least a ratio between a video decoder frame rate and an input video frame rate.
12. The method of claim 11, wherein the step of setting the decision threshold comprises: setting the decision threshold based on the ratio and a frame type of the current video frame.
13. The method of claim 8, further comprising: when the video decoder capability of the video decoder is different from an expected video decoder capability, adjusting the decision threshold.
14. The method of claim 1, wherein when the current video frame is skipped by the video decoder: if the current video frame is a predictive frame (P-frame), a decoded video frame generated from decoding a video frame preceding the current video frame is displayed again during a period in which a decoded video frame generated from decoding the current video frame is originally displayed; if the current video frame is a B-frame, a decoded video frame generated from decoding a video frame following the current video frame or a decoded video frame generated from decoding the current video frame is displayed during a period in which the decoded video frame generated from decoding the current video frame is originally displayed; or a video playback associated with the current video frame is skipped.
15. The method of claim 1, wherein the value of the bitstream length of the current video frame is a numerical value.
16. A signal processing apparatus comprising: processing circuitry including: a video decoder; an indication data estimator that derives an indication data from a bitstream of a current video frame before the current video frame is decoded or skipped; and a controller that controls the video decoder to decode or skip the current video frame by referring to at least the indication data, wherein the indication data is generated based on a value of a bitstream length of the current video frame.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0025] Certain terms are used throughout the description and following claims to refer to particular components. As one skilled in the art will appreciate, manufacturers may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following description and in the claims, the terms “include” and “comprise” are used in an open-ended fashion, and thus should be interpreted to mean “include, but not limited to . . . ”. Also, the term “couple” is intended to mean either an indirect or direct electrical connection. Accordingly, if one device is coupled to another device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections.
[0027] Please refer to
[0028] Step 202: Decode a current video frame.
[0029] Step 204: Gather statistics of specific video characteristics obtained from decoding of the current video frame.
[0030] Step 206: Generate an indication data according to the gathered statistics of specific video characteristics.
[0031] Step 208: Determine a decision threshold according to at least the video decoder capability of the video decoder.
[0032] Step 210: Compare the indication data with the decision threshold and accordingly generate a comparison result.
[0033] Step 212: Control a video decoder to decode or skip the next video frame according to the comparison result.
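The flow of steps 202-212 can be sketched as a simple control loop. This is a minimal illustration, assuming a per-frame scalar "complexity" stands in for the gathered statistics and a fixed decision threshold; the function, its arguments, and the weighting value are hypothetical names, not from the patent.

```python
def control_loop(frame_stats, threshold, alpha=0.9):
    """For each decoded frame, derive indication data S1 from its statistics
    and decide whether the *next* frame should be decoded or skipped."""
    history = None          # historical weighted average of the statistics
    decisions = []
    for complexity in frame_stats:
        # Steps 204/206: update the weighted average and form S1
        history = complexity if history is None else (
            alpha * history + (1 - alpha) * complexity)
        s1 = complexity / history
        # Steps 208-212: compare S1 with the threshold; skip the next
        # frame when the current frame's relative complexity is low
        decisions.append("skip" if s1 < threshold else "decode")
    return decisions
```

A frame whose statistics drop well below the running average (the third frame below) yields S1 under the threshold, so the following frame is marked for skipping.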
[0034] In this exemplary embodiment, the indication data estimating unit 104 obtains the indication data S1 by performing steps 204 and 206. For example, the indication data estimating unit 104 generates the indication data S1 by calculating an accumulation value of the specific video characteristics corresponding to the current video frame F.sub.n decoded by the video decoder 102, calculating a weighted average value of the accumulation value and a historical average value derived from the previous video frame(s), and determining the indication data S1 according to the accumulation value and the weighted average value. By way of example, but not limitation, the specific video characteristics used for determining the indication data may be motion vectors, or discrete cosine transform (DCT) coefficients, or macroblock types (partition sizes and partition types). In one exemplary implementation, the indication data S1 transmitted to the controller 106 may be a value indicative of a ratio between the accumulation value and the weighted average value. In another exemplary implementation, the indication data S1 transmitted to the controller 106 may include the accumulation value and the weighted average value.
[0035] In a case where motion vectors obtained during the decoding of the current video frame F.sub.n are used for determining the indication data S1, the indication data estimating unit 104 obtains an accumulated motion vector MV.sub.Fn of the current video frame F.sub.n according to formula (1).
[0036] In above formula (1), BlockNum represents the total number of blocks in the current video frame F.sub.n, and MV.sub.x,b and MV.sub.y,b represent motion vectors of x-dimension and y-dimension of a block indexed by a block index value b, respectively. It should be noted that an intra-coded block may be regarded as having infinitely large motion vectors in some embodiments. Thus, MV.sub.x,b and MV.sub.y,b are directly assigned predetermined values (e.g., |MV.sub.x,b|=|MV.sub.y,b|=max MV) when a block indexed by a block index value b is an intra-coded block.
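Since formula (1) itself is not reproduced above, the sketch below assumes one plausible reading: the accumulation value sums |MV.sub.x,b| + |MV.sub.y,b| over all blocks, with intra-coded blocks contributing the predetermined maximum in both dimensions. `MAX_MV` and the block representation are hypothetical.

```python
MAX_MV = 1 << 10   # assumed stand-in for "infinitely large" motion vectors

def accumulated_mv(blocks):
    """Accumulate |MV_x,b| + |MV_y,b| over all blocks of the current frame.

    Each block is a (mv_x, mv_y) pair, or the string "intra" for an
    intra-coded block, which is assigned the predetermined maximum value
    in both dimensions as described for formula (1).
    """
    total = 0
    for block in blocks:
        if block == "intra":
            total += 2 * MAX_MV
        else:
            mv_x, mv_y = block
            total += abs(mv_x) + abs(mv_y)
    return total
```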
[0037] After the accumulation value MV.sub.Fn is obtained, the indication data estimating unit 104 calculates a weighted average value MV.sub.Tn of the accumulation value MV.sub.Fn and a historical average value MV.sub.Tn-1 derived from the previous video frame(s) according to formula (2):
MV.sub.Tn=α×MV.sub.Tn-1+(1−α)×MV.sub.Fn (2)
[0038] In above formula (2), α represents a weighting factor. The historical average value MV.sub.Tn-1 is derived from accumulated motion vectors of the previous video frames.
[0039] Next, the indication data estimating unit 104 determines the indication data S1 according to the accumulation value MV.sub.Fn and the weighted average value MV.sub.Tn based on formula (3):
S1=MV.sub.Fn/MV.sub.Tn (3)
[0040] As can be seen from formula (3), the indication data S1 may be regarded as a comparison result of comparing the statistics of motion vectors of the current decoded video frame with the historical statistics of motion vectors of previous decoded video frame(s). In a case where each of the video frames included in the input bitstream S_IN has the same number of blocks, the indication data S1 is equivalent to a ratio of an average motion vector of the current video frame to an average motion vector in the time domain (i.e., a moving average of motion vectors of previous video frames).
[0041] The controller 106 controls the video decoder 102 to decode or skip the next video frame F.sub.n+1 by performing steps 208-212. In this exemplary embodiment, the controller 106 further determines a decision threshold R according to at least the video decoder capability of the video decoder 102, and controls the video decoder 102 to decode or skip the next video frame F.sub.n+1 according to a comparison result derived from the indication data S1 and the decision threshold R. For example, the controller 106 compares the indication data S1 with the decision threshold R and accordingly generates a comparison result, and controls the video decoder 102 to decode or skip the next video frame F.sub.n+1 according to the comparison result.
[0042] Certain factors/parameters may reflect the video decoder capability of the video decoder 102. For example, the controller 106 may set the decision threshold R according to at least a ratio between a video decoder frame rate R1 and an input video frame rate R2 (e.g., R1/R2).
Please refer to
[0043] Step 302: Check if the indication data S1 is smaller than the decision threshold R. If yes, go to step 304; otherwise, go to step 312.
[0044] Step 304: Control the video decoder 102 to skip the next video frame F.sub.n+1.
[0045] Step 306: Check if the video decoder capability of the video decoder 102 does not match (e.g., lower than) an expected video decoder capability. If yes, go to step 308; otherwise, go to step 310.
[0046] Step 308: Adjust the decision threshold R referenced for determining whether to decode or skip a video frame F.sub.n+3.
[0047] Step 310: Set the video frame F.sub.n+2 following the next video frame F.sub.n+1 as a current video frame to be decoded. Go to step 204.
[0048] Step 312: Control the video decoder 102 to decode the next video frame F.sub.n+1.
[0049] Step 314: Check if the video decoder capability of the video decoder 102 does not match (e.g., higher than) the expected video decoder capability. If yes, go to step 316; otherwise, go to step 318.
[0050] Step 316: Adjust the decision threshold R referenced for determining whether to decode or skip a video frame F.sub.n+2 following the next video frame F.sub.n+1.
[0051] Step 318: Set the next video frame F.sub.n+1 as a current video frame to be decoded. Go to step 204.
[0052] It should be noted that the decision threshold R is set by an initial value R.sub.ini corresponding to an expected video decoder capability of the video decoder 102. For example, the expected decoder frame rate R1.sub.exp and the expected input video frame rate R2.sub.exp are known in advance, and the decision threshold R would be initialized by the ratio between the expected decoder frame rate R1.sub.exp and the expected input video frame rate R2.sub.exp (e.g., R.sub.ini=R1.sub.exp/R2.sub.exp) or a value proportional to this ratio. Thus, when the video decoder 102 is dealing with the first video frame F.sub.0 of the input bitstream S_IN, the decision threshold R set by the initial value R.sub.ini would be used in step 302. In addition, the decision threshold R may be adaptively/dynamically updated in the following procedure for dealing with subsequent video frames (step 308/316).
[0053] When the indication data S1 (e.g., MV.sub.Fn/MV.sub.Tn) is found smaller than the current decision threshold R, it implies that the complexity of the current video frame F.sub.n relative to previous video frames F.sub.0-F.sub.n−1 is low. There is a high possibility that the complexity of the next video frame F.sub.n+1 relative to previous video frames F.sub.0-F.sub.n is also low. Based on such assumption, the controller 106 judges that decoding of the next video frame F.sub.n+1 is allowed to be skipped when the indication data S1 is found smaller than the current decision threshold R (steps 302 and 304). On the other hand, the controller 106 judges that decoding of the next video frame F.sub.n+1 should be performed when the indication data S1 is not smaller than the current decision threshold R (steps 302 and 312).
[0054] As mentioned above, the decision threshold R may be adaptively updated in this exemplary embodiment. In step 306, it is checked to see if the video decoder capability of the video decoder 102 is lower than the expected video decoder capability. For example, the ratio of the actual decoder frame rate R1.sub.act to the actual input video frame rate R2.sub.act (i.e., the ratio of the number of decoded video frames to the number of input video frames) is compared with the ratio of the expected decoder frame rate R1.sub.exp to the expected input video frame rate R2.sub.exp. When R1.sub.act/R2.sub.act is smaller than R1.sub.exp/R2.sub.exp, it implies that too many frames are skipped due to the decision threshold R being higher than what is actually needed. Thus, the decision threshold R will be decreased to make the subsequent video frame tend to be decoded. On the other hand, when R1.sub.act/R2.sub.act is not smaller than R1.sub.exp/R2.sub.exp, no adjustment is made to the current decision threshold R. The operations of steps 306 and 308 can be expressed as follows.
[0055] In above formulas (4) and (5), β.sub.1 is a scaling factor between 0 and 1 (i.e., 0<β.sub.1<1).
[0056] In step 314, it is checked to see if the video decoder capability of the video decoder 102 is higher than the expected video decoder capability. For example, the ratio of the actual decoder frame rate R1.sub.act to the actual input video frame rate R2.sub.act (i.e., the ratio of the number of decoded video frames to the number of input video frames) is compared with the ratio of the expected decoder frame rate R1.sub.exp to the expected input video frame rate R2.sub.exp. When R1.sub.act/R2.sub.act exceeds R1.sub.exp/R2.sub.exp, it implies that too many frames are decoded due to the current decision threshold R being lower than what is actually needed. Thus, the decision threshold R will be increased to make the video frame tend to be skipped. On the other hand, when R1.sub.act/R2.sub.act does not exceed R1.sub.exp/R2.sub.exp, no adjustment is made to the current decision threshold R. The operations of steps 314 and 316 can be expressed as follows.
[0057] In above formulas (6) and (7), β.sub.2 is a scaling factor between 0 and 1 (i.e., 0<β.sub.2<1). It should be noted that the scaling factor β.sub.1 may be equal to or different from the scaling factor β.sub.2, depending upon actual design consideration.
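The threshold adjustments of steps 306/308 and 314/316 can be sketched as below. The multiplicative form (R scaled by β.sub.1 to decrease, divided by β.sub.2 to increase) is an assumption; the text states only the direction of each adjustment and that both scaling factors lie between 0 and 1.

```python
def update_threshold(r, decoded_frames, input_frames, expected_ratio,
                     beta1=0.5, beta2=0.5):
    """Adapt the decision threshold R to the measured decoder capability.

    decoded_frames / input_frames approximates R1_act / R2_act. When it
    falls below the expected ratio R1_exp / R2_exp, too many frames were
    skipped and R is decreased (steps 306/308); when it exceeds the
    expected ratio, too many frames were decoded and R is increased
    (steps 314/316). The exact scaling is an illustrative assumption.
    """
    actual_ratio = decoded_frames / input_frames
    if actual_ratio < expected_ratio:
        return beta1 * r        # decrease R: subsequent frames tend to decode
    if actual_ratio > expected_ratio:
        return r / beta2        # increase R: subsequent frames tend to skip
    return r                    # capability matches expectation: no change
```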
[0058] The decision threshold R may be adaptively updated according to above formulas (3)-(7) for better video decoding performance. However, this is for illustrative purposes only, and is not meant to be a limitation of the present invention. That is, the spirit of the present invention is obeyed as long as the video decoder capability of the video decoder is referenced for determining the decision threshold R.
[0059] The video frames of the input bitstream S_IN include intra-coded frames (I-frames), predictive frames (P-frames), and bi-directional predictive frames (B-frames). In general, I-frames are the least compressible but do not require other video frames to decode, P-frames can use data from previous frames to decompress and are more compressible than I-frames, and B-frames can use both previous and following frames for data reference to achieve the highest amount of data compression. Therefore, skipping/dropping a B-frame is preferable to skipping/dropping a P-frame, and skipping/dropping a P-frame is preferable to skipping/dropping an I-frame. In an alternative design, the decision thresholds are set or adaptively updated for different frame types, respectively. That is, the controller 106 is arranged to set the decision threshold R according to the ratio between the video decoder frame rate and the input video frame rate and a frame type of the next video frame. By way of example, but not limitation, decision thresholds R_I, R_P, and R_B for I-frame, P-frame, and B-frame may have the following exemplary relationship.
R_I<<R_P<R_B (8)
[0060] Under a condition where the decision thresholds R_I, R_P, and R_B are properly configured to guarantee that the above exemplary relationship is met, the aforementioned scaling factor β.sub.1/β.sub.2 for one frame type may be different from that for another frame type. For example, scaling factors β.sub.1.sub._I/β.sub.2.sub._I, β.sub.1.sub._P/β.sub.2.sub._P, and β.sub.1.sub._B/β.sub.2.sub._B for I-frame, P-frame, and B-frame may have the following exemplary relationship. However, this is for illustrative purposes only, and is not meant to be a limitation of the present invention.
β.sub.1.sub._I<β.sub.1.sub._P<β.sub.1.sub._B (9)
β.sub.2.sub._I>β.sub.2.sub._P>β.sub.2.sub._B (10)
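Relationship (8) can be realized with per-type thresholds, for example as below; the concrete numbers are illustrative assumptions chosen only to satisfy R_I << R_P < R_B.

```python
# Hypothetical per-frame-type decision thresholds satisfying R_I << R_P < R_B:
# B-frames are skipped most readily, I-frames almost never.
THRESHOLDS = {"I": 0.05, "P": 0.7, "B": 0.9}

def should_skip(s1, frame_type):
    """Skip a frame when its indication data falls below the threshold
    associated with its frame type."""
    return s1 < THRESHOLDS[frame_type]
```

With the same indication data, a B-frame may be skipped while an I-frame is still decoded, matching the stated preference order for dropping frames.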
[0061] In addition to the aforementioned ratio between a video decoder frame rate and an input video frame rate, the video decoder capability of the video decoder 102 may be reflected by other factors/parameters. For example, the signal processing apparatus 100 may include the video frame buffer 108 acting as a display queue for buffering decoded video frames generated from the video decoder 102. Thus, a video driving circuit (not shown) may drive a display apparatus (not shown) according to the decoded video frames buffered in the video frame buffer 108 for video playback. In an alternative exemplary embodiment, the controller 106 may set the decision threshold R according to at least a status of the video frame buffer 108. As the number of decoded video frames buffered in the video frame buffer 108 is positively correlated to the video decoder capability, the status of the video frame buffer 108 may be referenced to properly set the decision threshold R used for determining whether the next video frame F.sub.n+1 should be decoded or skipped.
[0062] Please refer to
[0063] Step 402: Check if the indication data S1 is smaller than the decision threshold R(k). If yes, go to step 404; otherwise, go to step 408.
[0064] Step 404: Control the video decoder 102 to skip the next video frame F.sub.n+1.
[0065] Step 406: Set the video frame F.sub.n+2 following the next video frame F.sub.n+1 as a current video frame to be decoded. Go to step 204.
[0066] Step 408: Control the video decoder 102 to decode the next video frame F.sub.n+1.
[0067] Step 410: Set the next video frame F.sub.n+1 as a current video frame to be decoded. Go to step 204.
[0068] It should be noted that the decision threshold R(k) may be a function of the total number of decoded video frames in the video frame buffer 108. For example, the decision threshold R(k) may be set using following formulas.
[0069] In above formulas (11)-(13), e represents the base of the natural logarithm, A and B are predetermined coefficients, k represents the total number of decoded video frames available in the video frame buffer 108, and j represents a predetermined tendency switch point. Please refer to
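Formulas (11)-(13) are not reproduced above, but the description (the natural base e, predetermined coefficients A and B, buffer occupancy k, and a tendency switch point j) suggests a threshold that stays low while the buffer is underfilled and rises once occupancy passes j. The logistic shape below is purely an assumed illustration of that behavior.

```python
import math

def threshold_from_buffer(k, a=1.0, b=1.0, j=4):
    """Hypothetical R(k): low below the switch point j (tend to decode and
    fill the buffer), rising toward a above j (tend to skip once the buffer
    is healthy). a, b, and j stand in for the coefficients A, B, and the
    tendency switch point named in the text; the logistic form itself is
    an assumption, since formulas (11)-(13) are not reproduced here."""
    return a / (1.0 + math.exp(-b * (k - j)))
```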
[0070] When the indication data S1 (e.g., MV.sub.Fn/MV.sub.Tn) is found smaller than the current decision threshold R(k), it implies that the complexity of the current video frame F.sub.n relative to previous video frames F.sub.0-F.sub.n−1 is low. There is a high possibility that the complexity of the next video frame F.sub.n+1 relative to previous video frames F.sub.0-F.sub.n is also low. Based on such assumption, the controller 106 judges that decoding of the next video frame F.sub.n+1 is allowed to be skipped when the indication data S1 is found smaller than the current decision threshold R(k) (step 404). On the other hand, the controller 106 judges that decoding of the next video frame F.sub.n+1 should be performed when the indication data S1 is not smaller than the current decision threshold R(k) (step 408).
[0071] The decision threshold R(k) may be adaptively updated according to above formulas (11)-(13) for better video decoding performance. However, this is for illustrative purposes only, and is not meant to be a limitation of the present invention. That is, the spirit of the present invention is obeyed as long as the video decoder capability of the video decoder is referenced for determining the decision threshold R(k).
[0072] In an alternative design, the decision thresholds may be set or adaptively updated for different frame types, respectively. That is, the controller 106 sets the decision threshold R(k) according to the status of the video frame buffer 108 and a frame type of the next video frame F.sub.n+1. By way of example, but not limitation, the aforementioned threshold functions (i.e., formulas (11)-(13)) for one frame type are different from that for another frame type.
[0073] As mentioned above, the specific video characteristics used for determining the indication data may be DCT coefficients or macroblock types. Therefore, the aforementioned formula (1) can be modified to accumulate the DCT coefficients, instead of motion vectors, of the current video frame F.sub.n when the specific video characteristics are DCT coefficients. The larger the accumulation value of the DCT coefficients of the current video frame F.sub.n, the higher the complexity of the current video frame relative to previous video frame(s). Similarly, the aforementioned formula (1) can be modified to count intra-coded blocks in the current video frame F.sub.n when the specific video characteristics are macroblock types. The larger the number of intra-coded blocks in the current video frame F.sub.n, the higher the complexity of the current video frame relative to previous video frame(s). In addition, when the specific video characteristics used for determining the indication data are DCT coefficients/macroblock types, the aforementioned formula (2) can be modified to calculate a weighted average value, and the aforementioned formula (3) can be modified to obtain the desired indication data S1. As a person skilled in the art can readily understand details of calculating the indication data according to the specific video characteristics being DCT coefficients/macroblock types after reading above paragraphs directed to calculating the indication data S1 according to the specific video characteristics being motion vectors, further description is omitted here for brevity.
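The two variants described above can be sketched directly; the input representations are hypothetical.

```python
def complexity_from_dct(dct_coeffs):
    """Variant of formula (1) for DCT coefficients: accumulate magnitudes.
    A larger sum indicates higher complexity relative to previous frames."""
    return sum(abs(c) for c in dct_coeffs)

def complexity_from_mb_types(mb_types):
    """Variant of formula (1) for macroblock types: count intra-coded
    blocks, whose number also rises with frame complexity."""
    return sum(1 for t in mb_types if t == "intra")
```

Either accumulation value can then feed the weighted-average and ratio steps of formulas (2) and (3) unchanged.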
[0075] Please refer to
[0076] Step 702: Read a specific parameter from a frame header included in a bitstream of a current video frame.
[0077] Step 704: Generate indication data according to the specific parameter.
[0078] Step 706: Determine a decision threshold according to at least the video decoder capability of a video decoder.
[0079] Step 708: Compare the indication data with the decision threshold and accordingly generate a comparison result.
[0080] Step 710: Control the video decoder to decode or skip the current video frame according to the comparison result.
[0081] In this exemplary embodiment, the indication data estimating unit 604 obtains the indication data S2 by performing steps 702 and 704. More specifically, the indication data estimating unit 604 generates the indication data S2 by calculating a weighted average value of the specific parameter and a historical average value derived from previous video frame(s), and determines the indication data S2 according to the specific parameter and the weighted average value. In one exemplary implementation, the indication data S2 transmitted to the controller 606 may be a value indicative of a ratio between the specific parameter and the weighted average value. In another exemplary implementation, the indication data S2 transmitted to the controller 606 may include the specific parameter and the weighted average value.
[0082] By way of example, but not limitation, the specific parameter used for determining the indication data may be a bitstream length/frame length of the current video frame F.sub.n. Therefore, after the bitstream length L.sub.Fn of the current video frame F.sub.n is read from the frame header, the indication data estimating unit 604 calculates a weighted average value L.sub.Tn of the bitstream length L.sub.Fn and a historical average value L.sub.Tn-1 derived from previous video frame(s) according to formula (14):
L.sub.Tn=α′×L.sub.Tn-1+(1−α′)×L.sub.Fn (14)
[0083] In above formula (14), α′ represents a weighting factor. The historical average value L.sub.Tn-1 is derived from bitstream lengths of the previous video frames.
[0084] Next, the indication data estimating unit 604 determines the indication data S2 according to the bitstream length L.sub.Fn and the weighted average value L.sub.Tn based on formula (15):
S2=L.sub.Fn/L.sub.Tn (15)
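The per-frame computation of claims 5 and 6 (weighted average of bitstream lengths, then S2 as the length-to-average ratio) can be applied before any decoding happens, since the bitstream length comes straight from the frame header. The sketch below seeds the history with the first frame's length, which is an assumption.

```python
def indication_from_lengths(frame_lengths, alpha_prime=0.9):
    """Per claims 5-6: L_Tn = a' * L_Tn-1 + (1 - a') * L_Fn, then
    S2 = L_Fn / L_Tn, for each frame's bitstream length in order."""
    l_t = None
    s2_values = []
    for l_f in frame_lengths:
        l_t = l_f if l_t is None else alpha_prime * l_t + (1 - alpha_prime) * l_f
        s2_values.append(l_f / l_t)
    return s2_values
```

A frame much shorter than the running average yields S2 below 1, signaling low relative complexity before the frame is decoded or skipped.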
[0085] As can be seen from formula (15), the indication data S2 may be regarded as a result of comparing the bitstream length of the current video frame with the historical statistics of bitstream lengths of previous video frames. The controller 606 controls the video decoder 602 to decode or skip the current video frame F.sub.n by performing steps 706-710. Thus, the controller 606 decides whether the current video frame F.sub.n will be skipped or decoded by referring to the result of comparing the bitstream length of the current video frame with the historical statistics of bitstream lengths of previous video frames. In this exemplary embodiment, the controller 606 determines a decision threshold R′ according to at least the video decoder capability of the video decoder 602, and controls the video decoder 602 to decode or skip the current video frame F.sub.n according to a comparison result derived from the indication data S2 and the decision threshold R′. For example, the controller 606 compares the indication data S2 with the decision threshold R′ and accordingly generates a comparison result, and controls the video decoder 602 to decode or skip the current video frame F.sub.n according to the comparison result.
[0086] As mentioned above, certain factors/parameters may reflect the video decoder capability of the video decoder 602. For example, the controller 606 may set the decision threshold R′ according to a ratio between a video decoder frame rate R1 and an input video frame rate R2 (e.g., R1/R2), or set the decision threshold R′ according to a status of a video frame buffer 608 utilized for buffering decoded video frames generated from decoding video frames.
[0087] In an alternative design, the decision thresholds may be set or adaptively updated for different frame types, respectively. Therefore, the controller 606 sets the decision threshold R′ according to the ratio between the video decoder frame rate and the input video frame rate and a frame type of the current video frame F.sub.n, or sets the decision threshold R′ according to the status of the video frame buffer 608 and the frame type of the current video frame F.sub.n.
[0088] Please refer to
[0089] Step 802: Check if the indication data S2 is smaller than the decision threshold R′. If yes, go to step 804; otherwise, go to step 812.
[0090] Step 804: Control the video decoder 602 to skip the current video frame F.sub.n.
[0091] Step 806: Check if the video decoder capability of a video decoder 602 does not match (e.g., lower than) an expected video decoder capability. If yes, go to step 808; otherwise, go to step 810.
[0092] Step 808: Adjust the decision threshold R′ referenced for determining whether to decode or skip the next video frame F.sub.n+1.
[0093] Step 810: Set the next video frame F.sub.n+1 as a current video frame to be decoded. Go to step 702.
[0094] Step 812: Control the video decoder 602 to decode the current video frame F.sub.n.
[0095] Step 814: Check if the video decoder capability of the video decoder 602 does not match (e.g., higher than) the expected video decoder capability. If yes, go to step 816; otherwise, go to step 810.
[0096] Step 816: Adjust the decision threshold R′ referenced for determining whether to decode or skip the next video frame F.sub.n+1. Go to step 810.
[0097] Please refer to
[0098] Step 902: Check if the indication data S2 is smaller than the decision threshold R′(i). If yes, go to step 904; otherwise, go to step 908.
[0099] Step 904: Control the video decoder 602 to skip the current video frame F.sub.n.
[0100] Step 906: Set the next video frame F.sub.n+1 as a current video frame to be decoded. Go to step 702.
[0101] Step 908: Control the video decoder 602 to decode the current video frame F.sub.n. Go to step 906.
[0102] It should be noted that the aforementioned rules of determining the decision threshold R/R(k) may be employed for determining the decision threshold R′/R′(i). As a person skilled in the art can readily understand details of the steps in
[0103] In above exemplary embodiments, the indication data estimating unit 104/604 determines the indication data S1/S2 by the ratio between the accumulation value and the weighted average accumulation value/the ratio between the bitstream length and the weighted average value. However, in an alternative design, the indication data estimating unit 104/604 may output the indication data S1/S2, including the accumulation value and the weighted average accumulation value/the bitstream length and the weighted average value, to the following controller 106/606. Next, the controller 106/606 checks a comparison result derived from the indication data S1/S2 (which includes the accumulation value and the weighted average accumulation value/the bitstream length and the weighted average value) and the decision threshold R/R′ to thereby determine if the next video frame/the current video frame should be skipped or decoded. This also obeys the spirit of the present invention and falls within the scope of the present invention.
[0104] Consider a case where the controller 106/606 decides that a specific video frame (e.g., the next video frame in the aforementioned signal processing apparatus 100 or the current video frame in the aforementioned signal processing apparatus 600) should be skipped. In one exemplary design, if the skipped specific video frame is a P-frame or B-frame, the display apparatus may again display a decoded video frame generated from decoding a video frame preceding the specific video frame during the period in which a decoded video frame generated from decoding the specific video frame would originally be displayed. In another exemplary design, if the skipped specific video frame is a B-frame, the display apparatus may display a decoded video frame generated from decoding a video frame following the specific video frame during that same period. In yet another exemplary design, the display apparatus may directly skip the video playback associated with the specific video frame, thereby increasing the playback speed. This may be employed when a video playback delay occurs or a fast-forward operation is activated.
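The three display policies described above for a skipped frame can be summarized as follows. This is a minimal sketch under assumed names (display_substitute, the policy strings); the patent describes the behaviors but not any particular API.

```python
# Illustrative sketch of the three exemplary designs for handling the display
# slot of a skipped frame: repeat the preceding decoded frame (P- or B-frame),
# substitute the following decoded frame (B-frame only), or drop the slot
# entirely to increase playback speed.

def display_substitute(frame_type, policy):
    """Return what the display apparatus shows in the skipped frame's slot.

    frame_type: 'P' or 'B' (I-frames are assumed not skipped here).
    policy: 'repeat_previous', 'use_following', or 'drop_slot'.
    """
    if policy == "repeat_previous" and frame_type in ("P", "B"):
        return "previous_decoded_frame"   # re-display the preceding frame
    if policy == "use_following" and frame_type == "B":
        return "following_decoded_frame"  # a B-frame may borrow the next frame
    if policy == "drop_slot":
        return None                       # skip the playback slot entirely
    raise ValueError("policy not applicable to this frame type")
```

For example, a skipped B-frame under the second design yields the following decoded frame, while a skipped P-frame is only eligible for the first or third design.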
[0105]
[0106] Please refer to
[0107] In this exemplary embodiment, the controller 1006 may estimate a time period T between a video display time point TP1 of a decoded video frame of the video frame P.sub.3 preceding the video frame P.sub.4 and a video display time point TP2 of a decoded video frame corresponding to the particular video frame I.sub.n, and then adjust the original video display timestamp of each of the decoded video frames available in the video frame buffer 1008 according to the time period T. For example, the adjusted display time points of these decoded video frames in the video frame buffer 1008 may be evenly distributed within the time period T.
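The even redistribution of display time points within the time period T can be sketched as below. The function name and parameters are assumptions for illustration; the patent specifies only that the adjusted display time points of the buffered decoded frames may be evenly distributed between TP1 and TP2.

```python
# Minimal sketch: redistribute the display time points of the N decoded
# frames held in the video frame buffer evenly within the time period
# T = TP2 - TP1, where TP1 is the display time of the frame preceding the
# skipped run and TP2 is the display time of the frame I_n following it.

def redistribute_timestamps(tp1, tp2, num_buffered_frames):
    """Return evenly spaced display time points strictly between tp1 and tp2."""
    period = tp2 - tp1                        # the time period T
    step = period / (num_buffered_frames + 1)
    return [tp1 + step * (k + 1) for k in range(num_buffered_frames)]

# Example: 4 buffered frames between TP1 = 0 ms and TP2 = 100 ms are shown
# at 20, 40, 60 and 80 ms.
stamps = redistribute_timestamps(0.0, 100.0, 4)
```

Spacing the adjusted timestamps evenly keeps motion smooth across the gap while granting the decoder the extra decoding time period T′ mentioned below.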
[0108] Consider another case where the decoded video frame of the input video frame P.sub.3 has been outputted from the video frame buffer 1008 for video playback and the next input video frame P.sub.4 is not decoded yet. Therefore, the video frame buffer 1008 becomes empty, and the video playback and the audio playback would be out of synchronization. After the video frame buffer 1008 becomes empty (i.e., after the video playback and the audio playback are out of synchronization), the controller 1006 allows the video decoder 1002 to decode some input video frames (e.g., P.sub.4, I.sub.2, P.sub.5, and B.sub.1), and then controls the video decoder 1002 to skip the following video frames P.sub.6-P.sub.m for re-synchronizing the video playback and the audio playback. In other words, due to the frame skipping action, the video decoder 1002 will start to decode the particular video frame I.sub.n immediately after the decoding of the input video frame B.sub.1 is accomplished. The particular video frame I.sub.n may be an I-frame closest to the latest video frame B.sub.1 decoded by the video decoder 1002. However, in an alternative design, the skipped part of the video frames may have one or more I-frames included therein. Similarly, the controller 1006 may estimate a time period T between a video display time point TP1 of a decoded video frame of the video frame P.sub.3 preceding the video frame P.sub.4 and a video display time point TP2 of a decoded video frame corresponding to the particular video frame I.sub.n, and adjust the original video display timestamp of each of the decoded video frames (e.g., decoded video frames of input video frames P.sub.4, I.sub.2, P.sub.5, and B.sub.1) according to the time period T. For example, the adjusted display time points of these decoded video frames generated under a condition where the audio playback and video playback are out of synchronization may be evenly distributed within the time period T.
[0109] To put it simply, with the help of the adjustment made to the original video display timestamps of some decoded video frames, the video decoder 1002 can gain the decoding time period T′ available for generating decoded video frames to the video frame buffer 1008. In this way, at the end of the time period T, the audio playback and video playback may be synchronized again.
[0110] Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.