Enhanced distortion signaling for MMT assets and ISOBMFF with improved MMT QoS descriptor having multiple QoE operating points
09788078 · 2017-10-10
Assignee
Inventors
CPC classification (Section H, ELECTRICITY)
H04N21/2404
H04N21/234381
H04N21/64738
H04N21/2343
H04N21/2662
H04N21/64792
International classification (Section H, ELECTRICITY)
H04N7/173
H04N21/2662
H04N21/24
H04N21/2343
H04N21/647
Abstract
A method for providing media content in a computer network includes storing the media content, where the media content includes a segment having a group of frames. The method also includes determining a transmission rate for traffic to a client device. The method further includes selecting a subset of frames to drop from the group of frames based on (i) the transmission rate and (ii) a frame difference distortion (FDIFF) metric of each frame in the subset of frames. The method also includes shaping the segment by dropping the selected subset of frames from the group of frames, where the shaped segment has a lower bitrate than the segment. In addition, the method includes transmitting the shaped segment to the client device.
Claims
1. An apparatus for providing media content in a computer network, the apparatus comprising: a memory configured to store the media content, the media content including a segment comprising a group of frames; and at least one processing device configured to: determine a transmission rate for traffic between the apparatus and a client device; select a subset of frames to drop from the group of frames based on (i) the transmission rate and (ii) a frame difference distortion (FDIFF) metric of each frame in the subset of frames; calculate a frame significance (FSIG) value that indicates a relative importance of the frames in the group of frames in a sequence, wherein the FSIG value is defined according to an equation:
v_k = d(f_k, f_{k−1}) = (S·f_k − S·f_{k−1})^T A^T A (S·f_k − S·f_{k−1}), where v_k is a vector representation of the FSIG value, f_k denotes a k-th frame, f_{k−1} denotes a previous frame, S denotes a bi-cubic filtering and down-sampling function, and A denotes a metric; shape the segment by dropping the selected subset of frames from the group of frames, wherein the shaped segment has a lower bitrate than the segment; and initiate transmission of the shaped segment to the client device.
2. The apparatus of claim 1, wherein the FSIG value defines a temporal distortion of a frame corresponding to a timestamp t if the frame corresponding to the timestamp is lost.
3. The apparatus of claim 2, wherein the at least one processing device is configured to select, from the group of frames, a frame whose FSIG value is less than a threshold significance level.
4. The apparatus of claim 1, wherein the at least one processing device is further configured to determine a sequence activity level within the segment using a differential significance of frames in the group of frames.
5. The apparatus of claim 1, wherein the at least one processing device is further configured to: receive the QoS description and the QoE description aggregated as the ADC, wherein each set of QoE parameters includes: a frame drop index set indicating a list of frames to drop from the group of frames yielding a bitrate reduction corresponding to the associated operating point, an aggregate bitrate resulting from the bitrate reduction obtained by dropping the frames indicated by the frame drop index set, a spatial distortion metric of the segment, and a frame significance weighted frame loss temporal distortion metric of the segment based on FSIG values; and select the operating point whose aggregate bitrate is closest to the transmission rate, wherein the at least one processing device is configured to select the subset of frames as the list of frames indicated by the frame drop index set of the selected operating point.
6. An apparatus for providing media content in a computer network, the apparatus comprising: a memory configured to store the media content, the media content including a segment comprising a group of frames; and at least one processing device configured to: determine a transmission rate for traffic between the apparatus and a client device; select a subset of frames to drop from the group of frames based on (i) the transmission rate and (ii) a frame difference distortion (FDIFF) metric of each frame in the subset of frames; calculate the FDIFF metric according to an equation:
d(f_j, f_k) = (S·f_j − S·f_k)^T A^T A (S·f_j − S·f_k), where f_j denotes a frame actually being displayed, f_k denotes a frame scheduled to be displayed, S denotes a bi-cubic filtering and down-sampling function, and A denotes a metric; shape the segment by dropping the selected subset of frames from the group of frames, wherein the shaped segment has a lower bitrate than the segment; and initiate transmission of the shaped segment to the client device.
7. The apparatus of claim 6 wherein the at least one processing device is configured to select, from the group of frames, a frame whose FDIFF metric is less than a threshold distortion level.
8. An apparatus for providing media content in a computer network, the apparatus comprising: a memory configured to store the media content, the media content including a segment comprising a group of frames; and at least one processing device configured to: determine a transmission rate for traffic between the apparatus and a client device; select a subset of frames to drop from the group of frames based on (i) the transmission rate and (ii) a frame difference distortion (FDIFF) metric of each frame in the subset of frames; shape the segment by dropping the selected subset of frames from the group of frames, wherein the shaped segment has a lower bitrate than the segment; and initiate transmission of the shaped segment to the client device, wherein: a frame loss temporal distortion (FLTD) metric is an FSIG-weighted FLTD metric based on frame significance (FSIG) values; and the at least one processing device is configured to calculate the FSIG-weighted FLTD metric according to an equation:
9. The apparatus of claim 8, wherein the at least one processing device is further configured to calculate the FSIG-weighted FLTD metric based on a number of dependent frames having a decoding dependency from an encoded frame in order to increase the FSIG-weighted FLTD metric for each of the dependent frames.
10. A system for providing media content in a computer network, the system comprising: a memory configured to store the media content, the media content including a segment comprising a group of frames; and at least one processing device configured to: generate multiple operating points of bitrate reduction by performing a gradient search for each of the operating points; generate a set of Quality of Experience (QoE) parameters for each of the operating points, wherein the set of QoE parameters for each of the operating points includes: a frame drop index set indicating a list of frames that when dropped from the group of frames yield (i) an aggregate bitrate reduction corresponding to the associated operating point and (ii) a shaped segment containing remaining frames; and at least one of: an aggregate bitrate resulting from dropping the frames indicated by the frame drop index set, a spatial distortion metric of the segment, and a frame loss temporal distortion (FLTD) metric of the segment; calculate a frame significance (FSIG) value that indicates a relative importance of the frames in the group of frames in a sequence, the FSIG value defined according to an equation:
v_k = d(f_k, f_{k−1}) = (S·f_k − S·f_{k−1})^T A^T A (S·f_k − S·f_{k−1}), where v_k is a vector representation of the FSIG value, f_k denotes a k-th frame, f_{k−1} denotes a previous frame, S denotes a bi-cubic filtering and down-sampling function, and A denotes a metric; and initiate transmission of an Asset Delivery Characteristic (ADC) of the media content, the ADC including the operating points and the set of QoE parameters corresponding to each of the operating points.
11. The system of claim 10, wherein the at least one processing device is further configured to calculate for each frame in the segment: a frame rate reduction; a frame distortion; and a frame loss gradient comprising a ratio of the frame distortion to the frame rate reduction.
12. The system of claim 11, wherein the at least one processing device is configured, when performing the gradient search for each of the operating points, to: sort the group of frames by frame loss gradient; and generate a list of frames that when dropped from the group of frames yields an aggregate bitrate reduction corresponding to the operating point, the at least one processing device configured to generate the list of frames by (i) adding a frame having a smallest frame loss gradient to the list of frames and (ii) repeatedly adding a next frame from the sorted group of frames to the list of frames until a sum of the frame rate reductions of the frames added to the list of frames is at least the bitrate reduction corresponding to the operating point.
13. The system of claim 10, wherein the frame drop index set comprises one of: an index of each remaining frame; and an index of the frames to be dropped from the group of frames.
14. The system of claim 10, wherein the at least one processing device is configured to select, from the group of frames, a frame having at least one of: an FSIG value less than a threshold significance level; and a FLTD metric less than a threshold distortion level.
15. The system of claim 10, wherein: the FLTD metric is an FSIG-weighted FLTD metric; and the at least one processing device is configured to calculate the FSIG-weighted FLTD metric according to an equation:
16. The system of claim 15, wherein the at least one processing device is further configured to calculate the FSIG-weighted FLTD metric based on a number of dependent frames having a decoding dependency from an encoded frame in order to increase the FSIG-weighted FLTD metric for each of the dependent frames.
17. The system of claim 10, further comprising a router configured to: receive the transmitted ADC; determine a transmission rate for traffic between the router and a client device; select a subset of frames to drop from the group of frames based on (i) the transmission rate and (ii) a frame loss temporal distortion (FLTD) metric of each frame in the subset of frames; shape the segment by dropping the selected subset of frames from the group of frames, wherein the shaped segment has a lower bitrate than the segment; and initiate transmission of the shaped segment to the client device.
18. The system of claim 17, wherein: the router is further configured to select the operating point whose aggregate bitrate is closest to the transmission rate; and the router is configured to select the subset of frames as a list of frames indicated by a frame drop index set associated with the selected operating point.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) For a more complete understanding of this disclosure, reference is now made to the following description, taken in conjunction with the accompanying drawings, in which:
DETAILED DESCRIPTION
(18) The following documents and standards descriptions are hereby incorporated by reference into this disclosure as if fully set forth herein: ISO/IEC JTC 1/SC29 IS 23008-1, Information technology—High efficiency coding and media delivery in heterogeneous environments—Part 1: MPEG media transport (MMT) (“REF1”); ISO/IEC DIS 23008-10, Information technology—High efficiency coding and media delivery in heterogeneous environments—Part 10: MPEG Media Transport Forward Error Correction (FEC) codes (“REF2”); Wang et al., “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, April 2004 (“REF3”); and ISO/IEC JTC1/SC29/WG11/MPEG2013/N13992, Reznik et al., “WD of ISO/IEC XXXXX-Y: Carriage of quality-related information in the ISO Based Media File Format” (“REF4”).
(19) In MMT, a media fragment unit (MFU) priority field enables single flow streaming optimization but does not capture the distortion induced by frame drop. Also, without a common quality of experience (QoE) metric regarding frame drops, it is difficult for computer networks to make content-aware decisions on traffic shaping. That is, an Asset Delivery Characteristic (ADC) includes a bit stream description and a quality of service (QoS) description but lacks a QoE description.
(20) This disclosure modifies the MMT hint track to carry information about the visual impact incurred when frame loss distortion occurs. A frame loss distortion metric (also referred to as a frame drop distortion metric) measures the visual impact of a frame drop using a trained distance metric. A frame loss distortion metric can also account for decoding dependency. As described below, frame loss distortion metrics that characterize a temporal distortion per sample (such as per frame, per segment, or per group of frames) of a video sequence provide the technical advantage of finer granularity of ADC signaling and enable an ADC signal to include multiple QoE operating points with associated media fragment unit (MFU) index and bitrate.
(21) This disclosure provides systems and methods that increase the degree of freedom of video adaptation in streaming. Embodiments of this disclosure provide a distortion signaling mechanism in an MMT hint track to characterize the distortion from frame drops. Embodiments of this disclosure provide a packet loss-induced distortion metric that characterizes the impact a combination of frame drops has on a human's visual perception. The packet loss-induced distortion metric is used to optimize multi-flow streaming over a bottleneck in a communication link. For example, the packet loss-induced distortion metric is a tool for optimizing the streaming time and for supporting content-aware video adaptation in modern media transport, especially for coordinating multi-flow video sessions at the bottleneck. A unified distortion metric that can measure the consequences of packet loss complements a set of well-established optimization tools in networking. The packet loss-induced distortion metric can be a new field added to the ISOBMFF document of REF4 as part of the quality information. That is, the frame drop-induced distortion metric, which accounts for decoding dependence, enables more advanced content-aware video networking solutions.
(22) As an example, a QoE metric labeling for packet loss or delay consequences and a finer (than asset-level) granularity of operation are more suitable for stateless routers. The QoE metric labeling can be used to support content-aware video traffic shaping and routing in modern content delivery networks (CDNs). To facilitate more intelligent video queue pruning and more intelligent packet dropping operations, this disclosure modifies the MMT ADC to operate at a MOOF segment level and to include a spatio-temporal QoE quality metric field and a QoE operating points descriptor. According to this disclosure, the ADC (including QoE information) is transmitted at a finer (more granular) segment level that is better suited to the “memory-less” characteristic of routers. For example, the MMT ADC (including QoE information) enables multi-QoS traffic shaping with different spatio-temporal distortion levels to allow for both single-flow QoE optimization for a given QoS and multi-flow QoE optimization at a bottleneck.
(24) As shown in
(25) The network 102 facilitates communications between at least one server 104 and various client devices 106-114. Each server 104 includes any suitable computing or processing device that can provide computing services for one or more client devices. Each server 104 could, for example, include one or more processing devices, one or more memories storing instructions and data, and one or more network interfaces facilitating communication over the network 102.
(26) Each client device 106-114 represents any suitable computing or processing device that interacts with at least one server or other computing device(s) over the network 102. In this example, the client devices 106-114 include a desktop computer 106, a mobile telephone or smartphone 108, a personal digital assistant (PDA) 110, a laptop computer 112, and a tablet computer 114. However, any other or additional client devices could be used in the computing system 100.
(27) In this example, some client devices 108-114 communicate indirectly with the network 102. For example, the client devices 108-110 communicate via one or more base stations 116, such as cellular base stations or eNodeBs. Also, the client devices 112-114 communicate via one or more wireless access points 118, such as IEEE 802.11 wireless access points. Note that these are for illustration only and that each client device could communicate directly with the network 102 or indirectly with the network 102 via any suitable intermediate device(s) or network(s).
(28) As described in more detail below, the computing system 100 generates metrics that characterize the temporal distortion per sample of a video sequence. This can be used, for example, to provide an improved MMT QoS descriptor having multiple QoE operating points.
(29) Although
(31) As shown in
(32) The processing device 210 executes instructions that may be loaded into a memory 230. The processing device 210 may include any suitable number(s) and type(s) of processors or other devices in any suitable arrangement. Example types of processing devices 210 include microprocessors, microcontrollers, digital signal processors, field programmable gate arrays, application specific integrated circuits, and discrete circuitry.
(33) The memory 230 and a persistent storage 235 are examples of storage devices 215, which represent any structure(s) capable of storing and facilitating retrieval of information (such as data, program code, and/or other suitable information on a temporary or permanent basis). The memory 230 may represent a random access memory or any other suitable volatile or non-volatile storage device(s). The persistent storage 235 may contain one or more components or devices supporting longer-term storage of data, such as a read-only memory, hard drive, Flash memory, or optical disc.
(34) The communications unit 220 supports communications with other systems or devices. For example, the communications unit 220 could include a network interface card or a wireless transceiver facilitating communications over the network 102. The communications unit 220 may support communications through any suitable physical or wireless communication link(s).
(35) The I/O unit 225 allows for input and output of data. For example, the I/O unit 225 may provide a connection for user input through a keyboard, mouse, keypad, touchscreen, or other suitable input device. The I/O unit 225 may also send output to a display, printer, or other suitable output device.
(36) Note that while
(37) As shown in
(38) The RF transceiver 310 receives, from the antenna 305, an incoming RF signal transmitted by another component in a system. The RF transceiver 310 down-converts the incoming RF signal to generate an intermediate frequency (IF) or baseband signal. The IF or baseband signal is sent to the RX processing circuitry 325, which generates a processed baseband signal by filtering, decoding, and/or digitizing the baseband or IF signal. The RX processing circuitry 325 transmits the processed baseband signal to the speaker 330 (such as for voice data) or to the main processor 340 for further processing (such as for web browsing data).
(39) The TX processing circuitry 315 receives analog or digital voice data from the microphone 320 or other outgoing baseband data (such as web data, e-mail, or interactive video game data) from the main processor 340. The TX processing circuitry 315 encodes, multiplexes, and/or digitizes the outgoing baseband data to generate a processed baseband or IF signal. The RF transceiver 310 receives the outgoing processed baseband or IF signal from the TX processing circuitry 315 and up-converts the baseband or IF signal to an RF signal that is transmitted via the antenna 305.
(40) The main processor 340 can include one or more processors or other processing devices and execute the basic OS program 361 stored in the memory 360 in order to control the overall operation of the client device 300. For example, the main processor 340 could control the reception of forward channel signals and the transmission of reverse channel signals by the RF transceiver 310, the RX processing circuitry 325, and the TX processing circuitry 315 in accordance with well-known principles. In some embodiments, the main processor 340 includes at least one microprocessor or microcontroller.
(41) The main processor 340 is also capable of executing other processes and programs resident in the memory 360, such as operations for generating metrics that characterize the temporal distortion per sample of a video sequence. The main processor 340 can move data into or out of the memory 360 as required by an executing process. In some embodiments, the main processor 340 is configured to execute the applications 362 based on the OS program 361 or in response to signals received from external devices or an operator. The main processor 340 is also coupled to the I/O interface 345, which provides the client device 300 with the ability to connect to other devices such as laptop computers and handheld computers. The I/O interface 345 is the communication path between these accessories and the main processor 340.
(42) The main processor 340 is also coupled to the keypad 350 and the display unit 355. The operator of the client device 300 can use the keypad 350 to enter data into the client device 300. The display 355 may be a liquid crystal display or other display capable of rendering text and/or at least limited graphics, such as from web sites.
(43) The memory 360 is coupled to the main processor 340. Part of the memory 360 could include a random access memory (RAM), and another part of the memory 360 could include a Flash memory or other read-only memory (ROM).
(44) As described in more detail below, the computing system generates frame loss temporal distortion (FLTD) metrics that characterize the temporal distortion per sample of a video sequence. This can be used, for example, to support an improved MMT QoS descriptor having multiple QoE operating points.
(45) Although
(47) In
(48) The level of temporal distortion is content-dependent. For example, frame drops induce a small level of distortion in stationary sequences having relatively little video sequence activity, yet frame drops induce larger distortions in more active video sequences. Neither a mean square error (MSE) metric nor an SSIM metric is a good metric for determining temporal distortion, as a simple one-half picture element (pel) global motion shift results in a large MSE difference while a human's perception of the change is actually very small. REF3 describes a thumbnail image Eigen appearance space metric that is used to capture this distortion perceived by a human. That is, the Eigen appearance space metric is an effective metric for characterizing human perceptions and visual impacts.
(49) The FLTD is incurred when a frame f_j (such as a previous frame that is not dropped) is actually shown to the user in place of the frame f_k that corresponds to the timestamp t_k and would have been shown had it not been dropped. The FLTD incurred for frames f_j and f_k can be a frame difference distortion (FDIFF) that is computed according to the differential function d(f_j, f_k) expressed as follows:
d(f_j, f_k) = (S·f_j − S·f_k)^T A^T A (S·f_j − S·f_k)   (1)
In Equation (1), S is the bi-cubic smoothing and down-sampling operator that brings the frames to an h×w thumbnail, and the distance metric A is the 192×k Eigen appearance projection whose basis is formed from the principal components having the largest k eigenvalues. For example, an 8-bit integer value is computed from a scaling function within Equation (1) in combination with a projection function within Equation (1). The scaling function is where the bi-cubic smoothing and down-sampling scales the frame to a thumbnail having a height (h=12) and a width (w=16). The projection function is where the distance is projected by the Eigen appearance projection onto the 192×k subspace of the A metric.
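As an illustrative, non-normative sketch of the FDIFF computation in Equation (1), the following Python code assumes a grayscale frame, uses simple block averaging in place of the bi-cubic operator S, and takes a hypothetical pre-trained k×192 projection matrix A; the names thumbnail and fdiff are illustrative only.

```python
import numpy as np

def thumbnail(frame, h=12, w=16):
    """Smooth and down-sample a grayscale frame to an h x w thumbnail.
    Stands in for the operator S; a real system would use bi-cubic
    filtering, but block averaging suffices for this sketch."""
    H, W = frame.shape
    # Crop to a multiple of the thumbnail size, then block-average.
    return frame[:H - H % h, :W - W % w].reshape(h, H // h, w, W // w).mean(axis=(1, 3))

def fdiff(frame_j, frame_k, A):
    """d(f_j, f_k) = (S f_j - S f_k)^T A^T A (S f_j - S f_k),
    where A is a hypothetical k x 192 Eigen appearance projection."""
    diff = (thumbnail(frame_j) - thumbnail(frame_k)).ravel()  # 192-element vector
    proj = A @ diff                                           # project to k dimensions
    return float(proj @ proj)                                 # squared projected distance
```

The squared-norm form guarantees a non-negative distortion, and d(f, f) is zero for identical frames.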
(51) In
(55) The frame significance (FSIG) characterizes the relative importance of frames in a video sequence, and the sequence-level visual impact of various combinations of frame losses (such as from dropping a temporal layer) can be estimated from the frame differential distance d(f_k, f_{k−1}) (which denotes a representation of the frame significance). By definition, the frame significance value v_k for a sequence of frames {f_1, f_2, . . . , f_n} can be expressed as follows:
v_k = d(f_k, f_{k−1})   (2)
In Equation (2), d(f_k, f_{k−1}) is the frame difference function of two successive frames in the sequence. It is the differential function d(f_j, f_k) of Equation (1) that represents the rate of change in the sequence and is calculated from the Eigen appearance metric of the scaled thumbnails of the frames.
(56) In combination with Equation (1), the frame significance value can be expressed in expanded form as follows:
v_k = d(f_k, f_{k−1}) = (S·f_k − S·f_{k−1})^T A^T A (S·f_k − S·f_{k−1})   (3)
(57) As an example, in the segment represented by
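The frame significance profile of Equations (2) and (3) can be sketched in Python as follows, assuming the thumbnails S·f_k have already been computed and flattened into 192-element vectors; fsig_profile and its arguments are illustrative names, not part of any standard.

```python
import numpy as np

def fsig_profile(thumbs, A):
    """Compute v_k = d(f_k, f_{k-1}) for k = 1..n-1.

    thumbs: (n, 192) array of flattened thumbnail vectors (the S f_k).
    A:      hypothetical (k, 192) Eigen appearance projection matrix.
    Returns a list of n-1 non-negative significance values."""
    profile = []
    for k in range(1, len(thumbs)):
        diff = thumbs[k] - thumbs[k - 1]   # successive-frame difference
        p = A @ diff                        # project onto appearance subspace
        profile.append(float(p @ p))        # squared projected distance
    return profile
```

A flat (stationary) scene yields a near-zero profile, while scene activity raises the per-frame significance values.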
(60) To conform with the quality-related information format described in REF4, the FLTD information according to embodiments of this disclosure is based on the temporal significance profile and the differential frame difference. That is, in shaping an original segment to form a shaped segment, if the subset of frames dropped from a GOF includes only non-consecutive B-frame losses, computing the B-frame loss distortion is straightforward: simply look up the frame significance profile (shown in
d(f_k, f_{k−p}) = Σ_{j=1}^{p} e^{−a(j−1)}·d(f_{k−j+1}, f_{k−j})   (4)
Here, the kernel a reflects the temporal masking effects and can be obtained from training to suit different user preferences, t represents the timestamp of a frame shown to the user, p represents a number of frames prior to the k-th frame, and k represents the frame index of the frame that would be shown to the user (at the corresponding timestamp t = k) had the frame f_k not been dropped.
(61) Table 1 shows an example comparison of the temporal distortion and the approximated temporal distortion d(f_k, f_{k−p}) from the differential profile where frames {f_t, f_{t+1}, f_{t+2}} are lost.
(62) TABLE 1. Distortion and Approximated Distortion for Consecutive Frame Losses

Time stamp   Distortion            Approximated Distortion
t            d(f_t, f_{t−1})       d(f_t, f_{t−1})
t+1          d(f_{t+1}, f_{t−1})   d(f_{t+1}, f_t) + e^{−a}·d(f_t, f_{t−1})
t+2          d(f_{t+2}, f_{t−1})   d(f_{t+2}, f_{t+1}) + e^{−a}·d(f_{t+1}, f_t) + e^{−2a}·d(f_t, f_{t−1})
In some embodiments, for the three most recent frame losses, the set of weights {1.0, 0.5, 0.25} replaces the set of values {e^0, e^{−a}, e^{−2a}} of the exponentially decaying function.
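The approximation of Equation (4) with the decaying weights above can be sketched as follows, assuming diff_profile[i] holds the differential distortion d(f_i, f_{i−1}) and using the reconstructed index form d(f_{k−j+1}, f_{k−j}); with a = ln 2 the weights are exactly {1.0, 0.5, 0.25} for the three most recent losses. The function name is illustrative.

```python
import math

def consecutive_loss_distortion(diff_profile, k, p, a=math.log(2)):
    """Approximate d(f_k, f_{k-p}) for p consecutively lost frames from
    the differential profile, per Equation (4):
        sum over j = 1..p of e^{-a(j-1)} * d(f_{k-j+1}, f_{k-j}).
    diff_profile[i] is assumed to hold d(f_i, f_{i-1})."""
    return sum(math.exp(-a * (j - 1)) * diff_profile[k - j + 1]
               for j in range(1, p + 1))
```

For example, with p = 2 this reproduces the t+1 row of Table 1: d(f_{k}, f_{k−1}) + e^{−a}·d(f_{k−1}, f_{k−2}).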
(64) As shown in
(65) The temporal distortion at timestamp t_{k+2} can be calculated according to various methods, where each method has a different level of precision. In one example, the temporal distortion at timestamp t_{k+2} can be calculated using Equation (4) to characterize the QoE impact of different temporal layers in the video sequence as follows.
d(f_k, f_{k−p}) = Σ_{j=1}^{p} e^{−a(j−1)}·d(f_{k−j+1}, f_{k−j})   (4)
d(f_{k+2}, f_k) = v_{k+2} + e^{−a}·v_{k+1}, where p = 2.   (5)
In another example, the temporal distortion at timestamp t_{k+2} can be calculated according to the sum of the temporal distortions corresponding to the individual lost frames. That is, the FLTD can be calculated according to Equation (6) below, where v_k represents the FSIG of the k-th frame. The FLTD calculated according to Equation (6) corresponds to an amount of distortion having a value represented by an area 920 (shown as a hatched five-sided polygon beneath the vectors 910 and 905).
d(f_k, f_{k+2}) = v_{k+2} + v_{k+1}   (6)
(66) In another example, the temporal distortion at timestamp t_{k+2} can be calculated according to the vector sum of the temporal distortions corresponding to the consecutively lost frame (f_{k+2}) associated with the timestamp t_{k+2} and the frame actually displayed at the timestamp t_{k+2}. That is, the FLTD can be calculated according to Equation (7) below, where the vector v_{k+2} represents the vector 905, the vector v_{k+1} represents the vector 910, and d(f_k, f_{k+2}) represents the vector 915. The FLTD calculated according to Equation (7) corresponds to an amount of distortion having a value represented by an area 925 (shown as a shaded trapezoid beneath the vector 915).
d(f_k, f_{k+2}) = v_{k+2} + v_{k+1} (vector sum)   (7)
(67) In another example, the temporal distortion at timestamp t_{k+2} can be calculated according to the absolute value of the projection function (such as defined by the expression A(S·f_j − S·f_k)) as expressed in Equation (8). Here, S is a low-pass filtering and down-sampling function, and A is a distance metric that can be determined from QoE information.
d(f_j, f_k) = |A·S·f_j − A·S·f_k|   (8)
(68) The amount of distortion calculated using Equation (4) closely approximates the amount of distortion that a human would perceive at the timestamps {t_k, t_{k+1}, t_{k+2}}. By comparison, the amount of distortion calculated using Equation (6) overestimates the amount of distortion that a human would perceive at the timestamps {t_k, t_{k+1}, t_{k+2}}, where the frames actually shown to the user were {f_k, f_k, f_k} because the subset of frames {f_{k+1}, f_{k+2}} was dropped. By further comparison, the amount of distortion calculated using Equation (7) overestimates by a lesser amount than when Equation (6) is used. Also, the area 925 is less than the area 920, indicating that the amount of distortion associated with Equation (7) is less than the amount of distortion associated with Equation (6).
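To illustrate the comparison above, the following sketch computes the three estimates under one plausible reading of Equations (4) through (7): v_{k+1} and v_{k+2} are taken as projected thumbnail difference vectors, and a scalar FSIG is taken as a vector magnitude. The function name fltd_estimates is illustrative only.

```python
import math
import numpy as np

def fltd_estimates(v1, v2, a=math.log(2)):
    """Estimate d(f_k, f_{k+2}) when frames f_{k+1} and f_{k+2} are lost,
    given the difference vectors v1 = v_{k+1} and v2 = v_{k+2}."""
    eq4 = float(np.linalg.norm(v2)) + math.exp(-a) * float(np.linalg.norm(v1))  # decayed sum, Eq. (4)/(5)
    eq6 = float(np.linalg.norm(v2)) + float(np.linalg.norm(v1))                 # scalar sum, Eq. (6)
    eq7 = float(np.linalg.norm(v1 + v2))                                        # vector sum, Eq. (7)
    return eq4, eq6, eq7
```

Under this reading, the Equation (7) estimate never exceeds the Equation (6) estimate by the triangle inequality, consistent with area 925 being smaller than area 920.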
(69) Table 2 provides an MMT hint track that carries temporal distortion information according to this disclosure. The importance or significance of a frame can be measured by the amount of distortion (the if-loss-incurred distortion) that would be incurred if the frame were dropped or lost from the original segment GOP, which is the FSIG of the frame. According to this disclosure, the FSIG can be transmitted in a signal using only eight bits and can easily fit into an existing video quality metric (VQME) scheme. For example, the MMT hint track described in REF1 is a suitable place to include the FSIG. More particularly, the semantics of the “priority” field can be re-interpreted as the if-loss-incurred distortion (FSIG). The if-loss-incurred distortion can be quantized to an 8-bit unsigned int representation (shown in Table 2 as “unsigned int(8) priority”). Accordingly, the FSIG information is very useful in supporting content-aware frame drop decisions in streaming applications.
(70) TABLE 2. MMT Hint Track Carrying the Temporal Distortion Information

aligned(8) class MMTHSample {
   unsigned int(32) sequence_number;
   if (is_timed) {
      signed int(8) trackrefindex;
      unsigned int(32) movie_fragment_sequence_number;
      unsigned int(32) samplenumber;
      unsigned int(8) priority;
      unsigned int(8) dependency_counter;
      unsigned int(32) offset;
      unsigned int(32) length;
      multiLayerInfo( );
   } else {
      unsigned int(16) item_ID;
   }
}
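The 8-bit quantization of the FSIG into the "priority" field described above can be sketched as follows. The linear quantizer and the max_distortion normalization bound are assumptions for illustration; the disclosure only specifies that the if-loss-incurred-distortion fits an 8-bit unsigned int.

```python
# Sketch of packing the FSIG (if-loss-incurred-distortion) into the 8-bit
# "priority" field of an MMT hint sample. The linear quantizer and the
# max_distortion normalization are illustrative assumptions, not part of
# the MMT hint-track specification.

def quantize_fsig(fsig, max_distortion):
    """Quantize a non-negative FSIG value to an unsigned int(8) in [0, 255]."""
    if max_distortion <= 0:
        return 0
    # Clamp to the normalization bound, then scale onto the 8-bit range.
    level = round(255 * min(fsig, max_distortion) / max_distortion)
    return int(level)

print(quantize_fsig(0.0, 10.0))    # 0: dropping this frame costs nothing
print(quantize_fsig(10.0, 10.0))   # 255: maximum if-loss-incurred-distortion
print(quantize_fsig(25.0, 10.0))   # 255: values above the bound are clamped
```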
(71) Table 3 shows a VQME box that provides per sample spatial quality information for N13992 according to this disclosure. The VQME box shown in Table 3 includes an if-loss-incurred-distortion metric that is computed from the differential distortion profile. The if-loss-incurred-distortion is shown in Table 3 as “unsigned int(8) priority”.
(72) TABLE 3. VQME Box Carrying the Temporal Distortion Information

aligned(8) class QualityMetricsPresentBox extends FullBox('vqmp', version, flags) {
   unsigned int(8) field_size_bytes;
   unsigned int(8) metric_count;
   for (i = 1; i <= metric_count; i++) {
      unsigned int(32) metric_code;
   }
}
(74) Modern video coding tools provide B-frames to facilitate frame drops as a way to adapt to a rate constraint. A selection to drop the same number of B-frames from different content (such as different documentaries) can result in different amounts of perceived temporal distortion.
(76) A similar consequence applies to the P-frame f.sub.5, from which the B-frames {f.sub.2, f.sub.3, f.sub.4} have a backward prediction decoding dependency and the B-frames {f.sub.6, f.sub.7, f.sub.8} and the P-frame f.sub.9 have a forward prediction decoding dependency. A similar consequence also applies to the P-frame f.sub.9, from which the B-frames {f.sub.6, f.sub.7, f.sub.8} have a backward prediction decoding dependency. Different types of frames have different visual impacts because of these differences in decoding dependency. That is, a selection to drop B-frames (such as frames f.sub.2 and f.sub.3) incurs a localized temporal distortion.
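The decoding-dependency consequence described above can be sketched by propagating a drop through a GOP dependency map: dropping a B-frame loses only that frame, while dropping a reference frame also makes every frame that predicts from it undecodable. The dependency map below is an illustrative GOP modeled on the f.sub.1..f.sub.9 example, not an exact bitstream.

```python
# Sketch: frames lost, directly or transitively, when a frame is dropped.
# deps[frame] = reference frames that `frame` predicts from (illustrative GOP).
deps = {
    "f1": [],                                                    # I-frame
    "f2": ["f1", "f5"], "f3": ["f1", "f5"], "f4": ["f1", "f5"],  # B-frames
    "f5": ["f1"],                                                # P-frame
    "f6": ["f5", "f9"], "f7": ["f5", "f9"], "f8": ["f5", "f9"],  # B-frames
    "f9": ["f5"],                                                # P-frame
}

def undecodable(dropped):
    """Transitive closure of frames lost when the `dropped` set is removed."""
    lost = set(dropped)
    changed = True
    while changed:
        changed = False
        for frame, refs in deps.items():
            if frame not in lost and any(r in lost for r in refs):
                lost.add(frame)
                changed = True
    return lost

print(sorted(undecodable({"f2", "f3"})))  # localized: only the dropped B-frames
print(sorted(undecodable({"f5"})))        # P-frame drop cascades to f2..f9
```

This is why a content-aware shaper prefers dropping B-frames with low FSIG: the temporal distortion stays confined to the dropped frames themselves.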
(79) As described more particularly below, a QoE operating points descriptor can be added to the MMT ADC to enable multi-QoS traffic shaping with different spatio-temporal distortion levels, for both single-flow QoE optimization at a given QoS and multi-flow QoE optimization at a bottleneck. For streaming video applications, video coding tools perform streaming-time adaptation that operates the stream at multiple rate-distortion (R-D) points (such as certain combinations of packet or MFU drops) that result in different distortion consequences.
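Given a set of advertised R-D operating points, a transmitting entity can pick the highest-quality point that fits the available channel rate. This is a minimal sketch: the operating-point tuples mirror the (sampleGroupIndex, operationPointQuality, operationPointBitrate) attributes added to the ADC below, but the numeric values and the selection policy are illustrative assumptions.

```python
# Sketch of selecting an R-D operating point from the ADC for a rate budget:
# among the operating points whose bitrate fits the available channel rate,
# pick the one with the highest advertised quality. Values are made up.

operating_points = [
    # (sampleGroupIndex, operationPointQuality, operationPointBitrate in kbps)
    (1, 0.95, 4000),
    (2, 0.85, 2500),
    (3, 0.70, 1500),
    (4, 0.50, 800),
]

def select_operating_point(available_rate_kbps):
    """Highest-quality operating point that fits the available rate, if any."""
    feasible = [op for op in operating_points if op[2] <= available_rate_kbps]
    return max(feasible, key=lambda op: op[1]) if feasible else None

print(select_operating_point(3000))  # (2, 0.85, 2500)
print(select_operating_point(500))   # None: no operating point fits
```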
(82) Tables 3A and 4 provide the syntax of an MMT ADC modified according to this disclosure to include operating points of rate reduction and a QoE descriptor for each operating point of rate reduction. Table 3A provides the operating point characteristics added to the MMT ADC. The MMT ADC shown in Table 3A includes multiple operating points, each specified with a corresponding operating QoE parameter represented as "operationPointQuality," associated MFUs specified in the "sampleGroupIndex," and a resulting bitrate represented by "operationPointBitrate."
(83) TABLE 3A. MMT ADC Including Operating Points Characteristics

<xs:complexType name="mmt:OperationPointCharacteristics">
  <xs:attribute name="sampleGroupIndex" type="xs:integer"/>
  <xs:attribute name="operationPointQuality" type="xs:float"/>
  <xs:attribute name="operationPointBitrate" type="xs:integer"/>
  <xs:anyAttribute processContents="lax"/>
</xs:complexType>
Table 4 provides the syntax of the MMT ADC including the operating point characteristics of Table 3A.
(84) TABLE 4. MMT ADC Syntax Modified to Include R-D Operating Points

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="mmt">
  <xs:element name="AssetDeliveryCharacteristic" type="mmt:AssetDeliveryCharacteristicType"/>
  <xs:complexType name="mmt:AssetDeliveryCharacteristicType">
    <xs:sequence>
      <xs:element name="asset" type="mmt:AssetType" minOccurs="1" maxOccurs="unbounded"/>
      <xs:any processContents="lax" namespace="##any"/>
    </xs:sequence>
  </xs:complexType>
  <xs:complexType name="mmt:AssetType">
    <xs:sequence>
      <xs:element name="QoS_descriptor" type="mmt:QoS_descriptorType"/>
      <xs:element name="timeSegment" type="mmt:TimeSegment" minOccurs="1" maxOccurs="unbounded"/>
    </xs:sequence>
  </xs:complexType>
  <xs:complexType name="mmt:TimeSegment">
    <xs:sequence>
      <xs:element name="Bitstream_descriptor" type="mmt:Bitstream_descriptorType"/>
      <xs:element name="operationPointCharacteristics" type="mmt:OperationPointCharacteristics" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="startTime" type="xs:dateTime"/>
    <xs:attribute name="duration" type="xs:duration"/>
  </xs:complexType>
  <xs:complexType name="mmt:OperationPointCharacteristics">
    <xs:attribute name="sampleGroupIndex" type="xs:integer"/>
    <xs:attribute name="operationPointQuality" type="xs:float"/>
    <xs:attribute name="operationPointBitrate" type="xs:integer"/>
    <xs:anyAttribute processContents="lax"/>
  </xs:complexType>
  <xs:complexType name="mmt:QoS_descriptorType">
    <xs:attribute name="loss_tolerance" type="xs:integer"/>
    <xs:attribute name="jitter_sensitivity" type="xs:integer"/>
    <xs:attribute name="class_of_service" type="xs:boolean"/>
    <xs:attribute name="distortion_levels" type="xs:integer"/>
    <xs:attribute name="bidrection_indicator" type="xs:boolean"/>
  </xs:complexType>
  <xs:complexType name="Bitstream_descriptorType">
    <xs:choice>
      <xs:complexType name="Bitstream_descriptorVBRType">
        <xs:attribute name="sustainable_rate" type="xs:float"/>
        <xs:attribute name="buffer_size" type="xs:float"/>
        <xs:attribute name="peak_rate" type="xs:float"/>
        <xs:attribute name="max_MFU_size" type="xs:integer"/>
        <xs:attribute name="mfu_period" type="xs:integer"/>
      </xs:complexType>
      <xs:complexType name="Bitstream_descriptorCBRType">
        <xs:attribute name="peak_rate" type="xs:float"/>
        <xs:attribute name="max_MFU_size" type="xs:integer"/>
        <xs:attribute name="mfu_period" type="xs:integer"/>
      </xs:complexType>
    </xs:choice>
  </xs:complexType>
</xs:schema>
(85) The ADC describes multiple assets of the same content, which can be used by the MMT transmitting entity to select the appropriate encoding or to perform bitstream switching when appropriate. An ADC is connected to multiple assets, which are intended to be alternatives to each other.
(86) As video quality fluctuates over time, an accurate description of the bit-stream characteristics does not apply to the whole duration of the asset. The ADC modified according to this disclosure uses time segments to provide the bit-stream description. Each time segment is described by a corresponding start time inside the asset and a duration.
(87) Depending on the encoding structure, the media data can be transmitted according to a partial delivery, where only parts of the media data are delivered to the MMT receiving device (such as user equipment) in order to adjust the transmission rate to the available channel bandwidth. The media samples of a particular operation point are grouped together using a sample grouping mechanism in the ISOBMFF file. Each sample group can be associated with an indication of the expected quality when operating at that particular operation point. The indication of the expected quality can be provided as the resulting quality degradation when operating at the selected operation point. The sampleGroupIndex carries the group_description_index from the “sbgp” box that corresponds to the described operation point.
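The sample-grouping mechanism described above can be sketched with an "sbgp"-style run-length table: each entry maps a run of samples to a group_description_index, and the sampleGroupIndex in the ADC identifies which samples belong to a given operation point. The table layout below is illustrative, not the exact ISOBMFF box layout.

```python
# Sketch of resolving which samples belong to an operation point via an
# sbgp-style table: entries are (sample_count, group_description_index)
# runs, and sampleGroupIndex selects a group_description_index.

sbgp_entries = [(3, 1), (2, 2), (4, 1)]  # illustrative run-length table

def samples_in_group(entries, group_description_index):
    """Return 1-based sample numbers whose run maps to the given group index."""
    samples, next_sample = [], 1
    for count, idx in entries:
        if idx == group_description_index:
            samples.extend(range(next_sample, next_sample + count))
        next_sample += count
    return samples

print(samples_in_group(sbgp_entries, 1))  # [1, 2, 3, 6, 7, 8, 9]
print(samples_in_group(sbgp_entries, 2))  # [4, 5]
```

Under partial delivery, the transmitting entity would send only the samples of the groups belonging to the selected operation point.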
(89) In block 1505, the process includes storing media content. The media content includes at least one segment, where each segment has at least one group of frames. Each segment can be stored in a memory unit. In block 1510, the process includes determining a transmission rate for traffic to a client device. For example, the client device could indicate a bitrate at which the client device is able to receive data over a communication link. The network device transmitting the media content can determine a transmission rate based on the indication from the receiving client device.
(90) In block 1515, the process includes selecting a subset of frames to drop from the group of frames based on (i) the transmission rate and (ii) an FLTD metric of each frame in the subset of frames. The transmission rate indicates the bitrate to which the segment bitrate will be reduced. That is, a target bitrate reduction can be calculated as the difference between the bitrate of the segment and the transmission rate. Frames having a low FLTD metric can be selected first for the subset of frames to drop, until the sum of the rate reductions from the dropped frames reaches at least the target bitrate reduction.
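The selection step of block 1515 can be sketched as a greedy loop over frames ordered by FLTD. The frame identifiers, per-frame rate contributions, and bitrates below are made-up illustrative values; the disclosure specifies only the target-reduction computation and the low-FLTD-first ordering.

```python
# Sketch of block 1515: compute the target bitrate reduction as
# (segment bitrate - transmission rate), then greedily drop the frames
# with the lowest FLTD metric until the summed rate savings cover it.

def select_frames_to_drop(frames, segment_bitrate, transmission_rate):
    """frames: list of (frame_id, fltd_metric, rate_contribution)."""
    target = segment_bitrate - transmission_rate
    if target <= 0:
        return []                      # segment already fits the channel
    drop, saved = [], 0
    for frame_id, fltd, rate in sorted(frames, key=lambda f: f[1]):
        if saved >= target:            # least-distorting frames first
            break
        drop.append(frame_id)
        saved += rate
    return drop

frames = [("f2", 0.1, 300), ("f3", 0.2, 300), ("f6", 0.4, 300), ("f5", 2.0, 500)]
print(select_frames_to_drop(frames, 2000, 1500))  # ['f2', 'f3']
print(select_frames_to_drop(frames, 1400, 1500))  # []: no reduction needed
```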
(91) In block 1520, the process includes shaping the segment by dropping the selected subset of frames from the group of frames, where the shaped segment has a lower bitrate than the segment. In block 1525, the process includes transmitting the shaped segment to the receiving client device.
(93) In block 1605, the process 1515 includes calculating an FLTD metric for each frame of a segment. For example, the FLTD can be calculated using Equation (1), (4), (6), (7), or (8). In block 1610, the process includes determining a sequence activity level within the segment using the differential significance of frames in the GOP. For example, the frame significance of each frame indicates the sequence activity level within the segment, and an FSIG value can be calculated using the frame difference function. In block 1615, the process includes selecting a frame having an FLTD metric that is less than a threshold distortion level and/or a frame having an FSIG value that is less than a threshold significance level.
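The eligibility test of block 1615 can be sketched directly from the two thresholds. The per-frame FLTD and FSIG values and the thresholds below are illustrative; the "either threshold" interpretation of "and/or" is one of the two readings the paragraph permits.

```python
# Sketch of block 1615: a frame qualifies for the drop subset when its FLTD
# metric is below the distortion threshold and/or its FSIG value is below
# the significance threshold (here: either test suffices).

def eligible_frames(frames, fltd_threshold, fsig_threshold):
    """frames: list of (frame_id, fltd, fsig); returns ids meeting either test."""
    return [fid for fid, fltd, fsig in frames
            if fltd < fltd_threshold or fsig < fsig_threshold]

frames = [("f2", 0.1, 0.2), ("f5", 1.5, 2.0), ("f6", 0.3, 1.8)]
print(eligible_frames(frames, fltd_threshold=0.5, fsig_threshold=0.5))
# ['f2', 'f6']: f5 exceeds both thresholds and is protected from dropping
```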
(94) Although the figures above have described various systems, devices, and methods, various changes may be made to the figures. For example, the designs of various devices and systems could vary as needed or desired, such as when components of a device or system are combined, further subdivided, rearranged, or omitted and additional components are added. As another example, while various methods are shown as a series of steps, various steps in each method could overlap, occur in parallel, occur in a different order, or occur any number of times. In addition, the various graphs are for illustration only, and content having other characteristics can be used.
(95) Although this disclosure has been described with an exemplary embodiment, various changes and modifications may be suggested to one skilled in the art. It is intended that this disclosure encompass such changes and modifications as fall within the scope of the appended claims.