FRAGMENT-ALIGNED AUDIO CODING
20220167031 · 2022-05-26
Inventors
- Bernd Czelhan (Happurg, DE)
- Harald Fuchs (Roettenbach, DE)
- Ingo Hofmann (Nuernberg, DE)
- Herbert Thoma (Erlangen, DE)
- Stephan Schreiner (Birgland, DE)
CPC classification
H04N21/242
ELECTRICITY
H04N21/23424
ELECTRICITY
H04N21/2335
ELECTRICITY
H04N21/8456
ELECTRICITY
G10L19/167
PHYSICS
H04N19/40
ELECTRICITY
International classification
H04N21/242
ELECTRICITY
H04N19/40
ELECTRICITY
H04N21/234
ELECTRICITY
H04N21/2343
ELECTRICITY
Abstract
Audio video synchronization and alignment, or alignment of audio to some other external clock, are rendered more effective or easier by treating the fragment grid and the frame grid as independent values while nevertheless, for each fragment, aligning the frame grid to the respective fragment's beginning. A loss in compression effectiveness may be kept low by appropriately selecting the fragment size. On the other hand, the alignment of the frame grid with respect to the fragments' beginnings allows for an easy and fragment-synchronized way of handling the fragments in connection with, for example, parallel audio video streaming, bitrate adaptive streaming or the like.
Claims
1.-21. (canceled)
22. A decoder for decoding audio content from an encoded data stream, wherein the encoded data stream comprises encoded representations of temporal fragments of the audio content, each of which has encoded thereinto a respective temporal fragment of the audio content in units of audio frames temporally aligned to a beginning of the respective temporal fragment so that the beginning of the respective temporal fragment coincides with a beginning of a first audio frame of the audio frames, wherein the decoder is configured to decode reconstructed versions of the temporal fragments of the audio content from the encoded representations of the temporal fragments; and join, for playout, the reconstructed versions of the temporal fragments of the audio content together by truncating the reconstructed version of a predetermined temporal fragment at a portion of a trailing audio frame of the audio frames in units of which the predetermined temporal fragment is coded into the encoded representation of the predetermined temporal fragment, which temporally exceeds a trailing end of the predetermined temporal fragment, determining the portion of the trailing audio frame on the basis of truncation information in the encoded data stream, wherein the truncation information comprises a frame length value indicating a temporal length of the audio frames in units of which the predetermined temporal fragment is coded into the encoded representation of the predetermined temporal fragment, and a fragment length value indicating a temporal length of the predetermined temporal fragment from the beginning of the reconstructed version of the predetermined fragment to the fragment boundary with which the beginning of the reconstructed version of the succeeding temporal fragment coincides, and/or a truncation length value indicating a temporal length of the portion of the trailing audio frame or the difference between the temporal length of the portion of the trailing audio frame and the 
temporal length of the trailing audio frame.
23. The decoder according to claim 22, configured to, in decoding a further predetermined temporal fragment from the encoded representation of the further predetermined temporal fragment, generate the reconstructed version of the further predetermined temporal fragment within a portion of a trailing audio frame of the audio frames in units of which the further predetermined temporal fragment is coded into the encoded representation of the further predetermined temporal fragment, which portion extends from a leading end of the trailing audio frame up to the fragment boundary at which a reconstructed version of a succeeding temporal fragment abuts, by flushing internal decoder states as manifesting themselves up to an audio frame immediately preceding the trailing audio frame.
24. The decoder according to claim 22, configured to derive immediate playout information from the encoded representation of an even further predetermined temporal fragment, the immediate playout information being related to the audio content at one or more pre-roll audio frames of the audio content which temporally precede a beginning of the even further predetermined temporal fragment, and use the immediate playout information so as to reconstruct the audio content at one or more audio frames of the even further predetermined temporal fragment immediately succeeding the beginning of the even further predetermined temporal fragment.
25. The decoder according to claim 24, configured such that the immediate playout information is a reconstruction of the audio content at the one or more pre-roll audio frames.
26. The decoder according to claim 24, configured to use the immediate playout information in reconstructing the audio content at the one or more audio frames of the even further predetermined temporal fragment immediately succeeding the beginning of the even further predetermined temporal fragment for time domain aliasing cancellation.
27. The decoder according to claim 22, configured to decode the audio frames individually using an inverse of a lapped transform causing aliasing and incurring transform windows extending beyond the frames' boundaries.
28. The decoder according to claim 22, configured to, in decoding reconstructed versions of the temporal fragments of the audio content from the encoded representations of the temporal fragments, decode the reconstructed versions of the temporal fragments of the audio content alternately from two decoding cores.
29. An encoder for encoding audio content into an encoded data stream, configured to: fragmentize the audio content into units of temporal fragments, encode each temporal fragment into an encoded representation of the respective temporal fragment in units of audio frames such that for each temporal fragment a beginning of a first audio frame and a beginning of the respective temporal fragment coincide, and wherein the encoded representations of the temporal fragments are comprised by the encoded data stream, wherein the encoder is configured to signal within the encoded data stream a truncation information for identifying a portion of a trailing audio frame of the audio frames in units of which a predetermined temporal fragment is encoded, which trailing audio frame exceeds a trailing end of the predetermined temporal fragment, wherein the truncation information comprises a frame length value indicating the temporal length of the audio frames and a fragment length value indicating the temporal length of the temporal fragments and/or a truncation length value indicating a temporal length of a portion of the trailing audio frame, which exceeds the trailing end of the predetermined temporal fragment, or the difference between the temporal length of the portion of the trailing audio frame and the temporal length of the trailing audio frame.
30. The encoder according to claim 29, configured such that, for a further predetermined temporal fragment, the encoding of the further predetermined temporal fragment into the encoded representation of the further predetermined temporal fragment is ceased at an audio frame immediately preceding a further trailing audio frame, which exceeds a trailing end of the further predetermined temporal fragment.
31. The encoder according to claim 30, configured to signal within the encoded representation of the further predetermined temporal fragment a flush signalization instructing a decoder to fill a portion of the further predetermined temporal fragment covered by the further trailing audio frame on the basis of flushing internal states of the decoder as manifesting themselves up to the audio frame immediately preceding the further trailing audio frame.
32. The encoder according to claim 29, configured to, for an even further predetermined temporal fragment, continue the encoding of the even further predetermined temporal fragment into the encoded representation of the even further predetermined temporal fragment beyond a trailing end of the even further predetermined temporal fragment within an even further trailing audio frame of the audio frames in units of which the even further predetermined temporal fragment is encoded, which exceeds the trailing end of the even further predetermined temporal fragment.
33. The encoder according to claim 32, configured to encode the audio content within a portion of the even further trailing audio frame, which exceeds the trailing end of the even further predetermined temporal fragment, at a lower quality than within the even further predetermined temporal fragment.
34. The encoder according to claim 29, configured to, in encoding an even even further predetermined temporal fragment, derive immediate playout information from one or more pre-roll audio frames of the audio content immediately preceding the first audio frame of the audio frames in units of which the even even further predetermined temporal fragment is encoded into the encoded representation of the even even further predetermined temporal fragment and encode the immediate playout information into the encoded representation of the even even further predetermined temporal fragment.
35. The encoder according to claim 34, configured to use, in encoding each temporal fragment into the encoded representation of the respective temporal fragment, transform coding on the basis of an aliasing introducing lapped transform and derive the immediate playout information by applying the transform coding on the basis of the aliasing introducing lapped transform onto the audio content at the one or more pre-roll audio frames.
36. The encoder according to claim 29, configured to engage, in encoding each temporal fragment into the encoded representation of the respective temporal fragment, two encoding cores alternately with the encoding of the temporal fragments.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:
DETAILED DESCRIPTION OF THE INVENTION
[0030] Before describing various embodiments of the present application, the advantages provided by, and the thoughts underlying, these embodiments are described first. In particular, imagine that an audio content is to be coded so as to accompany a video composed of a sequence of video frames. The problem is as outlined in the introductory portion of the present application: nowadays, audio codecs operate on a sample and frame basis whose frame rate is neither an integer fraction nor an integer multiple of the video frame rate. Accordingly, the embodiments described hereinafter use encoding/decoding stages operating in units of "usual" frames for which they are optimized. On the other hand, the audio content is subjected to the audio codec underlying these encoding/decoding stages in units of temporal fragments which may be one or more, advantageously one to five, or even more advantageously one or two, video frames long. For each such temporal fragment, the frame grid is chosen to be aligned to the beginning of the respective temporal fragment. In other words, the idea underlying the subsequently described embodiments is to produce audio fragments which are exactly as long as the corresponding video frame, with this approach having two benefits:
[0031] 1) The audio encoder may still work on an optimized/native frame duration and does not have to leave its frame grid on fragment boundaries.
[0032] 2) Any audio delay may be compensated by the usage of immediate playout information for the encoded representations of the temporal fragments. Splicing can happen at each fragment boundary. This reduces the overall complexity of the broadcast equipment significantly.
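The per-fragment alignment rule described above can be sketched as follows; the helper name and parameters are illustrative only and do not appear in the patent:

```python
# Sketch of a fragment-aligned frame grid: each fragment restarts its own frame
# grid at the fragment boundary, so the encoder keeps its native frame duration
# while the first frame of every fragment coincides with the fragment's beginning.

def frame_grid(fragment_index, frames_per_fragment, fragment_len, frame_len):
    """Start times (in samples) of the audio frames of one fragment."""
    start = fragment_index * fragment_len              # fragment boundary
    return [start + k * frame_len for k in range(frames_per_fragment)]

# 48 kHz, 1024-sample frames, 1.001 s fragments (48048 samples, 47 frames):
grid = frame_grid(1, 47, 48048, 1024)
assert grid[0] == 48048                # first frame starts at the fragment boundary
assert grid[-1] + 1024 > 2 * 48048     # the trailing frame overhangs the next boundary
```

The overhang of the trailing frame is what the truncation mechanism discussed below removes again at the decoder side.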
[0035] For illustration purposes,
[0037] The last audio frame of each audio fragment, here AU 46, is for example truncated to match the fragment duration. In the given example, the last audio frame reaches from sample 47104 to sample 48127, wherein a zero-based numbering has been chosen, i.e. the first audio sample in the fragment is numbered zero. This leads to a coded fragment size, in samples, which is slightly longer than needed, namely 48128 instead of 48048 samples. Therefore, the last frame is cut right after its 944th sample. This can be accomplished by using, for example, an edit list contained, for example, in the header data 24 or in the configuration data 26. The truncated part 16 can be encoded with lower quality, for example. Alternatively, there would be the possibility of not transmitting all audio frames 12, but leaving out, for example, the coding of the last frame, here exemplarily AU 46, since the decoder can normally be flushed, depending on the audio configuration.
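The sample arithmetic of this example can be reproduced directly; the variable names below are illustrative:

```python
import math

# Truncation arithmetic for the example above: 48 kHz sampling rate,
# 1024-sample frames, 1.001 s fragment = 48048 samples.
frame_len    = 1024
fragment_len = 48048                                  # 1.001 s at 48 kHz

n_frames = math.ceil(fragment_len / frame_len)        # 47 frames: AU 0 .. AU 46
covered  = n_frames * frame_len                       # 48128 samples, slightly too long
last_frame_start = (n_frames - 1) * frame_len         # sample 47104 (zero-based)
keep     = fragment_len - last_frame_start            # keep the first 944 samples of AU 46
truncate = covered - fragment_len                     # drop the 80-sample overhang

assert (n_frames, covered, last_frame_start, keep, truncate) == (47, 48128, 47104, 944, 80)
```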
[0038] In the embodiments described further below, it will be shown that measures may be taken to counteract the problem that the decoder, which operates, for example, with an overlapping window function, will lose its history and is not able to produce a full signal for the first frame of the following fragment. For that reason, the first frame, in
TABLE-US-00001
TABLE 1: Bitrate overhead (worst-case)

                       No SBR     SBR 2:1     No SBR    SBR 2:1     No SBR      SBR 2:1
                       (1 sec)    (1 sec)     (2 sec)   (2 sec)     (0.5 sec)   (0.5 sec)
Fragment size (sec):   1.001      1.001       2.002     2.002       0.5005      0.5005
Frame size (samples):  1024       2048        1024      2048        1024        2048
Sampling rate (Hz):    48000      48000       48000     48000       48000       48000
Pre-roll (AUs):        5          3           5         3           5           3
Normal AUs/fragment:   46.921875  23.4609375  93.84375  46.921875   23.4609375  11.73046875
Aligned AUs/fragment:  52         27          99        50          29          15
Overhead:              10.80%     15.10%      5.50%     6.60%       23.60%      27.90%
[0039] The above table gives an example of the expected bitrate overhead if no optimization were applied. It can be seen that the overhead depends strongly on the fragment duration T.sub.fragment used. Depending on the broadcaster's requirements, it is feasible to align only every second or third fragment, i.e. to choose longer audio fragments.
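The table entries follow from a simple model, sketched below with illustrative names: each fragment carries a whole number of frames (the fractional count rounded up) plus the pre-roll access units, and the overhead is the relative excess over the fractional count:

```python
import math

def overhead(fragment_sec, frame_size, sample_rate=48000, preroll_aus=5):
    """Worst-case bitrate overhead of fragment-aligned coding without
    optimization: ceil() full frames per fragment plus pre-roll AUs."""
    normal = fragment_sec * sample_rate / frame_size   # fractional AUs per fragment
    aligned = math.ceil(normal) + preroll_aus          # whole AUs actually transmitted
    return normal, aligned, aligned / normal - 1.0

# Reproduce two columns of Table 1:
n, a, o = overhead(1.001, 1024, preroll_aus=5)         # no SBR, 1 s fragments
assert (round(n, 6), a, round(o * 100, 1)) == (46.921875, 52, 10.8)
n, a, o = overhead(2.002, 2048, preroll_aus=3)         # SBR 2:1, 2 s fragments
assert (round(n, 6), a, round(o * 100, 1)) == (46.921875, 50, 6.6)
```

The model also makes the table's trend explicit: doubling the fragment duration roughly halves the relative overhead, since the constant pre-roll and rounding cost is spread over more frames.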
[0041] Before describing in detail the functionality of the encoder of
[0042] The decoder 60 further comprises a decoding stage 64 configured to decode reconstructed versions 66 of the temporal fragments 10 from the encoded representations 40. That is, decoding stage 64 outputs, for each encoded representation 40, a reconstructed version 66 of the audio content as covered by the temporal fragment 10 to which the respective encoded representation 40 belongs.
[0043] The decoder 60 further comprises a joiner 68 configured to join, for playout, the reconstructed versions 66 of the temporal fragments 10 together, inherently aligning the beginnings of the reconstructed versions 66 of the temporal fragments so as to coincide with the fragment boundaries of the fragment grid, i.e. with the beginnings 30 of the fragment grid, as the individual frame grids of the fragments 10 are registered thereto.
[0044] Thus, encoder 20 and decoder 60 of
[0045] In the following, the possibility is discussed that the encoding stage 36 also attends to encoding the trailing frame 12a into the corresponding encoded representation 40, and that the decoder attends to a truncation of the corresponding overhanging portion of the reconstructed version 66. In particular, in accordance with this example, the encoding stage 36 and the fragment provider 38 may cooperate such that, for a current temporal fragment 10, the encoding of this temporal fragment 10 into the encoded representation 40 is continued beyond the trailing end 70 of the current temporal fragment 10 as far as the trailing frame 12a is concerned. That is, the encoding stage 36 also encodes the overhanging portion 16 of the audio content into the encoded representation 40. In doing so, however, the encoding stage 36 may shift the bitrate spent for encoding this trailing frame 12a into the encoded representation 40 from the overhanging portion 16 to the remaining portion of trailing frame 12a, i.e. the portion temporally overlapping with the current temporal fragment 10. For example, the encoding stage 36 may lower the quality at which the overhanging portion 16 is coded into the encoded representation 40 compared to the quality at which the other portion of trailing frame 12a is coded into the encoded representation 40, namely the one belonging to the current temporal fragment 10. In that case, the decoding stage 64 would accordingly decode from this encoded representation 40 a reconstructed version 66 of the corresponding temporal fragment 10 which temporally exceeds the temporal length of the temporal fragment 10, namely as far as the overhanging portion 16 of the trailing frame 12a is concerned. The joiner 68, in aligning the reconstructed version 66 with the fragmentation grid, i.e. with the fragments' beginnings 30, would truncate the reconstructed version 66 at the overhanging portion 16.
That is, joiner 68 would disregard this portion 16 of the reconstructed version 66 in playout. The fact that this portion 16 might have been coded at lower quality, as explained above, is accordingly transparent to the listener of the reconstructed audio content 31′, which is the result of the joining of the reconstructed versions 66 at the output of joiner 68, as this portion is replaced, in playout, by the beginning of the reconstructed version of the next temporal fragment 10.
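The joiner's truncation behavior can be sketched as follows; the function and variable names are hypothetical, not from the patent:

```python
# Sketch of the joiner 68: each decoded fragment may include the overhanging
# part 16 of its trailing frame; the joiner truncates each reconstruction to
# the fragment grid before concatenating for playout, so a possibly
# lower-quality overhang is never audible.

def join_fragments(reconstructed, fragment_len):
    """Concatenate decoded fragments, truncating each to the fragment grid."""
    out = []
    for samples in reconstructed:
        out.extend(samples[:fragment_len])     # drop the overhanging portion
    return out

# Two fragments decoded as 47 full 1024-sample frames (48128 samples each),
# truncated to the 48048-sample fragment length:
frag = list(range(47 * 1024))
joined = join_fragments([frag, frag], 48048)
assert len(joined) == 2 * 48048
```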
[0046] Alternatively, the encoder 20 may be operative to leave out the trailing frame 12a in encoding a current temporal fragment 10. Instead, the decoder may attend to filling the non-encoded portion of the temporal fragment 10, namely the one with which the trailing frame 12a partially overlaps, by flushing its internal states as described exemplarily further below. That is, the encoding stage 36 and fragment provider 38 may cooperate such that, for a current temporal fragment 10, the encoding of this temporal fragment into its encoded representation 40 is ceased at the frame 12 immediately preceding the trailing frame 12a. The encoding stage may signal within the encoded representation 40 a flush signalization instructing the decoder to fill the remaining, thus non-encoded, portion of the temporal fragment 10, namely the one which overlaps with the trailing frame 12a, by means of flushing internal states of the decoder as manifesting themselves up to the frame 12 immediately preceding the trailing frame 12a. At the decoder side, the decoding stage 64 may be responsive to this flush signalization so as to, when decoding the corresponding encoded representation 40, generate the reconstructed version 66 of the temporal fragment 10 corresponding to this encoded representation 40 within the portion at which the temporal fragment 10 and the trailing frame 12a overlap, by flushing the internal states of the decoding stage 64 as manifesting themselves up to the frame 12 immediately preceding the trailing frame 12a.
[0047] In order to illustrate the flushing procedure in more detail, reference is made to
[0048] In
[0049] Different possibilities exist with respect to the manner in which the decoder 60 is informed of the size of the overhanging portion 16. For example, the data stream 34 may convey truncation information related to this size, the truncation information comprising a frame length value and a fragment length value. The frame length value could indicate T.sub.frame and the fragment length value T.sub.fragment. Another possibility would be a truncation length value indicating the temporal length of the overhanging portion 16 itself, or the temporal length of the portion at which the temporal fragment 10 and the trailing frame 12a temporally overlap. In order to allow immediate playout of the reconstructed version 66 of each temporal fragment 10, the encoding stage 36 and fragment provider 38 may cooperate so that, for each temporal fragment 10, the encoded representation 40 is also provided with immediate playout information which relates to the portion 46 temporally preceding the respective temporal fragment 10. For example, imagine that the lapped transform referred to in
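The two signaling variants of the truncation information can be sketched as follows; the function names are illustrative and the arithmetic assumes the 48 kHz example used throughout:

```python
import math

def overhang_from_lengths(t_frame, t_fragment):
    """Variant 1: the decoder derives the overhang from a frame length value
    plus a fragment length value signaled in the data stream."""
    n_frames = math.ceil(t_fragment / t_frame)
    return n_frames * t_frame - t_fragment

def overhang_from_truncation(t_frame, truncation_len, is_kept_part=False):
    """Variant 2: a truncation length value signals either the overhang itself
    or the kept part, i.e. the difference to the trailing frame's length."""
    return t_frame - truncation_len if is_kept_part else truncation_len

# 1024-sample frames, 48048-sample fragment -> 80-sample overhang,
# equivalently signaled as the 944 kept samples of the trailing frame:
assert overhang_from_lengths(1024, 48048) == 80
assert overhang_from_truncation(1024, 80) == 80
assert overhang_from_truncation(1024, 944, is_kept_part=True) == 80
```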
[0050] Although it has not been discussed in more detail above, it is noted that encoding stage 36 and/or decoding stage 64 could be composed of two or even more cores. For example,
[0051] Thus, in accordance with the embodiment of
[0052] The encoder is aware of the exact fragment duration. As explained above, in accordance with an embodiment, the overlapping audio part 16 may be encoded twice with different frame grids.
[0053] A brief statement is made with respect to the "self-contained manner" in which the individual temporal fragments 10 are coded into their encoded representations 40. Although this self-contained manner could also pertain to configuration data, such as rarely changing coding parameters like the number of encoded audio channels, so that each encoded representation 40 could comprise this configuration data, it would alternatively be possible to convey such rarely changing data, i.e. the configuration data, to the decoding side out of band rather than within each encoded representation 40. Instead of being included in each encoded representation, the configuration data may be transmitted in another transport layer. For example, the configuration may be transmitted in the initialization segment, and the IPF frame 12b of each temporal fragment could be freed from carrying the configuration data.
[0054] As far as the decoding side is concerned, the above description of
[0055] Finally,
[0056] Thus, the above embodiments allow the delivery of audio and video content over a transmission channel with either fixed or variable bitrate and allow, in particular, audio video synchronization and enable advanced use cases such as splicing. As mentioned above, the encoded data stream as encoded above may also render easier a synchronization with other clocks, such as clocks prescribed by other media signals. The encoders described above allow for an adaptation of an existing audio frame length. The length of the temporal fragments may be set depending on the application's needs. The encoder embodiments form the encoded data stream in tranches of encoded representations of the temporal fragments, which may, for instance, but not exclusively, be made the subject of adaptive streaming by using these fragments as the fragments of a media representation. That is, the coded data stream, composed of the resulting fragments, may be offered to a client by a server via an adaptive streaming protocol, and the client may retrieve the data stream fragments, possibly with an ad inserted thereinto, via the protocol and forward same to the decoder for decoding. But this is not mandatory. Rather, splicing may advantageously be effected by the formation of the inventive encoded data stream even in other application scenarios. The above described embodiments may be implemented or used in connection with the MPEG-H audio codec, with the audio frames being MPEG-H audio frames, but the above embodiments are not restricted to the usage of this codec and may be adapted to all (modern) audio codecs.
[0057] Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like, for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.
[0058] The inventive spliced or splicable audio data streams can be stored on a digital storage medium or can be transmitted on a transmission medium such as a wireless transmission medium or a wired transmission medium such as the Internet.
[0059] Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.
[0060] Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
[0061] Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.
[0062] Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
[0063] In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
[0064] A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory.
[0065] A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
[0066] A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
[0067] A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
[0068] A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.
[0069] In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are advantageously performed by any hardware apparatus.
[0070] The apparatus described herein may be implemented using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.
[0071] The methods described herein may be performed using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.
[0072] While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.
Definitions and Abbreviations
AAC Advanced Audio Coding
ATSC Advanced Television Systems Committee
AU Audio Access Unit
DASH Dynamic Adaptive Streaming over HTTP
DVB Digital Video Broadcasting
IPF Instantaneous Playout Frame
MPD Media Presentation Description
MPEG Moving Picture Experts Group
MMT MPEG Media Transport
NTSC National Television Systems Committee
PAL Phase Alternating Line
REFERENCES
[1] "Delivery/Sync/FEC-Evaluation Criteria Report", ROUTE/DASH
[2] ISO/IEC 23008-3, "Information technology—High efficiency coding and media delivery in heterogeneous environments—Part 3: 3D audio"
[3] ISO/IEC 23009-1, "Information technology—Dynamic adaptive streaming over HTTP (DASH)—Part 1: Media presentation description and segment formats"
[4] ISO/IEC 23008-1, "Information technology—High efficiency coding and media delivery in heterogeneous environments—Part 1: MPEG media transport (MMT)"