Concept for audio encoding and decoding for audio channels and audio objects
11227616 · 2022-01-18
Inventors
- Alexander Adami (Gundelsheim, DE)
- Christian Borss (Erlangen, DE)
- Sascha Dick (Nuremberg, DE)
- Christian Ertel (Eckental, DE)
- Simone Neukam (Kalchreuth, DE)
- Juergen Herre (Erlangen, DE)
- Johannes Hilpert (Nuremberg, DE)
- Andreas Hoelzer (Erlangen, DE)
- Michael Kratschmer (Fuerth, DE)
- Fabian Kuech (Erlangen, DE)
- Achim Kuntz (Hemhofen, DE)
- Adrian Murtaza (Craiova, RO)
- Jan Plogsties (Fuerth, DE)
- Andreas Silzle (Buckenhof, DE)
- Hanne Stenzel (Fuerth, DE)
CPC classification
- H04S2400/03
- G10L19/20
- H04S2400/11
- G10L19/008
- G10L19/22
- H04S3/008
International classification
- G10L19/20
- H04S3/00
- G10L19/22
- G10L19/008
Abstract
Audio encoder for encoding audio input data to obtain audio output data includes an input interface for receiving a plurality of audio channels, a plurality of audio objects and metadata related to one or more of the plurality of audio objects; a mixer for mixing the plurality of objects and the plurality of channels to obtain a plurality of pre-mixed channels, each pre-mixed channel including audio data of a channel and audio data of at least one object; a core encoder for core encoding core encoder input data; and a metadata compressor for compressing the metadata related to the one or more of the plurality of audio objects, wherein the audio encoder is configured to operate in at least one mode of the group of two modes.
Claims
1. An audio decoder for decoding encoded audio data, comprising: an input interface configured for receiving the encoded audio data, the encoded audio data comprising either a plurality of encoded audio channels and a plurality of encoded audio objects and compressed metadata related to the plurality of encoded audio objects, or a plurality of encoded audio channels without any encoded audio objects; a mode controller configured for analyzing the encoded audio data to determine whether the encoded audio data comprise either a plurality of encoded audio channels and a plurality of encoded audio objects and compressed metadata related to the plurality of encoded audio objects, or a plurality of encoded audio channels without any encoded audio objects; a core decoder configured for either decoding the plurality of encoded audio channels received by the input interface to obtain decoded audio channels and decoding the plurality of encoded audio objects received by the input interface to obtain decoded audio objects, when the encoded audio data comprises the plurality of encoded audio channels and the plurality of encoded audio objects and the compressed metadata related to the plurality of encoded audio objects, or decoding the plurality of encoded audio channels received by the input interface to obtain decoded audio channels, when the encoded audio data comprises the plurality of encoded audio channels without any encoded audio objects; a metadata decompressor configured for decompressing the compressed metadata to obtain decompressed metadata, when the encoded audio data comprises the plurality of encoded audio channels and the plurality of encoded audio objects and the compressed metadata related to the plurality of encoded audio objects; an object processor configured for processing the decoded audio objects using the decompressed metadata and the decoded audio channels to acquire a number of output audio channels comprising audio data from the decoded audio 
objects and the decoded audio channels, when the encoded audio data comprises the plurality of encoded audio channels and the plurality of encoded audio objects and the compressed metadata related to the plurality of encoded audio objects; a post processor configured for post processing the number of output audio channels to obtain an output format, wherein the mode controller is configured for controlling the audio decoder to either bypass the object processor and to feed the decoded audio channels as the output audio channels into the post processor, when the encoded audio data comprises the plurality of encoded audio channels without any encoded audio objects, or to feed the decoded audio objects and the decoded audio channels into the object processor, when the encoded audio data comprise the plurality of encoded audio channels and the plurality of encoded audio objects and the compressed metadata related to the plurality of encoded audio objects.
2. The audio decoder of claim 1, wherein the post processor is configured for converting the number of output audio channels to a binaural representation as the output format or to a reproduction format as the output format, the reproduction format comprising a smaller number of reproduction audio channels than the number of output audio channels, and wherein the audio decoder is configured for controlling the post processor in accordance with a control input derived from a user interface or extracted from the encoded audio data received by the input interface.
3. The audio decoder of claim 1, in which the object processor comprises: an object renderer configured for rendering the decoded audio objects using the decompressed metadata to obtain rendered audio objects; and a mixer configured for mixing the rendered audio objects and the decoded audio channels to acquire the number of output audio channels.
4. The audio decoder of claim 1, wherein the plurality of encoded audio objects comprises one or more core encoded transport channels and associated parametric side information, wherein the core decoder is configured to decode the one or more core encoded transport channels to obtain the decoded audio objects comprising one or more core decoded transport channels and the associated parametric side information, wherein the object processor comprises a spatial audio object coding decoder configured for decoding the one or more core decoded transport channels and the associated parametric side information to obtain spatial audio object decoded audio objects, wherein the spatial audio object coding decoder is configured for rendering the spatial audio object decoded audio objects in accordance with rendering information related to a placement of the spatial audio object decoded audio objects to obtain rendered audio objects, and wherein the object processor is configured for mixing the rendered audio objects and the decoded audio channels to acquire the number of output audio channels.
5. The audio decoder of claim 1, wherein the plurality of encoded audio objects comprises one or more core encoded transport channels and associated parametric side information representing the plurality of encoded audio objects, wherein the core decoder is configured to decode the one or more core encoded transport channels to obtain the decoded audio objects comprising one or more core decoded transport channels and the associated parametric side information, wherein the spatial audio object coding decoder is configured for transcoding the associated parametric side information and the decompressed metadata into transcoded parametric side information usable for directly rendering the output format, and wherein the post processor is configured for calculating output format audio channels of the output format using the one or more core decoded transport channels and the transcoded parametric side information.
6. The audio decoder of claim 1, wherein the plurality of encoded audio objects comprises one or more core encoded transport channels and associated parametric data, wherein the core decoder is configured to decode the one or more core encoded transport channels to obtain one or more core decoded transport channels, wherein the object processor comprises a spatial audio object coding decoder configured for decoding the one or more core decoded transport channels outputted by the core decoder and the associated parametric data and the decompressed metadata to acquire a plurality of spatial audio object rendered audio objects, wherein the object processor comprises an object renderer configured for rendering the decoded audio objects outputted by the core decoder to obtain rendered decoded audio objects; wherein the object processor comprises a mixer for mixing the rendered decoded audio objects, the spatial audio object rendered audio objects, and the decoded audio channels to obtain mixer output audio channels, wherein the audio decoder further comprises an output interface configured for outputting the mixer output audio channels to loudspeakers, wherein the post processor furthermore comprises: a binaural renderer configured for rendering the mixer output audio channels into two binaural channels as the output format using head related transfer functions or binaural impulse responses, or a format converter configured for converting the mixer output audio channels into an output channel representation, as the output format, the output channel representation comprising a lower number of audio channels than the mixer output audio channels using information on a reproduction layout.
7. The audio decoder of claim 6, wherein certain elements comprising the binaural renderer, the format converter, the mixer, the spatial audio object coding decoder, the core decoder, and the object renderer operate in a quadrature mirror filterbank domain, and wherein data in the quadrature mirror filterbank domain are transmitted from one of the certain elements to another one of the certain elements without any synthesis filterbank and subsequent analysis filterbank processing.
8. The audio decoder of claim 1, wherein the plurality of encoded audio channels are encoded as audio channel pair elements, audio single channel elements, audio low frequency elements or audio quad channel elements, wherein an audio quad channel element comprises four encoded audio channels of the plurality of encoded audio channels, or wherein the plurality of encoded audio objects are encoded as audio channel pair elements, audio single channel elements, audio low frequency elements or audio quad channel elements, wherein an audio quad channel element comprises four encoded audio objects of the plurality of encoded objects, and wherein the core decoder is configured for decoding the audio channel pair elements, the audio single channel elements, the audio low frequency elements or the audio quad channel elements in accordance with side information comprised in the encoded audio data indicating the audio channel pair element, the audio single channel element, the audio low frequency element or the audio quad channel element.
9. The audio decoder of claim 1, wherein the core decoder is configured for applying a full-band decoding operation using a noise filling operation without a spectral band replication operation.
10. The audio decoder of claim 1, wherein the post processor is configured for downmixing the number of output audio channels to an intermediate format, the intermediate format comprising intermediate audio channels, a number of the intermediate audio channels being three or more and lower than the number of output audio channels, and for binaurally rendering the intermediate audio channels into a two-channel binaural output signal as the output format.
11. The audio decoder of claim 1, in which the post processor comprises: a controlled downmixer configured for applying a specific downmix matrix to the number of output audio channels; and a controller configured for determining the specific downmix matrix using information on a channel configuration of the number of output audio channels and information on an intended reproduction layout.
12. The audio decoder of claim 1, in which the core decoder is configured for performing a transform decoding and a spectral band replication decoding for a single channel element included in the encoded audio data, the single channel element comprising an encoded audio channel of the plurality of encoded audio channels or comprising an encoded audio object of the plurality of encoded audio objects, and performing the transform decoding, a parametric stereo decoding and the spectral band replication decoding for a channel pair element included in the encoded audio data, the channel pair element comprising a pair of encoded audio channels of the plurality of encoded audio channels or comprising a pair of encoded audio objects of the plurality of encoded audio objects, and performing the transform decoding, the parametric stereo decoding and the spectral band replication decoding for a quad channel element included in the encoded audio data, the quad channel element comprising four encoded audio channels of the plurality of encoded audio channels or comprising four encoded audio objects of the plurality of encoded audio objects.
13. A method of decoding encoded audio data, comprising: receiving the encoded audio data, the encoded audio data comprising either a plurality of encoded audio channels and a plurality of encoded audio objects and compressed metadata related to the plurality of encoded audio objects, or a plurality of encoded audio channels without any encoded audio objects; analyzing the encoded audio data to determine whether the encoded audio data comprise either a plurality of encoded audio channels and a plurality of encoded audio objects and compressed metadata related to the plurality of encoded audio objects, or a plurality of encoded audio channels without any encoded audio objects; core decoding either the encoded audio data comprising the plurality of encoded audio channels and the plurality of encoded audio objects to obtain decoded audio channels and decoded audio objects when the encoded audio data comprises the plurality of encoded audio channels and the plurality of encoded audio objects and the compressed metadata related to the plurality of encoded audio objects, or the plurality of encoded audio channels to obtain decoded audio channels, when the encoded audio data comprises the plurality of encoded audio channels without any encoded audio objects; decompressing the compressed metadata to obtain decompressed metadata, when the encoded audio data comprises the plurality of encoded audio channels and the plurality of encoded audio objects and the compressed metadata related to the plurality of encoded audio objects; processing the decoded audio objects using the decompressed metadata and the decoded audio channels to acquire a number of output audio channels comprising audio data from the decoded audio objects and the decoded audio channels, when the encoded audio data comprises the plurality of encoded audio channels and the plurality of encoded audio objects and the compressed metadata related to the plurality of encoded audio objects; and post processing the number of
output audio channels to obtain an output format, wherein the method of decoding the encoded audio data is controlled in response to the analyzing the encoded audio data so that either the processing the decoded audio objects is bypassed and the decoded audio channels obtained by the core decoding are fed, as the output audio channels, into the post processing, when the encoded audio data comprises the plurality of encoded audio channels without any encoded audio objects, or the decoded audio objects and the decoded audio channels obtained by the core decoding are fed into the processing the decoded audio objects, when the encoded audio data comprise the plurality of encoded audio channels and the plurality of encoded audio objects and the compressed metadata related to the plurality of encoded audio objects.
14. A non-transitory digital storage medium having a computer program stored thereon to perform the method of claim 13.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Embodiments of the present invention will be detailed subsequently referring to the appended drawings.
DETAILED DESCRIPTION OF THE INVENTION
(14) Furthermore, the encoder comprises a core encoder 300 for core encoding core encoder input data and a metadata compressor 400 for compressing the metadata related to the one or more of the plurality of audio objects. Furthermore, the encoder can comprise a mode controller 600 for controlling the mixer, the core encoder and/or an output interface 500 in one of several operation modes, wherein, in the first mode, the core encoder is configured to encode the plurality of audio channels and the plurality of audio objects received by the input interface 100 without any interaction by the mixer, i.e., without any mixing by the mixer 200. In a second mode, however, in which the mixer 200 is active, the core encoder encodes the plurality of mixed channels, i.e., the output generated by block 200. In this latter case, it is advantageous to not encode any object data anymore. Instead, the metadata indicating the positions of the audio objects is already used by the mixer 200 to render the objects onto the channels as indicated by the metadata. In other words, the mixer 200 uses the metadata related to the plurality of audio objects to pre-render the audio objects, and the pre-rendered audio objects are then mixed with the channels to obtain mixed channels at the output of the mixer. In this embodiment, objects are not necessarily transmitted at all, and the same applies to the compressed metadata output by block 400. However, if not all objects input into the interface 100 are mixed but only a certain number of objects is mixed, then the remaining non-mixed objects and the associated metadata are nevertheless transmitted to the core encoder 300 or the metadata compressor 400, respectively.
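The pre-rendering described above can be sketched numerically. The following is a minimal illustration, not the patented implementation: the function names are hypothetical, and the panning gains are assumed to have already been derived from the object position metadata by some renderer.

```python
import numpy as np

def pre_render_objects(channels, objects, object_gains):
    """Mix audio objects into the channel bed (sketch of mixer 200).

    channels:     (n_ch, n_samples) channel bed signals
    objects:      (n_obj, n_samples) object waveforms
    object_gains: (n_obj, n_ch) panning gains, assumed to come from the
                  object position metadata (OAM); their derivation is
                  left open here
    """
    channels = np.asarray(channels, dtype=float)
    objects = np.asarray(objects, dtype=float)
    gains = np.asarray(object_gains, dtype=float)
    # Each pre-mixed channel = original channel + sum of gain-weighted objects.
    return channels + gains.T @ objects

def encode(channels, objects, object_gains, mode):
    """Mode 1: pass channels and objects separately to the core encoder.
    Mode 2: pre-render objects onto the channels; no object data is kept."""
    if mode == 1:
        return {"channels": channels, "objects": objects}
    return {"channels": pre_render_objects(channels, objects, object_gains)}
```

In mode 2 the returned structure carries no object entry at all, mirroring the text's point that neither object waveforms nor compressed metadata need to be transmitted once pre-rendering has taken place.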
(19) In addition to the first and the second modes as discussed above, the encoder can operate in further modes in which the SAOC encoder 800 is used.
(20) Finally, the SAOC encoder 800 can encode, when the encoder is configured in the fourth mode, the channels plus pre-rendered objects as generated by the pre-renderer/mixer. Thus, in the fourth mode the lowest bit rate applications will provide good quality, because the channels and objects have completely been transformed into individual SAOC transport channels and associated side information.
(22) The decoder comprises a metadata decompressor 1400, a core decoder 1300, an object processor 1200, a mode controller 1600 and a postprocessor 1700.
(23) Specifically, the audio decoder is configured for decoding encoded audio data and the input interface is configured for receiving the encoded audio data, the encoded audio data comprising a plurality of encoded channels and the plurality of encoded objects and compressed metadata related to the plurality of objects in a certain mode.
(24) Furthermore, the core decoder 1300 is configured for decoding the plurality of encoded channels and the plurality of encoded objects and, additionally, the metadata decompressor is configured for decompressing the compressed metadata.
(25) Furthermore, the object processor 1200 is configured for processing the plurality of decoded objects as generated by the core decoder 1300 using the decompressed metadata to obtain a predetermined number of output channels comprising object data and the decoded channels. These output channels, as indicated at 1205, are then input into a postprocessor 1700. The postprocessor 1700 is configured for converting the number of output channels 1205 into a certain output format, which can be a binaural output format or a loudspeaker output format such as a 5.1 or 7.1 format.
(26) Advantageously, the decoder comprises a mode controller 1600 which is configured for analyzing the encoded data to detect a mode indication. Therefore, the mode controller 1600 is connected to the input interface 1100.
(27) Advantageously, the indication whether mode 1 or mode 2 is to be applied is included in the encoded audio data, and the mode controller 1600 then analyses the encoded data to detect a mode indication. Mode 1 is used when the mode indication indicates that the encoded audio data comprises encoded channels and encoded objects, and mode 2 is applied when the mode indication indicates that the encoded audio data does not contain any audio objects, i.e., only contains pre-rendered channels obtained by the second mode of the encoder.
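The mode decision can be illustrated with a small control-flow sketch. The dictionary keys and the stub functions below are invented for the example and do not reflect the actual bitstream syntax or the real processing blocks:

```python
# Trivial stand-ins for the real processing blocks (names are illustrative).
def core_decode(data):          return data
def decompress_metadata(cmeta): return cmeta
def object_process(objs, meta, chans):
    # Mix gain-weighted objects into every channel (toy stand-in).
    return [c + sum(o * g for o, g in zip(objs, meta)) for c in chans]
def post_process(chans):        return chans

def decode(encoded):
    """Mode control: bypass the object processor when the stream carries
    pre-rendered channels without any objects (mode 2)."""
    channels = core_decode(encoded["channels"])
    if "objects" not in encoded:                 # mode 2: pre-rendered channels
        output_channels = channels               # object processor bypassed
    else:                                        # mode 1: channels + objects
        objects = core_decode(encoded["objects"])
        metadata = decompress_metadata(encoded["metadata"])
        output_channels = object_process(objects, metadata, channels)
    return post_process(output_channels)
```

The presence of object data in the parsed stream plays the role of the mode indication here; in the real system the mode controller detects an explicit indication and steers the signal flow accordingly.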
(29) Furthermore, the postprocessor 1700 can be implemented as a binaural renderer 1710 or a format converter 1720. Alternatively, a direct output of the data 1205 is possible as well.
(30) In an embodiment of the present invention, the object processor 1200 comprises the SAOC decoder 1800, and the SAOC decoder is configured for decoding one or more transport channels output by the core decoder and associated parametric data, using decompressed metadata, to obtain the plurality of rendered audio objects. To this end, the OAM output is connected to box 1800.
(31) Furthermore, the object processor 1200 is configured to render decoded objects output by the core decoder which are not encoded in SAOC transport channels but which are individually encoded, typically in single channel elements, as indicated by the object renderer 1210. Furthermore, the decoder comprises an output interface corresponding to the output 1730 for outputting an output of the mixer to the loudspeakers.
(32) In a further embodiment, the object processor 1200 comprises a spatial audio object coding decoder 1800 for decoding one or more transport channels and associated parametric side information representing encoded audio objects or encoded audio channels, wherein the spatial audio object coding decoder is configured to transcode the associated parametric information and the decompressed metadata into transcoded parametric side information usable for directly rendering the output format, as for example defined in an earlier version of SAOC. The postprocessor 1700 is configured for calculating audio channels of the output format using the decoded transport channels and the transcoded parametric side information. The processing performed by the post processor can be similar to the MPEG Surround processing or can be any other processing such as binaural cue coding (BCC) processing.
(33) In a further embodiment, the object processor 1200 comprises a spatial audio object coding decoder 1800 configured to directly upmix and render channel signals for the output format using the transport channels decoded by the core decoder and the parametric side information.
(34) Furthermore, and importantly, the object processor 1200 comprises the mixer 1220.
(35) The mixer 1220 is connected to the output interface 1730, the binaural renderer 1710 and the format converter 1720. The binaural renderer 1710 is configured for rendering the output channels into two binaural channels using head related transfer functions or binaural room impulse responses (BRIRs). The format converter 1720 is configured for converting the output channels into an output format having a lower number of channels than the output channels 1205 of the mixer, and the format converter 1720 may use information on the reproduction layout, such as a 5.1 speaker setup.
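As a rough illustration of the format converter's role, a downmix to fewer channels is simply a matrix applied to the channel signals. The matrix values below are an assumption made for the example (a 4-to-2 downmix with rear channels attenuated by about 3 dB), not taken from the standardized converter:

```python
import numpy as np

def format_convert(output_channels, downmix_matrix):
    """Sketch of format converter 1720: map N mixer output channels to
    M < N reproduction channels via a downmix matrix that, in practice,
    would be derived from the reproduction layout."""
    x = np.asarray(output_channels, dtype=float)   # (n_in, n_samples)
    d = np.asarray(downmix_matrix, dtype=float)    # (n_out, n_in)
    return d @ x

# Illustrative 4 -> 2 downmix: fronts passed through, rears at about -3 dB.
downmix = np.array([[1.0, 0.0, 0.707, 0.0],
                    [0.0, 1.0, 0.0, 0.707]])
```

The standardized format converter determines such a matrix from the channel configuration of its input and the intended reproduction layout; here it is hard-coded only to keep the sketch short.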
(37) Furthermore, a vector base amplitude panning (VBAP) stage 1810 is provided which receives, from the SAOC decoder, information on the reproduction layout and which outputs a rendering matrix to the SAOC decoder, so that the SAOC decoder can, in the end, provide rendered channels in the high channel format of 1205, i.e., 32 loudspeakers, without any further operation of the mixer.
(38) The VBAP block advantageously receives the decoded OAM data to derive the rendering matrices. More generally, it may use geometric information not only of the reproduction layout but also of the positions where the input signals should be rendered on the reproduction layout. This geometric input data can be OAM data for objects or channel position information for channels that have been transmitted using SAOC.
(39) However, if only a specific output format is to be used, then the VBAP stage 1810 can already provide the rendering matrix for, e.g., the 5.1 output. The SAOC decoder 1800 then performs a direct rendering from the SAOC transport channels, the associated parametric data and the decompressed metadata into the output format, without any interaction of the mixer 1220. However, when a certain mix between modes is applied, i.e., where several but not all channels are SAOC encoded, or where several but not all objects are SAOC encoded, or when only a certain number of pre-rendered objects with channels are SAOC decoded and the remaining channels are not SAOC processed, then the mixer will put together the data from the individual input portions, i.e., directly from the core decoder 1300, from the object renderer 1210 and from the SAOC decoder 1800.
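The rendering-matrix idea behind VBAP can be sketched for the simple two-dimensional, two-loudspeaker case. The real stage 1810 works with 3D loudspeaker triplets, so this only shows the core of the math: invert the loudspeaker direction base, apply it to the source direction, and normalize for constant power.

```python
import math

def vbap_2d_gains(source_az, spk_az1, spk_az2):
    """2D vector base amplitude panning (illustrative sketch): solve for
    the gains of a loudspeaker pair so that the gain-weighted sum of the
    speaker direction vectors points at the source, then normalize power."""
    def unit(az_deg):
        a = math.radians(az_deg)
        return (math.cos(a), math.sin(a))
    p = unit(source_az)
    l1, l2 = unit(spk_az1), unit(spk_az2)
    # Invert the 2x2 base matrix [l1 l2] and apply it to the source vector.
    det = l1[0] * l2[1] - l2[0] * l1[1]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (p[1] * l1[0] - p[0] * l1[1]) / det
    norm = math.hypot(g1, g2)          # constant-power normalization
    return g1 / norm, g2 / norm
```

A source exactly at one loudspeaker yields all gain on that speaker; a source midway between a symmetric pair yields equal gains. Collecting such gain pairs over all objects and loudspeaker pairs gives exactly the rendering matrix that the VBAP stage hands to the SAOC decoder.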
(40) Subsequently, the individual coding modes are summarized.
(41) In accordance with the first coding mode, the mixer 200 is bypassed, and the plurality of audio channels and the plurality of audio objects are core encoded without any pre-rendering or mixing.
(42) In the second mode, the mixer 200 is active, and the audio objects are pre-rendered onto the channels using the object metadata so that only the resulting mixed channels are core encoded.
(43) Then, in the third coding mode, the SAOC encoder 800 parametrically encodes the audio objects into SAOC transport channels and associated parametric side information.
(44) In a fourth coding mode, the SAOC encoder 800 encodes the channels plus the pre-rendered objects as generated by the pre-renderer/mixer 200.
(45) Furthermore, a fifth coding mode exists which can be any mix of modes 1 to 4. In particular, a mixed coding mode will exist when the mixer 1220 on the decoder side receives inputs from more than one of the individual processing paths.
(46) Each input portion of the mixer 1220 can then, exemplarily, receive up to the full number of channels, such as 32 as indicated at 1205. Thus, basically, the mixer could receive 32 channels from the USAC decoder and, additionally, 32 pre-rendered/mixed channels from the USAC decoder and, additionally, 32 “channels” from the object renderer and, additionally, 32 “channels” from the SAOC decoder, where each “channel” between blocks 1210 and 1218 on the one hand and block 1220 on the other hand carries the contribution of the corresponding objects in a corresponding loudspeaker channel, and the mixer 1220 then mixes, i.e., adds up the individual contributions for each loudspeaker channel.
(47) In an embodiment of the present invention, the encoding/decoding system is based on an MPEG-D USAC codec for coding of channel and object signals. To increase the efficiency for coding a large amount of objects, MPEG SAOC technology has been adapted. Three types of renderers perform the task of rendering objects to channels, rendering channels to headphones or rendering channels to a different loudspeaker setup. When object signals are explicitly transmitted or parametrically encoded using SAOC, the corresponding object metadata information is compressed and multiplexed into the encoded output data.
(48) In an embodiment, the pre-renderer/mixer 200 is used to convert a channel plus object input scene into a channel scene before encoding. Functionally, it is identical to the object renderer/mixer combination on the decoder side.
(49) As a core encoder/decoder for loudspeaker channel signals, discrete object signals, object downmix signals and pre-rendered signals, USAC technology is advantageous. It handles the coding of the multitude of signals by creating channel and object mapping information (the geometric and semantic information of the input channel and object assignment). This mapping information describes how input channels and objects are mapped to USAC channel elements.
(50) The coding of objects is possible in different ways, depending on the rate/distortion requirements and the interactivity requirements for the renderer. The following object coding variants are possible:
- Prerendered objects: Object signals are prerendered and mixed to the 22.2 channel signals before encoding. The subsequent coding chain sees 22.2 channel signals.
- Discrete object waveforms: Objects are supplied as monophonic waveforms to the encoder. The encoder uses single channel elements (SCEs) to transmit the objects in addition to the channel signals. The decoded objects are rendered and mixed at the receiver side. Compressed object metadata information is transmitted to the receiver/renderer alongside.
- Parametric object waveforms: Object properties and their relation to each other are described by means of SAOC parameters. The down-mix of the object signals is coded with USAC. The parametric information is transmitted alongside. The number of downmix channels is chosen depending on the number of objects and the overall data rate. Compressed object metadata information is transmitted to the SAOC renderer.
(51) The SAOC encoder and decoder for object signals are based on MPEG SAOC technology. The system is capable of recreating, modifying and rendering a number of audio objects based on a smaller number of transmitted channels and additional parametric data (OLDs (Object Level Differences), IOCs (Inter-Object Coherence), DMGs (Downmix Gains)). The additional parametric data exhibits a significantly lower data rate than would be needed for transmitting all objects individually, making the coding very efficient.
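As a simplified, non-normative illustration of what this parametric data captures, object level differences, inter-object coherences and a downmix can be computed for one time/frequency tile as follows (the normative SAOC equations differ in detail, and the downmix gains are fixed to 1 here):

```python
import numpy as np

def saoc_params(objects, eps=1e-12):
    """Illustrative computation of SAOC-style side information for one
    time/frequency tile. objects: (n_obj, n_samples) sub-band signals."""
    x = np.asarray(objects, dtype=float)
    power = np.sum(x * x, axis=1)                 # per-object powers
    old = power / (np.max(power) + eps)           # object level differences
    # Inter-object coherence: normalized cross-correlation matrix.
    cross = x @ x.T
    denom = np.sqrt(np.outer(power, power)) + eps
    ioc = cross / denom
    # Mono downmix with equal downmix gains (DMG = 1 for every object).
    downmix = np.sum(x, axis=0)
    return old, ioc, downmix
```

Only the downmix waveform is core encoded; the OLD/IOC/DMG values travel as low-rate side information, which is why SAOC scales so well with the number of objects.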
(52) The SAOC encoder takes as input the object/channel signals as monophonic waveforms and outputs the parametric information (which is packed into the 3D-Audio bitstream) and the SAOC transport channels (which are encoded using single channel elements and transmitted).
(53) The SAOC decoder reconstructs the object/channel signals from the decoded SAOC transport channels and parametric information, and generates the output audio scene based on the reproduction layout, the decompressed object metadata information and optionally on the user interaction information.
(54) For each object, the associated metadata that specifies the geometrical position and volume of the object in 3D space is efficiently coded by quantization of the object properties in time and space. The compressed object metadata cOAM is transmitted to the receiver as side information. The volume of the object may comprise information on a spatial extent and/or information on the signal level of the audio signal of this audio object.
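The spatial quantization of the object metadata can be sketched as a simple index/step-size scheme. The step sizes below are made up for the example and are not the ones used by the specification; the point is only that coarse indices reconstruct the position within a bounded error:

```python
def compress_oam(azimuth_deg, elevation_deg, radius, gain_db,
                 az_step=1.5, el_step=3.0, rad_step=0.5, gain_step=1.0):
    """Illustrative quantization of object metadata (OAM) to integer
    indices; step sizes are hypothetical example values."""
    q = lambda value, step: int(round(value / step))
    return {"az_idx": q(azimuth_deg, az_step),
            "el_idx": q(elevation_deg, el_step),
            "radius_idx": q(radius, rad_step),
            "gain_idx": q(gain_db, gain_step)}

def decompress_oam(idx, az_step=1.5, el_step=3.0, rad_step=0.5, gain_step=1.0):
    """Reconstruct the object properties from the transmitted indices."""
    return (idx["az_idx"] * az_step, idx["el_idx"] * el_step,
            idx["radius_idx"] * rad_step, idx["gain_idx"] * gain_step)
```

Quantization in time (transmitting OAM only every few frames and interpolating in between) would come on top of this and is omitted here.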
(55) The object renderer utilizes the compressed object metadata to generate object waveforms according to the given reproduction format. Each object is rendered to certain output channels according to its metadata. The output of this block results from the sum of the partial results.
(56) If both channel based content as well as discrete/parametric objects are decoded, the channel based waveforms and the rendered object waveforms are mixed before outputting the resulting waveforms (or before feeding them to a postprocessor module like the binaural renderer or the loudspeaker renderer module).
(57) The binaural renderer module produces a binaural downmix of the multichannel audio material, such that each input channel is represented by a virtual sound source. The processing is conducted frame-wise in QMF (Quadrature Mirror Filterbank) domain.
(58) The binauralization is based on measured binaural room impulse responses.
(62) Advantageously, the “shortcut” as illustrated by control line 1727 comprises controlling the decoder 1300 to decode to a lower number of channels, i.e., skipping the complete OTT processing block in the decoder, or a format conversion to a lower number of channels.
(63) In a further embodiment, an efficient interfacing between processing blocks may be used, particularly by transmitting data between blocks directly in the QMF domain without intermediate synthesis and analysis filterbank steps.
(64) Subsequently, details of the core coder configuration used in embodiments are described.
(65) Furthermore, it is advantageous to perform an enhanced noise filling procedure to enable uncompromised full-band (18 kHz) coding at 1200 kbps.
(66) The encoder has been operated in a ‘constant rate with bit-reservoir’ fashion, using a maximum of 6144 bits per channel as rate buffer for the dynamic data.
(67) All additional payloads like SAOC data or object metadata have been passed through extension elements and have been considered in the encoder's rate control.
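The “constant rate with bit-reservoir” behaviour from the preceding paragraphs can be modelled in a few lines. The control policy below is a generic sketch of how such a reservoir works; only the 6144-bit cap is taken from the text, and the frame budget and demand values are invented for the example:

```python
def frame_bit_budget(mean_bits_per_frame, reservoir, demand,
                     reservoir_max=6144):
    """Toy model of constant-rate coding with a bit reservoir for one
    channel: a frame may spend saved-up bits from the reservoir, but the
    reservoir can never exceed reservoir_max or go negative."""
    available = mean_bits_per_frame + reservoir
    spent = min(demand, available)                 # never overspend
    reservoir = min(available - spent, reservoir_max)  # save the rest, capped
    return spent, reservoir
```

Undemanding frames fill the reservoir; a later demanding frame (a transient, or extra SAOC/metadata payload) can then spend more than the mean rate while the long-term rate stays constant.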
(68) In order to take advantage of the SAOC functionalities also for 3D audio content, the following extensions to MPEG SAOC have been implemented:
- Downmix to an arbitrary number of SAOC transport channels.
- Enhanced rendering to output configurations with a high number of loudspeakers (up to 22.2).
(69) The binaural renderer module produces a binaural downmix of the multichannel audio material, such that each input channel (excluding the LFE channels) is represented by a virtual sound source. The processing is conducted frame-wise in QMF domain.
(70) The binauralization is based on measured binaural room impulse responses. The direct sound and early reflections are imprinted to the audio material via a convolutional approach in a pseudo-FFT domain, using a fast convolution on top of the QMF domain.
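Structurally, the binauralization convolves each input channel with its measured left- and right-ear BRIR and sums the contributions into a two-channel output. The sketch below uses plain time-domain convolution in place of the fast QMF/pseudo-FFT-domain convolution described above, and all names are illustrative:

```python
def convolve(x, h):
    """Plain time-domain convolution (stand-in for the fast
    FFT-domain convolution used in practice)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for m, hm in enumerate(h):
            y[n + m] += xn * hm
    return y

def binauralize(channels, brirs):
    """Binaural downmix: convolve each input channel with its
    (left, right) BRIR pair, covering the direct sound and early
    reflections, and sum all contributions into two ear signals."""
    length = len(channels[0]) + max(max(len(l), len(r)) for l, r in brirs) - 1
    left = [0.0] * length
    right = [0.0] * length
    for ch, (brir_l, brir_r) in zip(channels, brirs):
        for n, v in enumerate(convolve(ch, brir_l)):
            left[n] += v
        for n, v in enumerate(convolve(ch, brir_r)):
            right[n] += v
    return left, right
```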
(71) Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.
(72) Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a non-transitory storage medium such as a digital storage medium, for example a floppy disc, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.
(73) Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
(74) Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may, for example, be stored on a machine readable carrier.
(75) Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
(76) In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
(77) A further embodiment of the inventive method is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory.
(78) A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may, for example, be configured to be transferred via a data communication connection, for example, via the internet.
(79) A further embodiment comprises a processing means, for example, a computer or a programmable logic device, configured to, or adapted to, perform one of the methods described herein.
(80) A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
(81) A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.
(82) In some embodiments, a programmable logic device (for example, a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are advantageously performed by any hardware apparatus.
(83) While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.