Audio encoding and decoding using presentation transform parameters
11798567 · 2023-10-24
Assignee
- Dolby Laboratories Licensing Corporation (San Francisco, CA)
- Dolby International AB (Amsterdam Zuidoost, NL)
Inventors
- Dirk Jeroen Breebaart (Ultimo, AU)
- David Matthew Cooper (Carlton, AU)
- Leif Jonas Samuelsson (Sundbyberg, SE)
- Jeroen Koppens (Nederweert, NL)
- Rhonda J. Wilson (San Francisco, CA)
- Heiko Purnhagen (Sundbyberg, SE)
- Alexander Stahlmann (Bubenreuth, DE)
CPC classification
H04S7/305
ELECTRICITY
H04S2400/03
ELECTRICITY
G10L19/008
PHYSICS
H04S2400/07
ELECTRICITY
International classification
G10L19/008
PHYSICS
Abstract
A method for encoding an input audio stream including the steps of obtaining a first playback stream presentation of the input audio stream intended for reproduction on a first audio reproduction system, obtaining a second playback stream presentation of the input audio stream intended for reproduction on a second audio reproduction system, determining a set of transform parameters suitable for transforming an intermediate playback stream presentation to an approximation of the second playback stream presentation, wherein the transform parameters are determined by minimization of a measure of a difference between the approximation of the second playback stream presentation and the second playback stream presentation, and encoding the first playback stream presentation and the set of transform parameters for transmission to a decoder.
Claims
1. A method of decoding playback stream presentations from a data stream, the method comprising: receiving and decoding a first rendered playback stream presentation, said first rendered playback stream presentation being a set of M1 signals intended for reproduction on a first audio reproduction system; receiving and decoding a set of transform parameters suitable for transforming an intermediate playback stream presentation into an approximation of a second rendered playback stream presentation, said second rendered playback stream presentation being a set of M2 signals intended for reproduction on a second audio reproduction system, wherein the intermediate playback stream presentation is one of the first rendered playback stream presentation, a down-mix of the first rendered playback stream presentation, and an up-mix of the first rendered playback stream presentation, and wherein the approximation of the second rendered playback stream presentation is an anechoic binaural presentation; receiving and decoding one or more additional sets of transform parameters suitable for transforming the intermediate playback stream presentation into one or more acoustic environment simulation process input signals; applying said transform parameters to said intermediate playback stream presentation to produce said approximation of the second rendered playback stream presentation, applying the one or more additional sets of transform parameters to the intermediate playback stream presentation to generate the one or more acoustic environment simulation process input signals; applying the one or more acoustic environment simulation process input signals to one or more acoustic environment simulation processes to produce one or more simulated acoustic environment signals; and combining the one or more simulated acoustic environment signals with the approximation of the second rendered playback stream presentation.
2. The method of claim 1, wherein the one or more simulated acoustic environment signals comprise one or more of: early reflection signals and late reverberation signals.
3. The method of claim 1, wherein the acoustic environment simulation processes comprise one or more of: an early reflection simulation process and a late reverberation simulation process.
4. The method of claim 3, wherein the early reflection simulation process comprises processing one or more of the acoustic environment simulation process input signals through a delay element.
5. The method of claim 3, wherein the late reverberation simulation process comprises processing one or more of the acoustic environment simulation process input signals through a feedback delay network.
6. A device for decoding playback stream presentations from a data stream, the device having one or more audio components, the device comprising: one or more processors; and a memory storing instructions that, when executed, cause the one or more processors to perform operations comprising: receiving and decoding a first rendered playback stream presentation, said first rendered playback stream presentation being a set of M1 signals intended for reproduction on a first audio reproduction system; receiving and decoding a set of transform parameters suitable for transforming an intermediate playback stream presentation into an approximation of a second rendered playback stream presentation, said second rendered playback stream presentation being a set of M2 signals intended for reproduction on a second audio reproduction system, wherein the intermediate playback stream presentation is one of the first rendered playback stream presentation, a down-mix of the first rendered playback stream presentation, and an up-mix of the first rendered playback stream presentation, and wherein the approximation of the second rendered playback stream presentation is an anechoic binaural presentation; receiving and decoding one or more additional sets of transform parameters suitable for transforming the intermediate playback stream presentation into one or more acoustic environment simulation process input signals; applying said transform parameters to said intermediate playback stream presentation to produce said approximation of the second rendered playback stream presentation, applying the one or more additional sets of transform parameters to the intermediate playback stream presentation to generate the one or more acoustic environment simulation process input signals; applying the one or more acoustic environment simulation process input signals to one or more acoustic environment simulation processes to produce one or more simulated acoustic environment signals; and combining 
the one or more simulated acoustic environment signals with the approximation of the second rendered playback stream presentation.
7. The device of claim 6, wherein the one or more simulated acoustic environment signals comprise one or more of: early reflection signals and late reverberation signals.
8. The device of claim 6, wherein the acoustic environment simulation processes comprise one or more of: an early reflection simulation process and a late reverberation simulation process.
9. The device of claim 8, wherein the early reflection simulation process comprises processing one or more of the acoustic environment simulation process input signals through a delay element.
10. The device of claim 8, wherein the late reverberation simulation process comprises processing one or more of the acoustic environment simulation process input signals through a feedback delay network.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings in which:
DETAILED DESCRIPTION
(12) The embodiments provide a method for a low bit rate, low complexity representation of channel and/or object based audio that is suitable for loudspeaker and headphone (binaural) playback. This is achieved by (1) creating and encoding a rendering intended for a specific playback reproduction system (for example, but not limited to loudspeakers), and (2) adding additional metadata that allow transformation of that specific rendering into a modified rendering suitable for another reproduction system (for example headphones). The specific rendering may be referred to as a first audio playback stream presentation, while the modified rendering may be referred to as a second audio playback stream presentation. The first presentation may have a set of M1 channels, while the second presentation may have a set of M2 channels. The number of channels may be equal (M1=M2) or different. The metadata may be in the form of a set of parameters, possibly time and frequency varying.
(13) In one implementation, the transformation metadata provides a means for transforming a stereo loudspeaker rendering into a binaural headphone rendering, with the possibility to include early reflections and late reverberation. Furthermore, for object-based audio content, the virtual acoustic attributes, in particular the (relative) level of late reverberation and/or the level, spectral and temporal characteristics of one or more early reflections can be controlled on a per-object basis.
(14) The embodiments are directed to the elimination of artifacts and/or the improvement of reproduction quality and the maintenance of artistic intent by means of metadata that guides reproduction on one or more reproduction systems. In particular, the embodiments include metadata with an object, channel or hybrid signal representation that improves the quality of reproduction when the reproduction system layout does not correspond to the intended layout envisioned during content creation. As such, the application and/or effect of the metadata will depend on the intended and actual reproduction systems.
(15) Binaural Pre-Rendered Content Reproduced Over Loudspeakers
(16) As described in the background section, reproduction of binaural pre-rendered content over loudspeakers can result in an unnatural timbre due to the fact that spectral cues inherently present in HRIRs or BRIRs are applied twice; once during pre-rendering, and another time during playback in an acoustic environment. Furthermore, such reproduction of binaural pre-rendered content will inherently have azimuthal localization cues applied twice as well, causing incorrect spatial imaging and localization errors.
(18) The spectral artifacts resulting from applying an acoustic pathway from speakers to eardrums twice can, at least in part, be compensated for by applying a frequency-dependent gain or attenuation during decoding or reproduction. These gain or attenuation parameters can subsequently be encoded and included with the content. For headphone reproduction, these parameters can be discarded, while for reproduction on loudspeakers, the encoded gains are applied to the signals prior to reproduction.
(19) One form of suitable processing flow 30 is shown in the accompanying drawings.
(20) Implementation Example
(21) In one implementation, to compute the gain metadata 31, the input signals x.sub.i[n] with discrete-time index n and input index i are analyzed in time and frequency tiles. Each of the input signals x.sub.i[n] can be broken up into time frames and each frame can, in turn, be divided into frequency bands to construct time/frequency tiles. The frequency bands can be achieved, for example, by means of a filter bank such as a quadrature mirror filter (QMF) bank, a discrete Fourier transform (DFT), a discrete cosine transform (DCT), or any other means to split input signals into a variety of frequency bands. The result of such transform is that an input signal x.sub.i[n] for input with index i and discrete-time index n is represented by sub-band signals x.sub.i[k, b] for time slot (or frame) k and subband b. The short-term energy in time/frequency tile (K,B) is given by:
(22) σ.sub.x,i.sup.2(K,B)=Σ.sub.k∈KΣ.sub.b∈B|x.sub.i[k,b]|.sup.2
with B, K sets of frequency (b) and time (k) indices corresponding to a desired time/frequency tile.
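The tile analysis above can be illustrated with a minimal sketch (not the patent's implementation): a framed DFT splits the signal into time/frequency tiles, and the short-term energy is the sum of squared magnitudes per tile. The frame length, band count and test signal are hypothetical choices.

```python
import numpy as np

def tile_energy(x, frame_len=64, n_bands=4):
    """Return an array sigma2[K, B] of short-term energies per tile."""
    n_frames = len(x) // frame_len
    # Framed real DFT: one row of bins per time slot k
    X = np.fft.rfft(x[: n_frames * frame_len].reshape(n_frames, frame_len), axis=1)
    bins_per_band = X.shape[1] // n_bands
    sigma2 = np.empty((n_frames, n_bands))
    for b in range(n_bands):
        band = X[:, b * bins_per_band : (b + 1) * bins_per_band]
        sigma2[:, b] = np.sum(np.abs(band) ** 2, axis=1)  # sum of |x[k,b]|^2
    return sigma2

x = np.sin(2 * np.pi * 0.05 * np.arange(256))
E = tile_energy(x)  # shape (4 frames, 4 bands)
```

In practice a QMF bank, DCT, or any other filter bank could replace the DFT framing, as the text notes.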
(23) The discrete-time domain representation of the binaural signals y.sub.l[n], y.sub.r[n], for the left and right ear, respectively, is given by:
(24) y.sub.l[n]=Σ.sub.i(x.sub.i*h.sub.l,i)[n], y.sub.r[n]=Σ.sub.i(x.sub.i*h.sub.r,i)[n]
with h.sub.l,i, h.sub.r,i, the HRIR or BRIR corresponding to the input index i, for the left and right ears, respectively. In other words, the binaural signal pair y.sub.l[n], y.sub.r[n] can be created by a combination of convolution and summation across inputs i. Subsequently, these binaural signals can be converted into time/frequency tiles using the same process as applied to the signals x.sub.i[k, b]. For these frequency-domain binaural signals, the short-term energy in time/frequency tile (K,B) can thus be calculated as:
(25) σ.sub.y,j.sup.2(K,B)=Σ.sub.k∈KΣ.sub.b∈B|y.sub.j[k,b]|.sup.2
(26) The gain metadata w(K, B) can now be constructed on the basis of energy preservation in each time/frequency tile summed across input objects i in the numerator and across binaural signals j in the denominator:
(27) w(K,B)=sqrt(Σ.sub.iσ.sub.x,i.sup.2(K,B)/Σ.sub.jσ.sub.y,j.sup.2(K,B))
(28) The metadata w(K, B) can subsequently be quantized, encoded and included in an audio codec bit stream. The decoder will then apply metadata w(K, B) to frame K and band B of both signals y.sub.l and y.sub.r (the input presentation) to produce an output presentation. Such use of a common w(K, B) applied to both y.sub.l and y.sub.r ensures that the stereo balance of the input presentation is maintained.
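A minimal numerical sketch of this gain construction and its decoder-side application, assuming the per-tile energies are already available (the tile counts and energy values below are made up): the same w(K, B) is applied to both binaural channels, which is what preserves the stereo balance.

```python
import numpy as np

def gain_metadata(sigma_x, sigma_y, eps=1e-12):
    """Energy-preserving gain per tile.

    sigma_x: (n_inputs, K, B) input energies summed in the numerator;
    sigma_y: (2, K, B) binaural energies summed in the denominator.
    """
    return np.sqrt(sigma_x.sum(axis=0) / (sigma_y.sum(axis=0) + eps))

sigma_x = np.ones((3, 2, 4))        # 3 inputs, 2 frames, 4 bands
sigma_y = np.full((2, 2, 4), 6.0)   # binaural pair, total energy 12 per tile
w = gain_metadata(sigma_x, sigma_y) # sqrt(3/12) = 0.5 in every tile

y = np.ones((2, 2, 4))              # binaural input presentation (l, r)
y_out = w * y                       # common gain on both y_l and y_r
```

In the codec, w(K, B) would additionally be quantized and encoded before transmission.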
(29) Besides the method described above, in which the binaural signals y.sub.l[n], y.sub.r[n] are created by means of time-domain convolution, the binaural rendering process may also be applied in the frequency domain. In other words, instead of first computing the binaural signals y.sub.l[n], y.sub.r[n] in the time domain, one can instead convert the input signals x.sub.i[n] to the frequency-domain representation, and apply the HRIR convolution process in the frequency domain to generate the frequency-domain representation of the binaural signals y.sub.j[k, b], for example by frequency-domain fast convolution methods. In this approach, the frequency-domain representation of the binaural signals y.sub.j[k, b] is obtained without requiring these signals to be generated in the time domain, and does not require a filterbank or transform to be applied on the time-domain binaural signals.
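The frequency-domain rendering path can be sketched with the convolution theorem: zero-padded spectra are multiplied and summed across inputs, which matches time-domain convolution exactly. The toy signals and HRIR lengths below are hypothetical.

```python
import numpy as np

def fast_binaural(xs, hs):
    """Sum of fast convolutions: y = sum_i x_i * h_i via the FFT."""
    n = len(xs[0]) + len(hs[0]) - 1          # full convolution length
    Y = np.zeros(n, dtype=complex)
    for x, h in zip(xs, hs):
        Y += np.fft.fft(x, n) * np.fft.fft(h, n)  # convolution theorem
    return np.fft.ifft(Y).real

xs = [np.array([1.0, 0.5]), np.array([0.0, 1.0])]       # two inputs
hs = [np.array([1.0, 0.0, 0.25]), np.array([0.5, 0.5, 0.0])]  # toy HRIRs
y = fast_binaural(xs, hs)
ref = sum(np.convolve(x, h) for x, h in zip(xs, hs))    # time-domain check
```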
(30) Stereo Content Reproduced Over Headphones, Including an Anechoic Binaural Rendering
(31) In this implementation, a stereo signal intended for loudspeaker playback is encoded, with additional data to enhance the playback of that loudspeaker signal on headphones. Given a set of input objects or channels x.sub.i[n], a set of loudspeaker signals z.sub.s[n] is typically generated by means of amplitude panning gains g.sub.i,s that represents the gain of object i to speaker s:
(32) z.sub.s[n]=Σ.sub.ig.sub.i,sx.sub.i[n]
(33) For channel-based content, the amplitude panning gains g.sub.i,s are typically constant, while for object-based content, in which the intended position of an object is provided by time-varying object metadata, the gains will consequently be time variant.
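As a sketch of the panning step with constant (channel-based) gains, the equation above is a single matrix product; for object-based content the gain matrix would simply vary over time. The object signal and gain values are illustrative.

```python
import numpy as np

def pan(x, g):
    """Amplitude panning: z_s[n] = sum_i g_{i,s} x_i[n].

    x: (n_objects, n_samples); g: (n_objects, n_speakers).
    """
    return g.T @ x  # -> (n_speakers, n_samples)

x = np.array([[1.0, 2.0, 3.0]])                  # one object
g = np.array([[np.sqrt(0.5), np.sqrt(0.5)]])     # centre-panned to L/R
z = pan(x, g)
```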
(34) Given the signals z.sub.s[n] to be encoded and decoded, it is desirable to find a set of coefficients w such that if these coefficients are applied to signals z.sub.s[n], the resulting modified signals ŷ.sub.l, ŷ.sub.r constructed as:
(35) ŷ.sub.l[k,b]=Σ.sub.sw.sub.s,lz.sub.s[k,b]
(36) ŷ.sub.r[k,b]=Σ.sub.sw.sub.s,rz.sub.s[k,b]
closely match a binaural presentation of the original input signals x.sub.i[n] according to:
(37) y.sub.l[n]=Σ.sub.i(x.sub.i*h.sub.l,i)[n], y.sub.r[n]=Σ.sub.i(x.sub.i*h.sub.r,i)[n]
(38) The coefficients w can be found by minimizing the L2 norm E between desired and actual binaural presentation:
E=|y.sub.l−ŷ.sub.l|.sup.2+|y.sub.r−ŷ.sub.r|.sup.2
w=arg min(E)
(39) The solution to minimize the error E can be obtained by closed-form solutions, gradient descent methods, or any other suitable iterative method to minimize an error function. As one example of such a solution, one can write the various rendering steps in matrix notation:
Y=XH
Z=XG
Ŷ=XGW=ZW
This matrix notation is based on a single-channel frame containing N samples being represented as one column vector:
(40) {right arrow over (x)}.sub.i=[x.sub.i[0] x.sub.i[1] . . . x.sub.i[N−1]].sup.T
and matrices as a combination of multiple channels i={1, . . . , I}, each represented by one column vector in the matrix:
X=[{right arrow over (x)}.sub.1 . . . {right arrow over (x)}.sub.I]
(41) The solution for W that minimizes E is then given by:
W=(G*X*XG+ϵI).sup.−1G*X*XH
with (*) the complex conjugate transpose operator, I the identity matrix, and ϵ a regularization constant. This solution differs from the gain-based method in that the signal Ŷ is generated by applying a matrix W, rather than a scalar gain, to the signal Z, including the option of cross-terms (e.g. the second signal of Ŷ being (partly) reconstructed from the first signal in Z).
(42) Ideally, the coefficients w are determined for each time/frequency tile to minimize the error E in each time/frequency tile.
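A minimal per-tile sketch of the closed-form solution: with Z=XG and Y=XH, the expression above reduces to W=(Z*Z+ϵI).sup.−1Z*Y, a regularized least-squares fit. The frame size and the "true" mixing matrix below are invented purely so the recovery can be checked.

```python
import numpy as np

def solve_w(Z, Y, eps=1e-6):
    """Regularized least squares: W = (Z* Z + eps I)^(-1) Z* Y.

    Z: (N, M1) loudspeaker-presentation frame; Y: (N, 2) binaural frame.
    """
    A = Z.conj().T @ Z + eps * np.eye(Z.shape[1])
    return np.linalg.solve(A, Z.conj().T @ Y)

rng = np.random.default_rng(0)
Z = rng.standard_normal((64, 2))            # one tile of speaker signals
W_true = np.array([[0.8, 0.1], [0.2, 0.9]]) # hypothetical target transform
Y = Z @ W_true                              # desired binaural tile
W = solve_w(Z, Y)                           # recovered up to regularization
Y_hat = Z @ W
```

With complex-valued HRIR-derived targets, Z and Y would be complex and the conjugate transpose does the corresponding work; the same function applies unchanged.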
(43) In the sections above, a minimum mean-square error criterion (L2 norm) is employed to determine the matrix coefficients. Without loss of generality, other well-known criteria or methods to compute the matrix coefficients can be used similarly to replace or augment the minimum mean-square error principle. For example, the matrix coefficients can be computed using higher-order error terms, or by minimization of an L1 norm (e.g., the least absolute deviation criterion). Furthermore, various methods can be employed, including non-negative factorization or optimization techniques, non-parametric estimators, maximum-likelihood estimators, and the like. Additionally, the matrix coefficients may be computed using iterative or gradient-descent processes, interpolation methods, heuristic methods, dynamic programming, machine learning, fuzzy optimization, simulated annealing, or closed-form solutions, and analysis-by-synthesis techniques may be used. Last but not least, the matrix coefficient estimation may be constrained in various ways, for example by limiting the range of values, by regularization terms, or by superposition of energy-preservation requirements and the like.
(44) In practical situations, the HRIR or BRIR h.sub.l,i, h.sub.r,i will involve frequency-dependent delays and/or phase shifts. Accordingly, the coefficients w may be complex-valued with an imaginary component substantially different from zero.
(45) One form of implementation of the processing of this embodiment is shown 40 in the accompanying drawings.
(46) On the decoding side, if the decoder is configured for headphone playback, the coefficients are extracted 49 and applied 50 to the core decoder signals prior to HCQMF synthesis 51 and reproduction 52. An optional HCQMF analysis filter bank 54 may be required, as indicated in the accompanying drawings.
(47) It will be evident that the methods described in the previous paragraphs are not limited to quadrature mirror filter banks; other filter bank structures or transforms can be used equally well, such as a short-term windowed discrete Fourier transform.
(48) This scheme has various benefits compared to conventional approaches. These can include: 1) The decoder complexity is only marginally higher than the complexity for plain stereo playback, as the addition in the decoder consists of a simple (time and frequency-dependent) matrix only, controlled by bit stream information. 2) The approach is suitable for channel-based and object-based content, and does not depend on the number of objects or channels present in the content. 3) The HRTFs become encoder tuning parameters, i.e. they can be modified, improved, altered or adapted at any time without regard for decoder compatibility. With decoders present in the field, HRTFs can still be optimized or customized without needing to modify decoder-side processing stages. 4) The bit rate is very low compared to bit rates required for multi-channel or object-based content, because only a few loudspeaker signals (typically one or two) need to be conveyed from encoder to decoder with additional (low-rate) data for the coefficients w. 5) The same bit stream can be faithfully reproduced on loudspeakers and headphones. 6) A bit stream may be constructed in a scalable manner; if, in a specific service context, the end point is guaranteed to use loudspeakers only, the transformation coefficients w may be stripped from the bit stream without consequences for the conventional loudspeaker presentation. 7) Advanced codec features operating on loudspeaker presentations, such as loudness management, dialog enhancement, etcetera, will continue to work as intended (when playback is over loudspeakers). 8) Loudness for the binaural presentation can be handled independently from the loudness of loudspeaker playback by scaling of the coefficients w. 9) Listeners using headphones can choose to listen to a binaural or conventional stereo presentation, instead of being forced to listen to one or the other.
(49) Extension with Early Reflections
(50) It is often desirable to include one or more early reflections in a binaural rendering that are the result of the presence of a floor, walls, or ceiling to increase the realism of a binaural presentation. If a reflection is of a specular nature, it can be interpreted as a binaural presentation in itself, in which the corresponding HRIRs include the effect of surface absorption, an increase in the delay, and a lower overall level due to the increased acoustical path length from sound source to the ear drums.
(51) These properties can be captured with a modified arrangement such as that illustrated 60 in the accompanying drawings.
(52) The decoder will generate the anechoic signal pair and the early reflection signal pair by applying coefficients W (W.sub.Y; W.sub.E) to the loudspeaker signals. The early reflection is subsequently processed by a delay stage 68 to simulate the longer path length for the early reflection. The delay parameter of the block 68 can be included in the coder bit stream, or can be a user-defined parameter, or can be made dependent on the simulated acoustic environment, or can be made dependent on the actual acoustic environment the listener is in.
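As an illustrative sketch of the early-reflection branch (all values hypothetical): coefficients W.sub.E estimate a reflection pair from the loudspeaker signals by the same matrixing used for the anechoic pair, and a simple integer-sample delay stage models the longer acoustic path.

```python
import numpy as np

def early_reflection(Z, W_E, delay):
    """Estimate and delay an early-reflection pair.

    Z: (n_samples, 2) loudspeaker pair; W_E: (2, 2) transform; delay > 0.
    """
    E = Z @ W_E                         # matrixing, as for the anechoic pair
    out = np.zeros_like(E)
    out[delay:] = E[: len(E) - delay]   # delay stage for the longer path
    return out

Z = np.eye(4)[:, :2]      # impulse in L (t=0) and R (t=1)
W_E = 0.3 * np.eye(2)     # attenuated specular reflection (made-up gain)
refl = early_reflection(Z, W_E, delay=2)
```

In the described system the delay value could come from the bit stream, from the user, or from the simulated or actual acoustic environment.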
(53) Extension with Late Reverberation
(54) To include the simulation of late reverberation in the binaural presentation, a late-reverberation algorithm can be employed, such as a feedback-delay network (FDN). An FDN takes as input one or more objects and/or channels, and produces (in the case of a binaural reverberator) two late reverberation signals. In a conventional algorithm, the decoder output (or a downmix thereof) can be used as input to the FDN. This approach has a significant disadvantage: in many use cases, it can be desirable to adjust the amount of late reverberation on a per-object basis. For example, dialog clarity is improved if the amount of late reverberation is reduced.
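A minimal FDN sketch, not the patent's implementation: a few delay lines coupled by a lossy orthogonal (Householder-style) feedback matrix turn an input into a decaying reverberant tail. The delay lengths and gain are illustrative tuning choices.

```python
import numpy as np

def fdn(x, delays=(7, 11, 13, 17), g=0.7):
    """Mono-in, mono-out feedback delay network with Householder feedback."""
    n_lines = len(delays)
    v = np.ones(n_lines) / np.sqrt(n_lines)
    A = g * (np.eye(n_lines) - 2.0 * np.outer(v, v))  # g < 1 => stable decay
    buffers = [np.zeros(d) for d in delays]
    idx = [0] * n_lines
    out = np.zeros(len(x))
    for n, sample in enumerate(x):
        taps = np.array([buf[i] for buf, i in zip(buffers, idx)])  # read delays
        out[n] = taps.sum()
        fb = A @ taps + sample            # mix lines and feed input into each
        for i in range(n_lines):
            buffers[i][idx[i]] = fb[i]
            idx[i] = (idx[i] + 1) % delays[i]
    return out

impulse = np.zeros(256)
impulse[0] = 1.0
tail = fdn(impulse)  # decaying late-reverberation tail
```

A binaural reverberator would use two differently mixed output taps instead of a single sum, but the feedback core is the same.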
(55) In an alternative embodiment, per-object or per-channel control of the amount of reverberation can be provided in the same way as anechoic or early-reflection binaural presentations are constructed from a stereo mix.
(56) As illustrated in
(57) Additionally, an FDN may be constructed such that multiple (two or more) inputs are allowed, so that spatial qualities of the input signals are preserved at the FDN output. In such cases, coefficient data that allow estimation of each FDN input signal from the loudspeaker presentation are included in the bitstream.
(58) In this case it may be desirable to control the spatial positioning of the object and/or channel with respect to the FDN inputs.
(59) In some cases, it may be possible to generate late reverberation simulation (e.g., FDN) input signals in response to parameters present in a data stream for a separate purpose (e.g., parameters not specifically intended to be applied to base signals to generate FDN input signals). For instance, in one exemplary dialog enhancement system, a dialog signal is reconstructed from a set of base signals by applying dialog enhancement parameters to the base signals. The dialog signal is then enhanced (e.g., amplified) and mixed back into the base signals (thus amplifying the dialog components relative to the remaining components of the base signals). As described above, it is often desirable to construct the FDN input signal such that it does not contain dialog components. Thus, in systems for which dialog enhancement parameters are already available, it is possible to reconstruct the desired dialog-free (or, at least, dialog-reduced) FDN input signal by first reconstructing the dialog signal from the base signals and the dialog enhancement parameters, and then subtracting (e.g., cancelling) the dialog signal from the base signals. In such a system, dedicated parameters for reconstructing the FDN input signal from the base signals may not be necessary (as the dialog enhancement parameters may be used instead), and thus may be excluded, resulting in a reduction in the required parameter data rate without loss of functionality.
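A toy numerical sketch of this reuse (the mixing and the mid-extraction matrix W_de are hypothetical): dialog is panned to the centre of an anti-correlated bed, so a mid-extracting matrix reconstructs exactly the dialog component, and subtracting it yields a dialog-free FDN input.

```python
import numpy as np

def dialog_reduced_input(base, W_de):
    """Cancel the reconstructed dialog from the base signals.

    base: (n_samples, n_ch); W_de maps base -> per-channel dialog estimate.
    """
    dialog = base @ W_de      # dialog reconstruction from DE parameters
    return base - dialog      # dialog-reduced signal for the FDN

rng = np.random.default_rng(1)
m = rng.standard_normal(32)   # anti-correlated "music" bed (m in L, -m in R)
d = rng.standard_normal(32)   # centre-panned dialog
base = np.stack([m + 0.5 * d, -m + 0.5 * d], axis=1)
W_de = np.full((2, 2), 0.5)   # mid extraction: dialog sits in the mid channel
fdn_in = dialog_reduced_input(base, W_de)  # left = m, right = -m, no dialog
```

Real dialog enhancement parameters are time/frequency varying and only approximate, so the cancellation would be partial rather than exact.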
(60) Combining Early Reflections and Late Reverberation
(61) Although extensions of anechoic presentation with early reflection(s) and late reverberation are denoted independently in the previous sections, combinations are possible as well. For example, a system may include: 1) Coefficients W.sub.Y to determine an anechoic presentation from a loudspeaker presentation; 2) Additional coefficients W.sub.E to determine a certain number of early reflections from a loudspeaker presentation; 3) Additional coefficients W.sub.F to determine one or more late-reverberation input signals from a loudspeaker presentation, allowing to control the amount of late reverberation on a per-object basis.
(62) Anechoic Rendering as First Presentation
(63) Although the use of a loudspeaker presentation as a first presentation to be encoded by a core coder has the advantage of providing backward compatibility with decoders that cannot interpret or process the transformation data w, the first presentation is not limited to a presentation for loudspeaker playback.
(64) The anechoic signal Y is optionally converted to the time domain using HCQMF synthesis filterbank 110, and encoded by core encoder 111. The transformation estimation block 114 computes parameters W.sub.F (112) that allow reconstruction of the FDN input signal F from the anechoic presentation Y, as well as parameters W.sub.Z (113) to reconstruct the loudspeaker presentation Z from the anechoic presentation Y. Parameters 112 and 113 are both included in the core coder bit stream. Alternatively, or in addition, although not shown in
(65) The decoder has two operation modes, visualized by decoder mode 102 intended for headphone listening 130, and decoder mode 103 intended for loudspeaker playback 140. In the case of headphone playback, core decoder 115 decodes the anechoic presentation Y and decodes transformation parameters W.sub.F. Subsequently, the transformation parameters W.sub.F are applied to the anechoic presentation Y by matrixing block 116 to produce an estimated FDN input signal, which is subsequently processed by FDN 117 to produce a late reverberation signal. This late reverberation signal is mixed with the anechoic presentation Y by adder 150, followed by HCQMF synthesis filterbank 118 to produce the headphone presentation 130. If parameters W.sub.E are also present, the decoder may apply these parameters to the anechoic presentation Y to produce an estimated early reflection signal, which is subsequently processed through a delay and mixed with the anechoic presentation Y.
(66) In the case of loudspeaker playback, the decoder operates in mode 103, in which core decoder 115 decodes the anechoic presentation Y, as well as parameters W.sub.Z. Subsequently, matrixing stage 116 applies the parameters W.sub.Z onto the anechoic presentation Y to produce an estimate or approximation of the loudspeaker presentation Z. Lastly, the signal is converted to the time domain by HCQMF synthesis filterbank 118 and produced by loudspeakers 140.
(67) Finally, it should be noted that the system of
(68) Multi-Channel Loudspeaker Presentation
(69) It will be appreciated by the person skilled in the art that the first playback stream presentation encoded in the encoder may be a multichannel presentation, e.g. a surround or immersive loudspeaker presentation such as a 5.1, 7.1, 7.1.4, etc. presentation. Embodiments of the invention discussed above where the second playback stream presentation is a stereo presentation, e.g. with reference to
(70) In order to avoid or minimize such increase in computational complexity when a first presentation with M1 channels is transformed to a second presentation with M2 channels, where M1>M2, e.g. when a surround or immersive loudspeaker presentation is transformed to a binaural stereo presentation, it may be advantageous to downmix the first presentation to an intermediate presentation before determining the transform parameters. For example, a 5.1 surround presentation may be downmixed to a 2.0 stereo loudspeaker presentation.
(71) For example, the left channel Z.sub.L of the stereo downmix may be computed as:
Z.sub.L=(S.sub.L+a*S.sub.C+b*S.sub.LS+c*S.sub.LFE)
where a, b and c are suitable constants, e.g. a=b=sqrt(0.5)=0.71, c=0.5, and the right channel Z.sub.R is formed analogously from S.sub.R, S.sub.C, S.sub.RS and S.sub.LFE.
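A sketch of this 5.1-to-stereo downmix with the constants quoted above (a=b=sqrt(0.5), c=0.5); forming the right channel symmetrically from R, C, RS and LFE is an assumption here, as only the left-channel equation appears in the text.

```python
import numpy as np

def downmix_51(L, R, C, LS, RS, LFE,
               a=np.sqrt(0.5), b=np.sqrt(0.5), c=0.5):
    """5.1 -> 2.0 downmix; right channel formed symmetrically (assumed)."""
    Z_L = L + a * C + b * LS + c * LFE
    Z_R = R + a * C + b * RS + c * LFE
    return Z_L, Z_R

n = 8
chans = {k: np.ones(n) for k in ("L", "R", "C", "LS", "RS", "LFE")}
Z_L, Z_R = downmix_51(**chans)   # all-ones input: 1 + 2*sqrt(0.5) + 0.5
```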
(72) The audio content is also input to a binaural renderer 205 configured to render an anechoic binaural signal Y. A parameter computation block 206 receives the anechoic signal Y and the stereo downmix signal Z and computes stereo-to-anechoic parameters W.sub.Y. Compared to
(73) Further, the encoder may also include a block 207 (corresponding to block 82 in
(75) For low target bit-rates it is known to use parametric methods to convey a 5.1 presentation with the help of a 2.1 downmix and a set of coupling parameters, see e.g. ETSI TS 103 190-1 V1.2.1 (2015-06). In such a system, the core decoder effectively performs an up-mix in order to provide the decoded 5.1 presentation. If the embodiment in
(76) However, in this context, when a 2.1 presentation is already included in the bit stream, the up-mix to 5.1 is not necessary and can be omitted in order to simplify the decoder. Such a simplified decoder is depicted in the accompanying drawings.
Lo=a*L+b*LFE
Ro=a*R+b*LFE
where L, R and LFE are the left and right full-bandwidth channels and the low-frequency effects channel of the decoded 2.1 presentation, and a and b are suitable constants taking into account the effect of the up-mix and down-mix performed by modules 312 and 212.
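A sketch of forming the stereo feed directly from the decoded 2.1 presentation, skipping the 5.1 up-mix entirely; the values of a and b are illustrative placeholders for constants that would fold in the omitted up-mix/down-mix gains.

```python
import numpy as np

def stereo_from_21(L, R, LFE, a=1.0, b=0.5):
    """Lo = a*L + b*LFE, Ro = a*R + b*LFE (constants are assumptions)."""
    return a * L + b * LFE, a * R + b * LFE

L = np.array([1.0, 0.0])
R = np.array([0.0, 1.0])
LFE = np.array([0.2, 0.2])
Lo, Ro = stereo_from_21(L, R, LFE)
```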
(77) The process described in
(78) Interpretation
(79) Reference throughout this specification to “one embodiment”, “some embodiments” or “an embodiment” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment”, “in some embodiments” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment, but may. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this disclosure, in one or more embodiments.
(80) As used herein, unless otherwise specified the use of the ordinal adjectives “first”, “second”, “third”, etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
(81) In the claims below and the description herein, any one of the terms comprising, comprised of or which comprises is an open term that means including at least the elements/features that follow, but not excluding others. Thus, the term comprising, when used in the claims, should not be interpreted as being limitative to the means or elements or steps listed thereafter. For example, the scope of the expression a device comprising A and B should not be limited to devices consisting only of elements A and B. Any one of the terms including or which includes or that includes as used herein is also an open term that also means including at least the elements/features that follow the term, but not excluding others. Thus, including is synonymous with and means comprising.
(82) As used herein, the term “exemplary” is used in the sense of providing examples, as opposed to indicating quality. That is, an “exemplary embodiment” is an embodiment provided as an example, as opposed to necessarily being an embodiment of exemplary quality.
(83) It should be appreciated that in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, FIG., or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of this invention.
(84) Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention, and form different embodiments, as would be understood by those skilled in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination.
(85) Furthermore, some of the embodiments are described herein as a method or combination of elements of a method that can be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor with the necessary instructions for carrying out such a method or element of a method forms a means for carrying out the method or element of a method. Furthermore, an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by the element for the purpose of carrying out the invention.
(86) In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
(87) Similarly, it is to be noticed that the term coupled, when used in the claims, should not be interpreted as being limited to direct connections only. The terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Thus, the scope of the expression a device A coupled to a device B should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means. “Coupled” may mean that two or more elements are either in direct physical or electrical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other.
(88) Thus, while there has been described what are believed to be the preferred embodiments of the invention, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the invention, and it is intended to claim all such changes and modifications as falling within the scope of the invention. For example, any formulas given above are merely representative of procedures that may be used. Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added or deleted to methods described within the scope of the present invention.