Apparatus and method for combining repeated noisy signals

20230217197 · 2023-07-06


    Abstract

    An apparatus for combining three or more audio signals is described. The apparatus includes a segmentation block for segmenting each audio signal into segments, a weight determination block, which is configured to determine a weight value for each of the temporally weighted audio signal segments, a combination block for combining the temporally weighted audio signal segments of each audio signal, and a synthesis block for generating an output audio signal. A method for combining three or more audio signals and a computer program product are also described.

    Claims

    1. Apparatus for combining three or more audio signals, the apparatus comprising: a segmentation block for segmenting each audio signal, which is configured to dissect each audio signal into a plurality of audio signal segments, each audio signal segment overlapping with adjacent audio signal segments a predetermined percentage of the audio signal segment length, wherein all dissected audio signals comprise corresponding audio signal segment borders, such that each 1st, 2nd, ..., nth audio signal segment of all audio signals comprise the same length, the same start time and the same end time, and to apply an analysis window function to each of the audio signal segments to produce temporally weighted audio signal segments, a weight determination block, which is configured to determine a weight value for each of the temporally weighted audio signal segments, a combination block for combining the temporally weighted audio signal segments of each audio signal, which is configured to calculate a weighted average of all temporally weighted audio signal segments of each audio signal, using the determined weight value of each temporally weighted audio signal segment, and a synthesis block for generating an output audio signal, which is configured to apply a synthesis window function to the combined temporally weighted audio signal segments of each audio signal, and to perform an overlap-add method on the corresponding results of the synthesis window function.

    2. Apparatus according to claim 1, wherein the weight determination block is configured to determine the weight values for the temporally weighted audio signal segments on the basis of a determination of a noise variance estimate value for each of the temporally weighted audio signal segments, or a calculation of a root mean square value of a corresponding difference signal for each of the temporally weighted audio signal segments.

    3. Apparatus according to claim 1, wherein the three or more audio signals are measurements for loudspeaker calibration, preferably one of sweep measurements, in particular preferably exponential sweep measurements, measurements using Maximum Length Sequences, and measurements using acoustic signals, in particular preferably measurements using music.

    4. Apparatus according to claim 1, wherein for each audio signal, all audio signal segments comprise the same length, all audio signal segments comprise the same overlap percentage, and/or the same analysis window function is applied to all audio signal segments.

    5. Apparatus according to claim 1, wherein the overlap percentage is 50 percent, the analysis window function and/or the synthesis window function is one of a cosine function or the square root of any window function with constant-overlap-add property, and/or the analysis window function and the synthesis window function are the same window function.

    6. Apparatus according to claim 1, wherein the product of the analysis window function and the synthesis window function satisfies the constant-overlap-add property.

    7. Apparatus according to claim 1 for calibration of sound systems.

    8. Method for combining three or more audio signals, comprising: segmenting each audio signal, comprising dissecting each audio signal into a plurality of audio signal segments, each audio signal segment overlapping with adjacent audio signal segments a predetermined percentage of the audio signal segment length, wherein all dissected audio signals comprise corresponding audio signal segment borders, such that each 1st, 2nd, ..., nth audio signal segment of all audio signals comprise the same length, the same start time and the same end time, and applying an analysis window function to each of the audio signal segments to produce temporally weighted audio signal segments, determining a weight value for each of the temporally weighted audio signal segments, combining the temporally weighted audio signal segments of each audio signal, comprising calculating a weighted average of all temporally weighted audio signal segments of each audio signal, using the determined weight value of each temporally weighted audio signal segment, and generating an output audio signal, comprising applying a synthesis window function to the combined temporally weighted audio signal segments of each audio signal, and performing an overlap-add method on the corresponding results of the synthesis window function.

    9. Method according to claim 8, wherein the weight values for the temporally weighted audio signal segments are determined on the basis of determining a noise variance estimate value for each of the temporally weighted audio signal segments, or calculating a root mean square value of a corresponding difference signal for each of the temporally weighted audio signal segments.

    10. Method according to claim 8, wherein the three or more audio signals are measurements for loudspeaker calibration, preferably one of sweep measurements, in particular preferably exponential sweep measurements, measurements using Maximum Length Sequences, and/or measurements using acoustic signals, in particular preferably measurements using music.

    11. Method according to claim 8, wherein for each audio signal dissecting is performed using the same length and/or the same overlap percentage for all audio signal segments, and/or the same analysis window function is applied to all audio signal segments.

    12. Method according to claim 8, wherein dissecting is performed using an overlap percentage of 50 percent, the analysis window function and/or the synthesis window function is one of a cosine function or the square root of any window function with constant-overlap-add property, and/or the analysis window function and the synthesis window function are the same window function.

    13. Method according to claim 8, wherein the product of the analysis window function and the synthesis window function satisfies the constant-overlap-add property.

    14. Using the method according to claim 8 for calibrating sound systems.

    15. A non-transitory digital storage medium having a computer program stored thereon to perform the method for combining three or more audio signals, comprising: segmenting each audio signal, comprising dissecting each audio signal into a plurality of audio signal segments, each audio signal segment overlapping with adjacent audio signal segments a predetermined percentage of the audio signal segment length, wherein all dissected audio signals comprise corresponding audio signal segment borders, such that each 1st, 2nd, ..., nth audio signal segment of all audio signals comprise the same length, the same start time and the same end time, and applying an analysis window function to each of the audio signal segments to produce temporally weighted audio signal segments, determining a weight value for each of the temporally weighted audio signal segments, combining the temporally weighted audio signal segments of each audio signal, comprising calculating a weighted average of all temporally weighted audio signal segments of each audio signal, using the determined weight value of each temporally weighted audio signal segment, and generating an output audio signal, comprising applying a synthesis window function to the combined temporally weighted audio signal segments of each audio signal, and performing an overlap-add method on the corresponding results of the synthesis window function, when said computer program is run by a computer.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0050] Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:

    [0051] FIG. 1 shows a schematic flowchart of the method according to embodiments,

    [0052] FIG. 2 shows a schematic representation of segmenting audio signals according to embodiments,

    [0053] FIG. 3 shows schematic input and output audio signals according to embodiments,

    [0054] FIG. 4 shows a schematic illustration of an apparatus according to embodiments, and

    [0055] FIG. 5 shows a schematic illustration of combining segments into an output signal.

    [0056] In the figures, similar reference signs denote similar elements and features.

    DETAILED DESCRIPTION OF THE INVENTION

    [0057] In the following, examples of the present disclosure will be described in detail with reference to the accompanying drawings. In the following description, many details are set out in order to provide a more thorough explanation of examples of the disclosure. However, it will be apparent to those skilled in the art that other examples can be implemented without these specific details. Features of the different examples described can be combined with one another, unless features of a corresponding combination are mutually exclusive or such a combination is expressly excluded.

    [0058] It should be pointed out that elements that are the same or similar, or that have the same functionality, can be provided with the same or similar reference symbols or can be designated identically, with a repeated description of such elements typically being omitted. Descriptions of elements that have the same or similar reference symbols, or that are labeled identically, are interchangeable.

    [0059] In the presented technique three or more audio signals are combined. The audio signals represent exemplary repeated noisy signals, which can be for example the repeated measurements of a sound system or an element thereof. As described before, for measuring of the transfer function of such an element, for example a loudspeaker, in an anechoic environment or in a reverberant room, the recorded signal, recorded for example via a microphone, which captures the test signal is degraded by additive noise.

    [0060] The audio signals represent repeated measurements of the transfer function, i.e. the output of the sound element. Therein especially non-stationary noise like clicks and pops, footsteps, slamming doors, or fluctuating background noise can be detrimental to the measuring and thus have a negative effect on a calibration that is to be performed with the measurements. Such a calibration can be performed with consecutive measurements and following adjustment of sound parameters. Other calibration methods are also possible.

    [0061] Reducing aforementioned noise improves the accuracy of the measurement and by that leads to better calibration results.

    [0062] The repeated measurements can, for example, be sweep measurements. It has been found that exponential sweep measurements are particularly useful. Alternative measuring techniques include measurements using Maximum Length Sequences and/or measurements using acoustic signals. It has been found that music in particular is a very unobtrusive acoustic signal for measuring the transfer function of a sound element. Such measurements are repeated a few times, wherein at least 3 repetitions are required for the presented technique.
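
    As an illustration of one such measurement signal, an exponential sine sweep can be generated as sketched below. This sketch is not taken from the document; the Farina-style phase formula and the start/stop frequencies, duration, and sample rate are assumptions chosen for illustration.

```python
import numpy as np

def exponential_sweep(f1, f2, duration, fs):
    """Exponential (logarithmic) sine sweep from f1 to f2 Hz over `duration` s.

    Hypothetical sketch of a Farina-style sweep; all parameter values are
    illustrative and not taken from the document.
    """
    t = np.arange(int(duration * fs)) / fs
    ratio = np.log(f2 / f1)
    # Instantaneous phase grows exponentially, so the instantaneous frequency
    # sweeps exponentially from f1 to f2.
    phase = 2 * np.pi * f1 * duration / ratio * (np.exp(t / duration * ratio) - 1.0)
    return np.sin(phase)

sweep = exponential_sweep(20.0, 20000.0, 2.0, 48000)
```

    In a measurement setup, such a sweep would be played back through the loudspeaker under test and re-recorded three or more times, yielding the repeated noisy audio signals 210, ..., 250.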

    [0063] FIG. 1 shows a schematic flowchart of an embodiment of the presented technique. Method 100 is described in the following in more detail.

    [0064] Method 100 starts with step 110, which is the segmentation step. Segmentation step 110 segments each audio signal 210, ..., 250 into segments.

    [0065] FIG. 2 shows symbolically three such measurements 210, 220, and 230, in the following also referred to as audio signals A, B, and C. As indicated before, more than three measurements are also possible, even if not depicted in the figures.

    [0066] Segmentation step 110 comprises dissecting each audio signal into a plurality of audio signal segments. As an example, FIG. 2 shows that audio signal A 210 is dissected into segments S.sub.A1, ... S.sub.A5, which are also referred to by the reference signs 211, ... 215.

    [0067] Each audio signal is dissected in sub-step 111 such that each segment of the audio signal overlaps with adjacent segments a predetermined percentage of the segment length. Of course, the first and last segment can only overlap unilaterally.

    [0068] All audio signals are dissected in the same way, that is, the same segmentation is used for all audio signals, such that all dissected audio signals have corresponding segment borders, that is, each 1.sup.st, 2.sup.nd, ..., n.sup.th segment of all audio signals have the same length, the same start time and the same end time. The corresponding segment borders are shown in FIG. 2 at 0, 400, 600, 900, 1200, and 1600 ms and are indicated by the vertical lines over audio signals 210, 220, 230 and audio signal segments 211, 212, 213, 214, and 215.

    [0069] Optionally, each of the audio signals is dissected using the same length for all segments. If this is applied, S.sub.A1 through S.sub.A5 would be of the same length. This is not depicted in the figures. Since all audio signals are dissected in the same way, all segments of all audio signals then have the same length. That means, if an analogous denomination is used for the other audio signals, B and C, S.sub.B1 through S.sub.B5 and S.sub.C1 through S.sub.C5 would then have the same length as S.sub.A1 through S.sub.A5. S.sub.B1, ..., S.sub.B5, S.sub.C1, ..., S.sub.C5 are not shown in the figures.

    [0070] Optionally, the segments of each audio signal can have the same overlap percentage. For ease of description, FIG. 2 already shows this, namely 50% overlap. For instance, segment S.sub.A2 has a length of 200 ms. The depicted overlap of 50% means that 50% of this length overlaps with S.sub.A1 and that 50% of this length overlaps with S.sub.A3. In the depicted case, the overlap to either side is thus 100 ms or 0.1 seconds. Overlap percentages other than 50% can be used as well. Either the same overlap percentage is used for all segments of all audio signals, or the same overlap percentage is used for each n.sup.th segment of all audio signals. As an example, S.sub.A1, S.sub.B1, and S.sub.C1 (in short S.sub.X1) could have 35% overlap, S.sub.A2, S.sub.B2, and S.sub.C2 (in short S.sub.X2) could have 55% overlap, and so on.
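
    The dissection of sub-step 111 can be sketched as follows; the segment length, overlap percentage, and example signal are illustrative values, not taken from the document:

```python
import numpy as np

def segment_starts(num_samples, seg_len, overlap):
    """Start indices for equal-length segments with a fixed overlap fraction."""
    hop = int(seg_len * (1.0 - overlap))  # 50% overlap -> hop of half a segment
    return list(range(0, num_samples - seg_len + 1, hop))

def dissect(x, seg_len, overlap):
    """Dissect signal x into overlapping segments (sub-step 111)."""
    return [x[s:s + seg_len] for s in segment_starts(len(x), seg_len, overlap)]

# Illustrative example: a 1000-sample signal, 200-sample segments, 50% overlap.
x = np.arange(1000.0)
segments = dissect(x, 200, 0.5)
```

    Applying the same `segment_starts` to every repetition guarantees the corresponding segment borders required above, i.e. that each n.sup.th segment of all audio signals has the same start and end time.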

    [0071] In sub-step 112 of the segmentation step 110 an analysis window function is applied to each of the audio signal segments. Thereby temporally weighted audio signal segments are produced.

    [0072] As stated above, since all audio signals are dissected similarly, the analysis window function for the n.sup.th segment of each audio signal is the same. However, each segment within an audio signal can have an individual analysis window function. That means, segments S.sub.X1 can have a different analysis window function than segments S.sub.X2. And so on. Optionally, the analysis window function for some or all segments of one audio signal (and thus for the corresponding segments in the other audio signals) can be the same.

    [0073] Further, the analysis window function can be a cosine function. Alternatively, the analysis window function can be the square root of a window function with constant-overlap-add property; other window functions can be used as well. Constant-overlap-add is also referred to as COLA.

    [0074] A COLA window is a window function w(t) which fulfills the COLA constraint in equation (1), where T.sub.S denotes the frame shift of the periodically applied window.

    [00001] \sum_{k=-\infty}^{\infty} w(t - kT_S) = 1    (1)

    A function which fulfills this constraint is the rectangular window of length T.sub.S, as can be seen in equations (2) and (3).

    [00002] r_S(t) = \mathrm{rect}(t / T_S)    (2)

    [00003] \mathrm{rect}(t) = \begin{cases} 1, & \text{if } -\tfrac{1}{2} \le t < \tfrac{1}{2} \\ 0, & \text{else} \end{cases}    (3)

    Returning to the method, by segmentation step 110, and in particular by sub-step 112, each segment is transformed into a temporally weighted audio signal segment.

    [0075] In other words, the segmentation dissects each repeated recording into overlapping segments and applies a window function. In one embodiment a cosine window is used as window function. 50% overlap is an advantageous embodiment. In order to have time-aligned processing, the same segmentation is used for all repeated measurements.
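
    The COLA constraint of equation (1) can be checked numerically. The sketch below is illustrative (the window lengths and hop sizes are assumptions, not values from the document); it sums periodically shifted copies of a window and measures the deviation from one in the fully overlapped interior:

```python
import numpy as np

def cola_error(window, hop):
    """Max deviation from 1 of the periodic sum of shifted window copies."""
    n = len(window)
    acc = np.zeros(n + 4 * hop)
    for k in range(0, len(acc) - n + 1, hop):
        acc[k:k + n] += window
    interior = acc[n:len(acc) - n]  # ignore the un-overlapped edges
    return float(np.max(np.abs(interior - 1.0)))

# A periodic Hann window at 50% overlap satisfies COLA ...
hann = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(512) / 512)
err_hann = cola_error(hann, 256)

# ... as does the rectangular window with hop equal to its length (equation (2)).
rect = np.ones(512)
err_rect = cola_error(rect, 512)
```

    Both error values are zero up to floating-point rounding, confirming that the shifted copies of these windows sum to one.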

    [0076] In determination step 120, a weight value for each of the temporally weighted audio signal segments is determined. This can also be done individually for each segment of each audio signal.

    [0077] As one option, the weight values for the segments are determined on the basis of determining a noise variance estimate value for each of the temporally weighted audio signal segments.

    [0078] In more detail, each segment can be modeled as x.sub.n(t) = s(t) + n.sub.n(t), where s(t) denotes the clean signal and n.sub.n(t) denotes the additive Gaussian noise of the n.sup.th repetition. It can be assumed that the noise signals are statistically independent. Hence, for any pair <i,j> of repetitions, the computation of the variance \sigma_{i,j}^2 of the difference signal results in equation (4) for the two involved variance estimates \hat{\sigma}_i^2 and \hat{\sigma}_j^2:

    [00007] \hat{\sigma}_i^2 + \hat{\sigma}_j^2 = \sigma_{i,j}^2    (4)

    In order to determine these estimates, a linear equation system can be constructed according to equation (5).

    [00008] Av = b    (5)

    [0079] Therein, the pair matrix A is constructed according to the following pseudo code:

    A = zeros(M, N)
    k = 0
    for i = 1 ... N-1
      for j = i+1 ... N
        k = k + 1
        A(k, i) = 1
        A(k, j) = 1
      end
    end

    [0080] Therein, N denotes the number of repetitions and M = N(N-1)/2 denotes the number of pairs. Vector b on the right-hand side of the linear equation system (5) contains the variances \sigma_{i,j}^2 and is constructed according to the following pseudo code:

    b = zeros(M, 1)
    k = 0
    for i = 1 ... N-1
      for j = i+1 ... N
        k = k + 1
        b(k) = E{ |x.sub.i(t) - x.sub.j(t)|.sup.2 }
      end
    end

    [0081] Vector v = [\hat{\sigma}_1^2, \ldots, \hat{\sigma}_N^2]^T contains the unknown variance estimates. Since the linear equation system is over-determined, the Moore-Penrose inverse A.sup.+ = (A.sup.TA).sup.-1A.sup.T can be used to determine the variance estimates in the minimum mean square error sense according to equation (6).

    [00011] v = A.sup.+b    (6)
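
    The construction of A and b from the pseudo code above and the pseudoinverse solution of equation (6) can be sketched as follows. The clean signal, the noise variances, and the segment length are synthetic values chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 4, 20000                          # N repetitions, T samples per segment
s = np.sin(2 * np.pi * np.arange(T) / 100.0)         # clean signal s(t)
true_var = np.array([0.1, 0.4, 0.9, 1.6])            # per-repetition noise power
x = s + rng.normal(size=(N, T)) * np.sqrt(true_var)[:, None]   # x_n = s + n_n

# Pair matrix A (M x N) and right-hand side b of the system A v = b
M = N * (N - 1) // 2
A = np.zeros((M, N))
b = np.zeros(M)
k = 0
for i in range(N - 1):
    for j in range(i + 1, N):
        A[k, i] = A[k, j] = 1.0
        b[k] = np.mean(np.abs(x[i] - x[j]) ** 2)  # variance of difference signal
        k += 1

# The Moore-Penrose pseudoinverse solves the over-determined system in the
# minimum mean square error sense (equation (6)).
v = np.linalg.pinv(A) @ b
```

    Because the noise signals are statistically independent, the clean signal cancels in each difference, and v recovers the per-repetition noise variances up to estimation error.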

    Alternatively, the weight values for the segments are determined on the basis of calculating a root mean square value of a corresponding difference signal for each of the temporally weighted audio signal segments. The difference signal is determined as in the example described above, except that the square root is additionally extracted, and the calculation is then continued with the resulting root mean square values.

    [0082] Method 100 then proceeds with the combining step 130, which combines the temporally weighted audio signal segments of each audio signal. This is done individually for each audio signal. The temporally weighted audio signal segments are combined by calculating, in sub-step 131, a weighted average of all temporally weighted audio signal segments of each audio signal, using the determined weight value of each temporally weighted audio signal segment.

    [0083] Each repeated segment is optimally combined to the de-noised segment y(t) by a weighted average according to equation (7).

    [00012] y(t) = \sum_{n=1}^{N} w_n x_n(t)    (7)

    Therein the weights w.sub.n for the current segment can be derived, as discussed as one option above, directly from the noise variance estimates for this segment, according to equation (8).

    [00013] w_n = \frac{1/\hat{\sigma}_n^2}{\sum_{k=1}^{N} 1/\hat{\sigma}_k^2}    (8)

    As discussed above, the weights can alternatively be determined on the basis of calculating a root mean square value of a corresponding difference signal for each of the temporally weighted audio signal segments.
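
    Equations (7) and (8) can be sketched as follows; the variance estimates and segment samples below are illustrative numbers, not values from the document:

```python
import numpy as np

# Inverse-variance weights of equation (8); the weights sum to one.
var_est = np.array([0.1, 0.4, 1.6])           # sigma_hat_n^2 per repetition
w = (1.0 / var_est) / np.sum(1.0 / var_est)

# Weighted average of the repeated segments, equation (7).
segs = np.array([[1.0, 2.0, 3.0],             # x_1(t)
                 [1.2, 2.1, 2.9],             # x_2(t)
                 [0.5, 3.0, 3.5]])            # x_3(t), the noisiest repetition
y = w @ segs                                  # de-noised segment y(t)
```

    The noisiest repetition (largest variance estimate) receives the smallest weight, which is why non-stationary disturbances affecting a single repetition are strongly suppressed in y(t).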

    [0084] After the individual audio signals 210, ..., 250 are re-combined from the modified segments, an output signal 260 is generated in generation step 140. Therein the output audio signal is generated by applying a synthesis window function to the combined segments of each audio signal in sub-step 141. After that, in sub-step 142, an overlap-add method is performed on the corresponding results of the synthesis window function. Thereby the output audio signal is generated.

    [0085] Similar to the description of the analysis window function, since all audio signals are dissected similarly, the synthesis window function is also applied similarly for all audio signals. That means, for the n.sup.th segment of each audio signal the synthesis window function is the same.

    [0086] However, each segment within an audio signal can have an individual analysis window function, and therefore also an individual synthesis window function. That means, segments S.sub.X1 can have a different synthesis window function than segments S.sub.X2. And so on. Optionally, the synthesis window function for some or all segments of one audio signal (and thus for the corresponding segments in the other audio signals) can be the same.

    [0087] Further, the synthesis window function can be a cosine function. Alternatively, the synthesis window function can be the square root of a window function with constant-overlap-add property; other window functions can be used as well.

    [0088] In general terms, onto each segment S.sub.XY an analysis window function A.sub.XY is applied in segmentation step 110. In generation step 140 onto each segment S.sub.XY a synthesis window function SY.sub.XY is applied. As detailed above, all n.sup.th segments S.sub.X1 will have the same analysis window function and thus the same synthesis window function as well.

    [0089] However, the analysis window function and the synthesis window function A.sub.XY and SY.sub.XY can also be the same window function for some or all of the segments.

    [0090] Finally, some or all of the window function pairs analysis window function and the synthesis window function A.sub.XY and SY.sub.XY can be chosen such that the product of the analysis window function and the synthesis window function satisfies the constant-overlap-add property.

    [0091] This is also satisfied, for example, by using a Hann or Hamming window as the analysis window and no synthesis window or, more precisely, by using an identity function as the synthesis window.

    [0092] In other words, the final output signal 260 is generated by applying a synthesis window to the combined signal segments y(t) and performing an overlap-add method. In an advantageous embodiment, a cosine window is used in the segmentation step, and the same window function is used again in the generation step to achieve constant overlap add property.
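
    The advantageous embodiment above (the same cosine window applied in the segmentation step and again in the generation step, at 50% overlap) can be sketched as a round trip; the frame sizes and test signal are illustrative assumptions. The product of the two windows is a Hann window, which satisfies the COLA constraint, so the interior of the signal is reconstructed exactly:

```python
import numpy as np

seg_len, hop = 256, 128                                     # 50% overlap
win = np.sin(np.pi * (np.arange(seg_len) + 0.5) / seg_len)  # cosine window

x = np.random.default_rng(1).normal(size=2048)  # stand-in for a combined signal
out = np.zeros_like(x)
for start in range(0, len(x) - seg_len + 1, hop):
    seg = x[start:start + seg_len] * win      # analysis window (sub-step 112)
    # ... the weighted combination across repetitions would act on `seg` here ...
    out[start:start + seg_len] += seg * win   # synthesis window + overlap-add

# Away from the un-overlapped edges the round trip is an identity, because
# win * win (a Hann window) fulfills the COLA constraint at 50% overlap.
interior = slice(seg_len, len(x) - seg_len)
err = float(np.max(np.abs(out[interior] - x[interior])))
```

    Using the same window for analysis and synthesis thus trades a slight emphasis on segment centers during weighting for exact reconstruction after overlap-add.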

    [0093] FIG. 3 shows an example according to embodiments of the presented technique with 5 repetitions, i.e. audio signals, which can for example be simulated recordings. The audio signals contain, as an example, non-stationary signal degradation, shown in inputs 1 through 4 (210, ..., 240), and different noise levels, shown in input 5 (250). Output signal 260 is shown as the result. Each of the signals is shown with the x-axis indicating time in seconds and the y-axis indicating x(t).

    [0094] FIG. 4 shows an apparatus 400 for combining three or more audio signals 210, ..., 250. These audio signals 210, ..., 250 are for example repeated measurements of a sound system. The apparatus comprises a segmentation block 410. The segmentation block 410 segments or dissects each audio signal 210, ..., 250 into a plurality of segments 211, ..., 215. The dissection is performed such that each segment overlaps with adjacent segments a predetermined percentage of the segment length. Of course, the first and last segment can only overlap unilaterally. The same segmentation is used for all audio signals, such that all dissected audio signals have corresponding segment borders, that is, each 1.sup.st, 2.sup.nd, ..., n.sup.th segment of all audio signals have the same length, the same start time and the same end time. The segmentation block further is configured to apply an analysis window function to each of the audio signal segments. This can be performed for each segment of each audio signal individually. Thereby, each segment is transformed into a temporally weighted audio signal segment.

    [0095] The apparatus further comprises a weight determination block 420, which is configured to determine a weight value for each of the temporally weighted audio signal segments. This can also be done individually for each segment of each audio signal.

    [0096] The apparatus further comprises a combination block 430 for combining the temporally weighted audio signal segments of each audio signal. This can be done individually for each audio signal. The combination is performed by calculating a weighted average of all temporally weighted audio signal segments of each audio signal, using the determined weight value of each temporally weighted audio signal segment.

    [0097] The apparatus also comprises a synthesis block 440 for generating an output audio signal. The synthesis block is configured to apply a synthesis window function to the combined segments of each audio signal, and to perform an overlap-add method on the corresponding results of the synthesis window function. Thereby the output audio signal is generated.

    [0098] FIG. 5 shows an example of the effects the method has on an audio signal 510. First, audio signal 510 is dissected (sub-step 111 above) into segments, starting with segment k. The segments are referred to by 511, ..., 514, and the segments overlap, as shown schematically, with an overlap of 50%. Then an analysis window function is applied (sub-step 112 above) in 520, ..., 550 to each of the audio signal segments to produce temporally weighted audio signal segments 521, ..., 524. These temporally weighted audio signal segments 521, ..., 524 are then combined using the weights which have been determined (step 120 above) in the meantime or before the combining, to form the processed audio signal 560.

    [0099] If every audio signal has been processed in this manner, the processed audio signals are then combined again (step 130 of above, not shown in FIG. 5) to form the output signal.

    [0100] Above described method and apparatus can be used for calibrating sound systems.

    [0101] In summary, the presented technique takes repeated audio signals, like exponential sweep measurements which are repeated a few times (at least 3 times), and as one embodiment consecutively estimates short-term variances \hat{\sigma}_n^2 of the additive noise for each repetition. The time-varying variance estimates are then used to combine the repeated measurements in a minimum mean square error sense using a weighted average.

    [0102] Advantageously, if one (or more) of the repeated audio signals, i.e. sweep recordings, exhibits significantly greater noise variance than the other recordings at a given time, a significantly smaller weight is used for this (these) signal segment(s). As a consequence, the presented method can deal very well with non-stationary noise. FIG. 3 illustrates this.

    [0103] In contrast to this presented technique, conventional methods cannot deal very well with non-stationary noise. If the recorded sweep contained some unexpected background noise, the measurement had to be done again.

    [0104] To conclude, the embodiments described herein can optionally be supplemented by any of the important points or aspects described here. However, it is noted that the important points and aspects described here can either be used individually or in combination and can be introduced into any of the embodiments described herein, both individually and in combination.

    [0105] Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a device or a part thereof corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding apparatus or part of an apparatus or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.

    [0106] Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.

    [0107] Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.

    [0108] Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine-readable carrier.

    [0109] Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine-readable carrier.

    [0110] In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.

    [0111] A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory.

    [0112] A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.

    [0113] A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.

    [0114] A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.

    [0115] A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.

    [0116] In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are performed by any hardware apparatus.

    [0117] The apparatus described herein may be implemented using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.

    [0118] The apparatus described herein, or any components of the apparatus described herein, may be implemented at least partially in hardware and/or in software.

    [0119] The methods described herein may be performed using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.

    [0120] The methods described herein, or any parts of the methods described herein, may be performed at least partially by hardware and/or by software.

    [0121] While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.