PACKET LOSS CONCEALMENT FOR SPEECH CODING

20180012606 · 2018-01-11

Abstract

A speech coding method reduces error propagation due to voice packet loss by limiting or reducing the pitch gain for only the first subframe or the first two subframes within a speech frame; the excitation of the next frame is obtained according to the reduced or limited pitch gain value of the first subframe, and the next frame is encoded according to the obtained excitation. The method is used for a voiced speech class.

Claims

1. A method for encoding an audio signal, wherein the audio signal is encoded frame-by-frame by an encoder, and each frame comprises a plurality of subframes, the method comprising: for a current frame that is to be encoded, obtaining an excitation of the current frame according to a reduced or limited pitch gain value of a first subframe of a previous frame, wherein the current frame is successive to the previous frame, and wherein the reduced or limited pitch gain value of the first subframe of the previous frame is obtained by reducing or limiting an initial pitch gain value of the first subframe of the previous frame; and encoding the current frame of the audio signal according to the excitation of the current frame.

2. The method of claim 1, wherein the reduced or limited pitch gain value of the first subframe of the previous frame is smaller than the initial pitch gain value of the first subframe, and wherein reducing or limiting the initial pitch gain value of the first subframe to obtain the reduced or limited pitch gain value of the first subframe comprises: multiplying a scaling factor to the initial pitch gain value of the first subframe to obtain the reduced or limited pitch gain value of the first subframe, wherein the scaling factor is smaller than 1 and greater than 0.

3. The method of claim 1, wherein the reduced or limited pitch gain value of the first subframe is smaller than 1.

4. The method of claim 1, further comprising: inputting the excitation of the current frame to a Linear Prediction or Short-Term Prediction filter.

5. The method of claim 1, wherein the encoder is a voice over internet protocol (VOIP) device.

6. The method of claim 1, wherein the encoder is a part of a transmitting audio device that transmits broadcast quality, high fidelity audio data, streaming audio data, or an audio signal that accompanies video programming.

7. The method of claim 1, wherein the encoder is a handset device, a dedicated hardware component, or a computing device.

8. The method of claim 1, wherein the encoded audio signal is transmitted to a wide area network (WAN), a public switched telephone network (PSTN), or the Internet.

9. An apparatus, comprising: an audio signal input interface, configured to receive a sound signal and convert the sound signal into a digital audio signal, wherein the digital audio signal comprises multiple frames, and each frame comprises a plurality of subframes; and a processor for encoding the digital audio signal frame-by-frame, wherein the processor is configured to: for a current frame that is to be encoded, obtain an excitation of the current frame according to a reduced or limited pitch gain value of a first subframe of a previous frame, wherein the current frame is successive to the previous frame, and wherein the reduced or limited pitch gain value of the first subframe of the previous frame is obtained by reducing or limiting an initial pitch gain value of the first subframe of the previous frame; and encode the current frame of the digital audio signal according to the excitation of the current frame.

10. The apparatus of claim 9, wherein the reduced or limited pitch gain value of the first subframe of the previous frame is smaller than the initial pitch gain value of the first subframe, and wherein in reducing or limiting the initial pitch gain value of the first subframe to obtain the reduced or limited pitch gain value of the first subframe, the processor is configured to: multiply a scaling factor to the initial pitch gain value of the first sub-frame to obtain the reduced or limited pitch gain value of the first subframe, wherein the scaling factor is smaller than 1 and greater than 0.

11. The apparatus of claim 9, wherein the reduced or limited pitch gain value of the first subframe is smaller than 1.

12. The apparatus of claim 9, wherein the processor is further configured to: input the excitation of the current frame to a Linear Prediction or Short-Term Prediction filter.

13. The apparatus of claim 9, wherein the apparatus is a voice over internet protocol (VOIP) device.

14. The apparatus of claim 9, wherein the apparatus is a part of a transmitting audio device that transmits broadcast quality, high fidelity audio data, streaming audio data, or an audio signal that accompanies video programming.

15. The apparatus of claim 9, wherein the apparatus is a handset device, a dedicated hardware component, or a computing device.

16. The apparatus of claim 9, further comprising a network interface, wherein the network interface is configured to output the encoded digital audio signal to a wide area network (WAN), a public switched telephone network (PSTN), or the Internet.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0025] The features and advantages of the present invention will become more readily apparent to those ordinarily skilled in the art after reviewing the following detailed description and accompanying drawings, wherein:

[0026] FIG. 1 shows an initial CELP encoder.

[0027] FIG. 2 shows an initial decoder which adds the post-processing block.

[0028] FIG. 3 shows a basic CELP encoder which realizes the long-term linear prediction by using an adaptive codebook.

[0029] FIG. 4 shows a basic decoder corresponding to the encoder in FIG. 3.

[0030] FIG. 5 shows an example in which a pitch period is smaller than a subframe size.

[0031] FIG. 6 shows an example in which a pitch period is larger than a subframe size and smaller than a half frame size.

[0032] FIG. 7 shows an encoder based on an analysis-by-synthesis approach.

[0033] FIG. 8 shows a decoder corresponding to the encoder in FIG. 7.

[0034] FIG. 9 illustrates a communication system according to an embodiment of the present invention.

DETAILED DESCRIPTION

[0035] The making and using of the embodiments are discussed in detail below. It should be appreciated, however, that the present invention provides many applicable inventive concepts that can be embodied in a wide variety of specific contexts. The specific embodiments discussed are merely illustrative of specific ways to make and use the invention, and do not limit the scope of the invention.

[0036] The present invention will be described with respect to various embodiments in a specific context, a system and method for speech/audio coding and decoding. Embodiments of the invention may also be applied to other types of signal processing. The present invention discloses a switched long-term pitch prediction approach which improves packet loss concealment. The following description contains specific information pertaining to the CELP Technique. However, one skilled in the art will recognize that the present invention may be practiced in conjunction with various speech coding algorithms different from those specifically discussed in the present application. Moreover, some of the specific details, which are within the knowledge of a person of ordinary skill in the art, are not discussed to avoid obscuring the present invention.

[0037] The drawings in the present application and their accompanying detailed description are directed to merely example embodiments of the invention. To maintain brevity, other embodiments of the invention which use the principles of the present invention are not specifically described in the present application and are not specifically illustrated by the present drawings.

[0038] FIG. 1 shows an initial CELP encoder where a weighted error 109 between a synthesized speech 102 and an original speech 101 is minimized, often by using a so-called analysis-by-synthesis approach. W(z) is an error weighting filter 110. 1/B(z) is a long-term linear prediction filter 105; 1/A(z) is a short-term linear prediction filter 103. The code-excitation 108, which is also called the fixed codebook excitation, is scaled by a gain G_c 107 before going through the linear filters. The short-term linear filter 103 is obtained by analyzing the original signal 101 and is represented by a set of coefficients:

A(z) = 1 + Σ_{i=1}^{P} a_i·z^{−i},  i = 1, 2, …, P   (1)

[0039] The weighting filter 110 is derived from the above short-term prediction filter. A typical form of the weighting filter is

W(z) = A(z/α) / A(z/β),   (2)

[0040] where β<α, 0<β<1, 0<α≤1. The long-term prediction 105 depends on the pitch and the pitch gain; a pitch can be estimated from the original signal, the residual signal, or the weighted original signal. The long-term prediction function can in principle be expressed as


B(z) = 1 − β·z^{−Pitch}   (3)

[0041] The code-excitation 108 normally consists of a pulse-like or noise-like signal, which is either mathematically constructed or stored in a codebook. Finally, the code-excitation index, the quantized gain index, the quantized long-term prediction parameter index, and the quantized short-term prediction parameter index are transmitted to the decoder.
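The long-term prediction function of equation (3) can be illustrated with a short sketch (the function name and the list-based signal representation are illustrative conveniences, not part of the described codec):

```python
# Illustrative sketch of the long-term predictor of equation (3),
# B(z) = 1 - beta * z^(-Pitch): each sample is predicted from the
# sample one pitch period earlier, scaled by beta.
def ltp_residual(x, pitch, beta):
    """Apply B(z) to signal x (a list of floats); samples occurring
    before one pitch period has elapsed pass through unchanged."""
    return [x[n] - (beta * x[n - pitch] if n >= pitch else 0.0)
            for n in range(len(x))]
```

For a signal that repeats exactly with period equal to the pitch lag and beta near 1, the residual approaches zero, which is why the long-term prediction gain is large for voiced speech.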

[0042] FIG. 2 shows an initial decoder which adds a post-processing block 207 after the synthesized speech 206. The decoder is a combination of several blocks which are code-excitation 201, a long-term prediction 203, a short-term prediction 205 and post-processing 207. Every block except the post-processing has the same definition as described in the encoder of FIG. 1. The post-processing could further consist of a short-term post-processing and a long-term post-processing.

[0043] FIG. 3 shows a basic CELP encoder which realizes the long-term prediction by using an adaptive codebook 307, e_p(n), containing a past synthesized excitation 304. Periodic pitch information is employed to generate the adaptive component of the excitation. This excitation component is then scaled by a gain 305 (G_p, also called the pitch gain). The code-excitation 308, e_c(n), is scaled by a gain G_c 306. The two scaled excitation components are added together before going through the short-term linear prediction filter 303. The two gains (G_p and G_c) need to be quantized and then sent to a decoder.

[0044] FIG. 4 shows a basic decoder corresponding to the encoder in FIG. 3, which adds a post-processing block 408 after the synthesized speech 407. This decoder is similar to that of FIG. 2 except for the adaptive codebook 401. The decoder is a combination of several blocks: the code-excitation 402, the adaptive codebook 401, the short-term prediction 406, and the post-processing 408. Every block except the post-processing has the same definition as described for the encoder of FIG. 3. The post-processing could further consist of a short-term post-processing and a long-term post-processing.

[0045] FIG. 7 shows a basic encoder based on an analysis-by-synthesis approach, which generates a long-term prediction (LTP) excitation component 707, e_p(n), containing a past synthesized excitation 704. Periodic pitch information is employed to generate the LTP excitation component of the excitation. This LTP excitation component is then scaled by a gain 705 (G_p, also called the pitch gain). The second excitation component 708, e_c(n), is scaled by a gain G_c 706. The two scaled excitation components are added together before going through the short-term linear prediction filter 703. The two gains (G_p and G_c) need to be quantized and then sent to a decoder.

[0046] FIG. 8 shows a basic decoder corresponding to the encoder in FIG. 7, which adds a post-processing block 808 after the synthesized speech 807. This decoder is similar to that of FIG. 4 except that the two excitation components 801 and 802 are expressed in a more general notation. The decoder is a combination of several blocks: the second excitation component 802, the LTP excitation component 801, the short-term prediction 806, and the post-processing 808. Every block except the post-processing has the same definition as described for the encoder of FIG. 7. The post-processing could further consist of a short-term post-processing and a long-term post-processing.

[0047] FIG. 3 and FIG. 7 illustrate examples capable of embodying the present invention. With reference to FIG. 3, FIG. 4, FIG. 7 and FIG. 8, the long-term prediction plays an important role in voiced speech coding because voiced speech has strong periodicity. The adjacent pitch cycles of voiced speech are similar to each other, which means, mathematically, that the pitch gain G_p in the following excitation expression is very high:


e(n) = G_p·e_p(n) + G_c·e_c(n)   (4)

[0048] where e_p(n) is one subframe of a sample series indexed by n, coming from the adaptive codebook 307 or the LTP excitation component 707, which consists of the past excitation 304 or 704; e_c(n) is from the code-excitation codebook 308 (also called the fixed codebook) or the second excitation component 708, which is the current excitation contribution. For voiced speech, the contribution of e_p(n) from the adaptive codebook 307 or the LTP excitation component 707 could be dominant, and the pitch gain G_p 305 or 705 is around a value of 1. The excitation is usually updated for each subframe. A typical frame size is 20 milliseconds and a typical subframe size is 5 milliseconds. If a previous bit-stream packet is lost and the pitch gain G_p is high, an incorrect estimate of the previous synthesized excitation can cause error propagation for quite a long time after the decoder has already received a correct bit-stream packet. Part of the reason for this error propagation is that the phase relationship between e_p(n) and e_c(n) has been changed by the previous bit-stream packet loss. One simple solution to this issue is to completely cut (remove) the pitch contribution between frames, which means the pitch gain G_p 305 or 705 is set to zero in the encoder. Although this kind of solution solves the error propagation problem, it sacrifices too much quality when there is no bit-stream packet loss, or it requires a much higher bit rate to achieve the same quality as when the LTP is used. The invention explained in the following provides a compromise solution.
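The excitation of equation (4) can be sketched per subframe as follows (the helper name and the list-based buffers are assumptions for illustration; in a real codec e_p(n) is read from the adaptive codebook of past excitation and e_c(n) from the fixed codebook):

```python
# Minimal sketch of equation (4): the subframe excitation is the
# gain-scaled sum of the adaptive (LTP) contribution ep and the
# code-excitation (fixed codebook) contribution ec.
def subframe_excitation(ep, ec, gp, gc):
    """ep, ec: sample lists for one subframe; gp, gc: pitch gain
    and code gain. Returns e(n) = gp*ep(n) + gc*ec(n)."""
    assert len(ep) == len(ec)
    return [gp * p + gc * c for p, c in zip(ep, ec)]
```

When gp is near 1, the output depends heavily on the past excitation ep, which is exactly the dependency that lets a lost packet corrupt subsequent frames.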

[0049] For most voiced speech, one frame contains several pitch cycles. FIG. 5 shows an example in which a pitch period 503 is smaller than a subframe size 502. FIG. 6 shows an example in which a pitch period 603 is larger than a subframe size 602 and smaller than a half frame size. If the speech is very voiced, a compromise solution that avoids the error propagation due to transmission packet loss, while still profiting from the significant long-term prediction gain, is to limit the maximum pitch gain value for the first pitch cycle of each frame; equivalently, the energy of the LTP excitation component is reduced for the first pitch cycle of each frame or for the first subframe of each frame. When the pitch lag is much longer than the subframe size, the energy of the LTP excitation component can be reduced for the first subframe or for the first two subframes of each frame. A speech signal can be classified into different cases and treated differently. The following example assumes that a valid speech signal is classified into 4 classes:

[0050] Class 1: (strong voiced) and (pitch<=subframe size). For this frame, the pitch gain of the first subframe is reduced or limited to a value (say, around 0.5) smaller than 1; the limitation or reduction of the pitch gain can be realized by multiplying the pitch gain by a gain factor (which is smaller than 1) or by subtracting a value from the pitch gain; equivalently, the energy of the LTP excitation component can be reduced for the first subframe by multiplying it by an additional gain factor which is smaller than 1. For the first subframe, the code-excitation codebook size could be larger than that of the other subframes within the same frame, or one more stage of excitation component is added only for the first subframe, in order to compensate for the lower pitch gain of the first subframe; in other words, the bit rate of the second excitation component for the first subframe is set higher than the bit rate of the second excitation component for the other subframes within the same frame. For the subframes other than the first subframe, a regular CELP algorithm or a regular analysis-by-synthesis algorithm is used, which minimizes a coding error or a weighted coding error in a closed loop. As this is a strong voiced frame, the pitch track is stable (the pitch lag changes slowly or smoothly from one subframe to the next) and the pitch gains are high within the frame, so the pitch lags and the pitch gains can be encoded more efficiently with fewer bits, for example, by coding the pitch lags and/or the pitch gains differentially from one subframe to the next within the same frame.

[0051] Class 2: (strong voiced) and (pitch>subframe & pitch<=half frame). For this frame, the pitch gains of the first two subframes (half frame) are reduced or limited to a value (say, around 0.5) smaller than 1; the limitation or reduction of the pitch gains can be realized by multiplying the pitch gains by a gain factor (which is smaller than 1) or by subtracting a value from the pitch gains; equivalently, the energy of the LTP excitation component can be reduced for the first two subframes by multiplying it by an additional gain factor which is smaller than 1. For the first two subframes, the code-excitation codebook size could be larger than that of the other subframes within the same frame, or one more stage of excitation component is added only for the first half frame, in order to compensate for the lower pitch gains; in other words, the bit rate of the second excitation component for the first two subframes is set higher than the bit rate of the second excitation component for the other subframes within the same frame. For the subframes other than the first two subframes, a regular CELP algorithm or a regular analysis-by-synthesis algorithm is used, which minimizes a coding error or a weighted coding error in a closed loop. As this is a strong voiced frame, the pitch track is stable (the pitch lag changes slowly or smoothly from one subframe to the next) and the pitch gains are high within the frame, so the pitch lags and the pitch gains can be encoded more efficiently with fewer bits, for example, by coding the pitch lags and/or the pitch gains differentially from one subframe to the next within the same frame.

[0052] Class 3: (strong voiced) and (pitch>half frame). When the pitch lag is long, the error propagation effect due to the long-term prediction is less significant than in the short pitch lag case. For this frame, the pitch gains of the subframes covering the first pitch cycle are reduced or limited to a value smaller than 1; the code-excitation codebook size could be larger than the regular size, or one more stage of excitation component is added, in order to compensate for the lower pitch gains. Since a long pitch lag causes less error propagation and the probability of having a long pitch lag is relatively small, a regular CELP algorithm or a regular analysis-by-synthesis algorithm, which minimizes a coding error or a weighted coding error in a closed loop, can also be used for the entire frame. As this is a strong voiced frame, the pitch track is stable and the pitch gains are high within the frame, so they can be coded more efficiently with fewer bits.

[0053] Class 4: all cases other than Class 1, Class 2, and Class 3. For all the other cases (excluding Class 1, Class 2, and Class 3), a regular CELP algorithm or a regular analysis-by-synthesis algorithm can be used, which minimizes a coding error or a weighted coding error in a closed loop. Of course, for some specific frames such as unvoiced speech or background noise, an open-loop approach or a combined open-loop/closed-loop approach can be used; the details are not discussed here, as this subject is outside the scope of this application.
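The four-class decision of paragraphs [0050] through [0053] might be sketched as follows (the voicing flag and the pitch, subframe, and frame sizes in samples are assumed inputs; the class numbering is arbitrary, as the description itself notes):

```python
# Illustrative sketch of the four-class frame classification.
# strong_voiced: boolean voicing decision (assumed to come from a
# separate classifier); pitch, subframe_size, frame_size: in samples.
def classify_frame(strong_voiced, pitch, subframe_size, frame_size):
    half_frame = frame_size // 2
    if strong_voiced and pitch <= subframe_size:
        return 1  # reduce pitch gain of first subframe
    if strong_voiced and subframe_size < pitch <= half_frame:
        return 2  # reduce pitch gains of first two subframes
    if strong_voiced and pitch > half_frame:
        return 3  # long lag: reduction optional, propagation is weak
    return 4      # regular CELP / analysis-by-synthesis coding
```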

[0054] The class index (class number) assigned above to each defined class can be changed without changing the result. For example, the condition (strong voiced) and (pitch<=subframe size) can be defined as Class 2 rather than Class 1; the condition (strong voiced) and (pitch>subframe & pitch<=half frame) can be defined as Class 3 rather than Class 2; etc.

[0055] In general, the error propagation effect due to speech packet loss is reduced by adaptively diminishing or reducing pitch correlations at the boundary of speech frames while still keeping significant contributions from the long-term pitch prediction.

[0056] In some embodiments, a method of improving packet loss concealment for speech coding while still profiting from a pitch prediction or LTP comprises: having an LTP excitation component; having a second excitation component; determining an initial energy of the LTP excitation component for every subframe within a frame of a speech signal by using a regular method of minimizing a coding error or a weighted coding error at an encoder; reducing or limiting the energy of the LTP excitation component to be smaller than the initial energy of the LTP excitation component for the first subframe within the frame; keeping the energy of the LTP excitation component equal to the initial energy of the LTP excitation component for any subframe other than the first subframe within the frame; encoding the energy of the LTP excitation component for every subframe of the frame at the encoder; and forming an excitation by including the LTP excitation component and the second excitation component.

[0057] Encoding the energy of the LTP excitation component comprises encoding a gain factor which, for the first subframe, is limited or reduced to a value smaller than 1. Coding quality loss due to the gain factor reduction is compensated by increasing the coding bit rate of the second excitation component of the first subframe to be larger than the coding bit rate of the second excitation component of any other subframe within the frame. Coding quality loss due to the gain factor reduction can also be compensated by adding one more stage of excitation component to the second excitation component for the first subframe but not for the other subframes within the frame. The energy limitation or reduction of the LTP excitation component for the first subframe within the frame is employed for voiced speech and not for unvoiced speech.
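A minimal sketch of the first-subframe gain reduction described above, assuming the per-subframe initial pitch gains have already been determined by the closed-loop search (the function name and the 0.5 default are illustrative; the description also permits subtracting a value or clamping to a ceiling instead of multiplying):

```python
# Hedged sketch: multiply only the first subframe's pitch gain by a
# scaling factor in (0, 1); the other subframes keep their initial
# closed-loop gains, preserving most of the LTP coding gain.
def reduce_first_subframe_gain(pitch_gains, factor=0.5):
    """pitch_gains: per-subframe initial pitch gains for one frame.
    Returns the gains with the first subframe reduced/limited."""
    assert 0.0 < factor < 1.0
    return [pitch_gains[0] * factor] + list(pitch_gains[1:])
```

Because only the frame-boundary subframe is attenuated, a decoder that lost the previous packet resynchronizes quickly, while the within-frame pitch prediction is untouched.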

[0058] In other embodiments, a method of improving packet loss concealment for speech coding while still profiting from a pitch prediction or LTP comprises: classifying a plurality of speech frames into a plurality of classes; and, for at least one of the classes, the following steps: having an LTP excitation component; having a second excitation component; determining an initial energy of the LTP excitation component for every subframe within a frame of a speech signal by using a regular method of minimizing a coding error or a weighted coding error at an encoder; comparing a pitch cycle length with a subframe size within a speech frame; reducing or limiting the energy of the LTP excitation component to be smaller than the initial energy of the LTP excitation component for the first subframe or the first two subframes within the frame, depending on the pitch cycle length compared to the subframe size; keeping the energy of the LTP excitation component equal to the initial energy of the LTP excitation component for any subframe other than the first subframe or the first two subframes within the frame; encoding the energy of the LTP excitation component for every subframe of the frame at the encoder; and forming an excitation by including the LTP excitation component and the second excitation component.

[0059] Encoding the energy of the LTP excitation component comprises encoding a gain factor which, for the first subframe, is limited or reduced to a value smaller than 1. Coding quality loss due to the gain factor reduction is compensated by increasing the coding bit rate of the second excitation component of the first subframe or the first two subframes to be larger than the coding bit rate of the second excitation component of any other subframe within the frame. Coding quality loss due to the gain factor reduction can also be compensated by adding one more stage of excitation component to the second excitation component for the first subframe or the first two subframes but not for the other subframes within the frame. The energy limitation or reduction of the LTP excitation component for the first subframe or the first two subframes within the frame is employed for voiced speech and not for unvoiced speech.

[0060] In other embodiments, a method of improving packet loss concealment for speech coding while still profiting from a pitch prediction or LTP comprises: classifying a plurality of speech frames into a plurality of classes; and, for at least one of the classes, the following steps: having an LTP excitation component; having a second excitation component; deciding a first subframe size based on a pitch cycle length within a speech frame; determining an initial energy of the LTP excitation component for every subframe within a frame of a speech signal by using a regular method of minimizing a coding error or a weighted coding error at an encoder; reducing or limiting the energy of the LTP excitation component to be smaller than the initial energy of the LTP excitation component for the first subframe within the frame; keeping the energy of the LTP excitation component equal to the initial energy of the LTP excitation component for any subframe other than the first subframe within the frame; encoding the energy of the LTP excitation component for every subframe of the frame at the encoder; and forming an excitation by including the LTP excitation component and the second excitation component. Encoding the energy of the LTP excitation component comprises encoding a gain factor.

[0061] The initial energy of the LTP excitation component and the second excitation component are determined by using an analysis-by-synthesis approach. An example of the analysis-by-synthesis approach is CELP methodology.

[0062] In other embodiments, a method of efficiently encoding a voiced frame comprises: classifying a plurality of speech frames into a plurality of classes; and, for at least one of the classes, the following steps: having an LTP excitation component; having a second excitation component; encoding an energy of the LTP excitation component by encoding a pitch gain; checking whether a pitch track or the pitch lags within the voiced frame are stable from one subframe to the next; checking whether the voiced frame is strongly voiced by checking whether the pitch gains within the voiced frame are high; encoding the pitch lags or the pitch gains efficiently by differential coding from one subframe to the next if the voiced frame is strongly voiced and the pitch lags are stable; and forming an excitation by including the LTP excitation component and the second excitation component. The energy of the LTP excitation component and the second excitation component can be determined by using an analysis-by-synthesis approach, which can be a CELP methodology.
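The differential coding of stable pitch lags mentioned above can be sketched as follows (hypothetical helper names; a real codec would additionally clamp each delta to the allotted bit budget and quantize it):

```python
# Illustrative sketch of differential pitch lag coding within a
# stable voiced frame: the first subframe's lag is coded absolutely
# and each subsequent lag as a small delta from the previous one,
# which needs fewer bits when the pitch track changes smoothly.
def diff_code_pitch_lags(lags):
    return [lags[0]] + [b - a for a, b in zip(lags, lags[1:])]

def diff_decode_pitch_lags(deltas):
    lags = [deltas[0]]
    for d in deltas[1:]:
        lags.append(lags[-1] + d)
    return lags
```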

[0063] FIG. 9 illustrates a communication system 10 according to an embodiment of the present invention. Communication system 10 has audio access devices 6 and 8 coupled to network 36 via communication links 38 and 40. In one embodiment, audio access devices 6 and 8 are voice over internet protocol (VOIP) devices and network 36 is a wide area network (WAN), a public switched telephone network (PSTN), and/or the Internet. In another embodiment, audio access device 6 is a receiving audio device and audio access device 8 is a transmitting audio device that transmits broadcast quality, high fidelity audio data, streaming audio data, and/or audio that accompanies video programming. Communication links 38 and 40 are wireline and/or wireless broadband connections. In an alternative embodiment, audio access devices 6 and 8 are cellular or mobile telephones, links 38 and 40 are wireless mobile telephone channels, and network 36 represents a mobile telephone network. Audio access device 6 uses microphone 12 to convert sound, such as music or a person's voice, into analog audio input signal 28. Microphone interface 16 converts analog audio input signal 28 into digital audio signal 32 for input into encoder 22 of CODEC 20. Encoder 22 produces encoded audio signal TX for transmission to network 36 via network interface 26 according to embodiments of the present invention. Decoder 24 within CODEC 20 receives encoded audio signal RX from network 36 via network interface 26, and converts encoded audio signal RX into digital audio signal 34. Speaker interface 18 converts digital audio signal 34 into audio signal 30 suitable for driving loudspeaker 14.

[0064] In embodiments of the present invention where audio access device 6 is a VOIP device, some or all of the components within audio access device 6 can be implemented within a handset. In some embodiments, however, microphone 12 and loudspeaker 14 are separate units, and microphone interface 16, speaker interface 18, CODEC 20 and network interface 26 are implemented within a personal computer. CODEC 20 can be implemented in either software running on a computer or a dedicated processor, or by dedicated hardware, for example, on an application specific integrated circuit (ASIC). Microphone interface 16 is implemented by an analog-to-digital (A/D) converter, as well as other interface circuitry located within the handset and/or within the computer. Likewise, speaker interface 18 is implemented by a digital-to-analog converter and other interface circuitry located within the handset and/or within the computer. In further embodiments, audio access device 6 can be implemented and partitioned in other ways known in the art.

[0065] In embodiments of the present invention where audio access device 6 is a cellular or mobile telephone, the elements within audio access device 6 are implemented within a cellular handset. CODEC 20 is implemented by software running on a processor within the handset or by dedicated hardware. In further embodiments of the present invention, the audio access device may be implemented in other devices, such as peer-to-peer wireline and wireless digital communication systems, including intercoms and radio handsets. In applications such as consumer audio devices, the audio access device may contain a CODEC with only encoder 22 or decoder 24, for example, in a digital microphone system or music playback device. In other embodiments of the present invention, CODEC 20 can be used without microphone 12 and speaker 14, for example, in cellular base stations that access the PSTN.

[0066] Although the embodiments and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

[0067] The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.