Methods, encoder and decoder for linear predictive encoding and decoding of sound signals upon transition between frames having different sampling rates
09852741 · 2017-12-26
CPC classification
G10L19/06
PHYSICS
G10L19/12
PHYSICS
G10L19/167
PHYSICS
G10L19/173
PHYSICS
International classification
G10L19/06
PHYSICS
G10L19/12
PHYSICS
G10L19/24
PHYSICS
Abstract
Methods, an encoder and a decoder are configured for transition between frames with different internal sampling rates. Linear predictive (LP) filter parameters are converted from a sampling rate S1 to a sampling rate S2. A power spectrum of a LP synthesis filter is computed, at the sampling rate S1, using the LP filter parameters. The power spectrum of the LP synthesis filter is modified to convert it from the sampling rate S1 to the sampling rate S2. The modified power spectrum of the LP synthesis filter is inverse transformed to determine autocorrelations of the LP synthesis filter at the sampling rate S2. The autocorrelations are used to compute the LP filter parameters at the sampling rate S2.
Claims
1. A method for encoding a sound signal, comprising: producing, in response to the sound signal, parameters for encoding the sound signal during successive sound signal processing frames, wherein the sound signal encoding parameters include linear predictive (LP) filter parameters, wherein producing the LP filter parameters comprises, when switching from a first one of the frames using an internal sampling rate S1 to a second one of the frames using an internal sampling rate S2, converting the LP filter parameters from the first frame from the internal sampling rate S1 to the internal sampling rate S2, and wherein converting the LP filter parameters from the first frame comprises: computing, at the internal sampling rate S1, a power spectrum of a LP synthesis filter using the LP filter parameters; modifying the power spectrum of the LP synthesis filter to convert it from the internal sampling rate S1 to the internal sampling rate S2; inverse transforming the modified power spectrum of the LP synthesis filter to determine autocorrelations of the LP synthesis filter at the internal sampling rate S2; and using the autocorrelations to compute the LP filter parameters at the internal sampling rate S2; and encoding the sound signal encoding parameters into a bitstream; and wherein modifying the power spectrum of the LP synthesis filter to convert it from the internal sampling rate S1 to the internal sampling rate S2 comprises: if S1 is less than S2, extending the power spectrum of the LP synthesis filter based on a ratio between S1 and S2; if S1 is larger than S2, truncating the power spectrum of the LP synthesis filter based on the ratio between S1 and S2.
2. The method as recited in claim 1, wherein the frames are divided into subframes, and wherein the method comprises computing LP filter parameters in each subframe of a current frame by interpolating LP filter parameters of the current frame at the internal sampling rate S2 with LP filter parameters of a past frame converted from the internal sampling rate S1 to the internal sampling rate S2.
3. The method as recited in claim 2, comprising forcing the current frame to an encoding mode that does not use a history of an adaptive codebook.
4. The method as recited in claim 2, comprising forcing a LP-parameter quantizer to use a non-predictive quantization method in the current frame.
5. The method as recited in claim 1, wherein the power spectrum of the LP synthesis filter is a discrete power spectrum.
6. The method as recited in claim 1, comprising: computing the power spectrum of the LP synthesis filter at K samples; extending the power spectrum of the LP synthesis filter to K(S2/S1) samples when the internal sampling rate S1 is less than the internal sampling rate S2; and truncating the power spectrum of the LP synthesis filter to K(S2/S1) samples when the internal sampling rate S1 is greater than the internal sampling rate S2.
7. The method as recited in claim 1, comprising computing the power spectrum of the LP synthesis filter as an energy of a frequency response of the LP synthesis filter.
8. The method as recited in claim 1, comprising inverse transforming the modified power spectrum of the LP synthesis filter by using an inverse discrete Fourier Transform.
9. The method as recited in claim 1, comprising searching a fixed codebook using a reduced number of iterations.
10. A method for decoding a sound signal, comprising: receiving a bitstream including sound signal encoding parameters in successive sound signal processing frames, wherein the sound signal encoding parameters include linear predictive (LP) filter parameters; decoding from the bitstream the sound signal encoding parameters including the LP filter parameters during the successive sound signal processing frames, and producing from the decoded sound signal encoding parameters an LP synthesis filter excitation signal, wherein decoding the LP filter parameters comprises, when switching from a first one of the frames using an internal sampling rate S1 to a second one of the frames using an internal sampling rate S2, converting LP filter parameters from the first frame from the internal sampling rate S1 to the internal sampling rate S2, and wherein converting the LP filter parameters from the first frame comprises: computing, at the internal sampling rate S1, a power spectrum of a LP synthesis filter using the received LP filter parameters; modifying the power spectrum of the LP synthesis filter to convert it from the internal sampling rate S1 to the internal sampling rate S2; inverse transforming the modified power spectrum of the LP synthesis filter to determine autocorrelations of the LP synthesis filter at the internal sampling rate S2; and using the autocorrelations to compute the LP filter parameters at the internal sampling rate S2; synthesizing the sound signal using LP synthesis filtering in response to the decoded LP filter parameters and the LP synthesis filter excitation signal; and wherein modifying the power spectrum of the LP synthesis filter to convert it from the internal sampling rate S1 to the internal sampling rate S2 comprises: if S1 is less than S2, extending the power spectrum of the LP synthesis filter based on a ratio between S1 and S2; if S1 is larger than S2, truncating the power spectrum of the LP synthesis filter based on the ratio between S1 and S2.
11. The method as recited in claim 10, wherein the frames are divided into subframes, and wherein the method comprises computing LP filter parameters in each subframe of a current frame by interpolating LP filter parameters of the current frame at the internal sampling rate S2 with LP filter parameters of a past frame converted from the internal sampling rate S1 to the internal sampling rate S2.
12. The method as recited in claim 10, wherein the power spectrum of the LP synthesis filter is a discrete power spectrum.
13. The method as recited in claim 10, comprising: computing the power spectrum of the LP synthesis filter at K samples; extending the power spectrum of the LP synthesis filter to K(S2/S1) samples when the internal sampling rate S1 is less than the internal sampling rate S2; and truncating the power spectrum of the LP synthesis filter to K(S2/S1) samples when the internal sampling rate S1 is greater than the internal sampling rate S2.
14. The method as recited in claim 10, comprising computing the power spectrum of the LP synthesis filter as an energy of a frequency response of the LP synthesis filter.
15. The method as recited in claim 10, comprising inverse transforming the modified power spectrum of the LP synthesis filter by using an inverse discrete Fourier Transform.
16. The method as recited in claim 10, wherein a post filtering is skipped to reduce decoding complexity.
17. A device for encoding a sound signal, comprising: at least one processor; and a memory coupled to the processor and comprising non-transitory instructions that when executed cause the processor to: produce, in response to the sound signal, parameters for encoding the sound signal during successive sound signal processing frames, wherein (a) the sound signal encoding parameters include linear predictive (LP) filter parameters, (b) for producing the LP filter parameters when switching from a first one of the frames using an internal sampling rate S1 to a second one of the frames using an internal sampling rate S2, the processor is configured to convert the LP filter parameters from the first frame from the internal sampling rate S1 to the internal sampling rate S2, and (c) for converting the LP filter parameters from the first frame, the processor is configured to: compute, at the internal sampling rate S1, a power spectrum of a LP synthesis filter using the LP filter parameters, modify the power spectrum of the LP synthesis filter to convert it from the internal sampling rate S1 to the internal sampling rate S2, inverse transform the modified power spectrum of the LP synthesis filter to determine autocorrelations of the LP synthesis filter at the internal sampling rate S2, use the autocorrelations to compute the LP filter parameters at the internal sampling rate S2, and encode the sound signal encoding parameters into a bitstream; and wherein the processor is configured to: extend the power spectrum of the LP synthesis filter based on a ratio between S1 and S2 if S1 is less than S2; and truncate the power spectrum of the LP synthesis filter based on the ratio between S1 and S2 if S1 is larger than S2.
18. The device as recited in claim 17, wherein the frames are divided into subframes, and wherein the processor is configured to compute LP filter parameters in each subframe of a current frame by interpolating LP filter parameters of the current frame at the internal sampling rate S2 with LP filter parameters of a past frame converted from the internal sampling rate S1 to the internal sampling rate S2.
19. The device as recited in claim 17, wherein the processor is configured to: compute the power spectrum of the LP synthesis filter at K samples; extend the power spectrum of the LP synthesis filter to K(S2/S1) samples when the internal sampling rate S1 is less than the internal sampling rate S2; and truncate the power spectrum of the LP synthesis filter to K(S2/S1) samples when the internal sampling rate S1 is greater than the internal sampling rate S2.
20. The device as recited in claim 17, wherein the processor is configured to compute the power spectrum of the LP synthesis filter as an energy of a frequency response of the LP synthesis filter.
21. The device as recited in claim 17, wherein the processor is configured to inverse transform the modified power spectrum of the LP synthesis filter by using an inverse discrete Fourier Transform.
22. A device for decoding a sound signal, comprising: at least one processor; and a memory coupled to the processor and comprising non-transitory instructions that when executed cause the processor to: receive a bitstream including sound signal encoding parameters in successive sound signal processing frames, wherein the sound signal encoding parameters include linear predictive (LP) filter parameters; decode from the bitstream the sound signal encoding parameters including the LP filter parameters during the successive sound signal processing frames, and produce from the decoded sound signal encoding parameters an LP synthesis filter excitation signal, wherein (a) for decoding the LP filter parameters when switching from a first one of the frames using an internal sampling rate S1 to a second one of the frames using an internal sampling rate S2, the processor is configured to convert the LP filter parameters from the first frame from the internal sampling rate S1 to the internal sampling rate S2, and (b) for converting the LP filter parameters from the first frame, the processor is configured to: compute, at the internal sampling rate S1, a power spectrum of a LP synthesis filter using the received LP filter parameters, modify the power spectrum of the LP synthesis filter to convert it from the internal sampling rate S1 to the internal sampling rate S2, inverse transform the modified power spectrum of the LP synthesis filter to determine autocorrelations of the LP synthesis filter at the internal sampling rate S2, and use the autocorrelations to compute the LP filter parameters at the internal sampling rate S2, and synthesize the sound signal using LP synthesis filtering in response to the decoded LP filter parameters and the LP synthesis filter excitation signal, and wherein the processor is configured to: extend the power spectrum of the LP synthesis filter based on a ratio between S1 and S2 if S1 is less than S2; and truncate the power spectrum of the LP synthesis filter based on the ratio between S1 and S2 if S1 is larger than S2.
23. The device as recited in claim 22, wherein the frames are divided into subframes, and wherein the processor is configured to compute LP filter parameters in each subframe of a current frame by interpolating LP filter parameters of the current frame at the internal sampling rate S2 with LP filter parameters of a past frame converted from the internal sampling rate S1 to the internal sampling rate S2.
24. The device as recited in claim 22, wherein the processor is configured to: compute the power spectrum of the LP synthesis filter at K samples; extend the power spectrum of the LP synthesis filter to K(S2/S1) samples when the internal sampling rate S1 is less than the internal sampling rate S2; and truncate the power spectrum of the LP synthesis filter to K(S2/S1) samples when the internal sampling rate S1 is greater than the internal sampling rate S2.
25. The device as recited in claim 22, wherein the processor is configured to compute the power spectrum of the LP synthesis filter as an energy of a frequency response of the LP synthesis filter.
26. The device as recited in claim 22, wherein the processor is configured to inverse transform the modified power spectrum of the LP synthesis filter by using an inverse discrete Fourier Transform.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) In the appended drawings: [figure descriptions omitted]
DETAILED DESCRIPTION
(7) The non-restrictive illustrative embodiment of the present disclosure is concerned with a method and a device for efficient switching, in an LP-based codec, between frames using different internal sampling rates. The switching method and device can be used with any sound signals, including speech and audio signals. The switching between 16 kHz and 12.8 kHz internal sampling rates is given by way of example; however, the switching method and device can also be applied to other sampling rates.
(11) Presently, the most widespread speech coding techniques are based on Linear Prediction (LP), in particular CELP. In LP-based coding, the synthesized digital sound signal 113 is produced by filtering an excitation 214 through a LP synthesis filter 216 having a transfer function 1/A(z). In CELP, the excitation 214 is typically composed of two parts: a first-stage, adaptive-codebook contribution 222 selected from an adaptive codebook 218 and amplified by an adaptive-codebook gain g.sub.p 226 and a second-stage, fixed-codebook contribution 224 selected from a fixed codebook 220 and amplified by a fixed-codebook gain g.sub.c 228. Generally speaking, the adaptive codebook contribution 222 models the periodic part of the excitation and the fixed codebook contribution 224 is added to model the evolution of the sound signal.
(12) The sound signal is processed by frames of typically 20 ms and the LP filter parameters are transmitted once per frame. In CELP, the frame is further divided in several subframes to encode the excitation. The subframe length is typically 5 ms.
(13) CELP uses a principle called Analysis-by-Synthesis where possible decoder outputs are tried (synthesized) already during the coding process at the encoder 106 and then compared to the original digital sound signal 105. The encoder 106 thus includes elements similar to those of the decoder 110. These elements include an adaptive codebook contribution 250 selected from an adaptive codebook 242 that supplies a past excitation signal v(n) convolved with the impulse response of a weighted synthesis filter H(z) (see 238) (cascade of the LP synthesis filter 1/A(z) and the perceptual weighting filter W(z)), the result y.sub.1(n) of which is amplified by an adaptive-codebook gain g.sub.p 240. Also included is a fixed codebook contribution 252 selected from a fixed codebook 244 that supplies an innovative codevector c.sub.k(n) convolved with the impulse response of the weighted synthesis filter H(z) (see 246), the result y.sub.2(n) of which is amplified by a fixed codebook gain g.sub.c 248.
(14) The encoder 106 also comprises a perceptual weighting filter W(z) 233 and a provider 234 of a zero-input response of the cascade (H(z)) of the LP synthesis filter 1/A(z) and the perceptual weighting filter W(z). Subtractors 236, 254 and 256 respectively subtract the zero-input response, the adaptive codebook contribution 250 and the fixed codebook contribution 252 from the original digital sound signal 105 filtered by the perceptual weighting filter 233 to provide a mean-squared error 232 between the original digital sound signal 105 and the synthesized digital sound signal 113.
(15) The codebook search minimizes the mean-squared error 232 between the original digital sound signal 105 and the synthesized digital sound signal 113 in a perceptually weighted domain, where the discrete time index n=0, 1, . . . , N−1 and N is the length of the subframe. The perceptual weighting filter W(z) exploits the frequency masking effect and is typically derived from the LP filter A(z).
(16) An example of the perceptual weighting filter W(z) for WB (wideband, bandwidth of 50-7000 Hz) signals can be found in Reference [1].
(17) Since the memory of the LP synthesis filter 1/A(z) and the weighting filter W(z) is independent from the searched codevectors, this memory can be subtracted from the original digital sound signal 105 prior to the fixed codebook search. Filtering of the candidate codevectors can then be done by means of a convolution with the impulse response of the cascade of the filters 1/A(z) and W(z), represented by H(z).
(18) The digital bit stream 111 transmitted from the encoder 106 to the decoder 110 contains typically the following parameters 107: quantized parameters of the LP filter A(z), indices of the adaptive codebook 242 and of the fixed codebook 244, and the gains g.sub.p 240 and g.sub.c 248 of the adaptive codebook 242 and of the fixed codebook 244.
(19) Converting LP Filter Parameters When Switching at Frame Boundaries with Different Sampling Rates
(20) In LP-based coding, the LP filter A(z) is determined once per frame and then interpolated for each subframe. For example, with four subframes per frame, interpolating between the LP parameters F0 of the past frame and the LP parameters F1 of the current frame gives:
SF1=0.75 F0+0.25 F1;
SF2=0.5 F0+0.5 F1;
SF3=0.25 F0+0.75 F1;
SF4=F1.
(21) Other interpolation examples may alternatively be used depending on the LP analysis window shape, length and position. In another embodiment, the coder switches between 12.8 kHz and 16 kHz internal sampling rates, where 4 subframes per frame are used at 12.8 kHz and 5 subframes per frame are used at 16 kHz, and where the LP parameters are also quantized in the middle of the present frame (Fm). In this other embodiment, LP parameter interpolation for a 12.8 kHz frame is given by:
SF1=0.5 F0+0.5 Fm;
SF2=Fm;
SF3=0.5 Fm+0.5 F1;
SF4=F1.
(22) For a 16 kHz sampling, the interpolation is given by:
SF1=0.55 F0+0.45 Fm;
SF2=0.15 F0+0.85 Fm;
SF3=0.75 Fm+0.25 F1;
SF4=0.35 Fm+0.65 F1;
SF5=F1.
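The two interpolation schedules above can be sketched in code as follows. This is an illustrative sketch, not code from the codec; the helper names (`interpolate_lsf`, `WEIGHTS_12K8`, `WEIGHTS_16K`) and the representation of LSF vectors as plain Python lists are our assumptions.

```python
def interpolate_lsf(weights, *lsf_sets):
    """Weighted sum of equal-length LSF vectors."""
    return [sum(w * v[i] for w, v in zip(weights, lsf_sets))
            for i in range(len(lsf_sets[0]))]

# Per-subframe weights over (F0, Fm, F1) for a 12.8 kHz frame (4 subframes):
WEIGHTS_12K8 = [(0.5, 0.5, 0.0), (0.0, 1.0, 0.0),
                (0.0, 0.5, 0.5), (0.0, 0.0, 1.0)]

# Per-subframe weights over (F0, Fm, F1) for a 16 kHz frame (5 subframes):
WEIGHTS_16K = [(0.55, 0.45, 0.0), (0.15, 0.85, 0.0),
               (0.0, 0.75, 0.25), (0.0, 0.35, 0.65), (0.0, 0.0, 1.0)]

def subframe_lsfs(f0, fm, f1, weights_per_subframe):
    """LSF parameters for each subframe of the current frame."""
    return [interpolate_lsf(w, f0, fm, f1) for w in weights_per_subframe]
```

For example, `subframe_lsfs(f0, fm, f1, WEIGHTS_12K8)` returns the four subframe LSF vectors SF1 to SF4 of the 12.8 kHz schedule.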
(23) LP analysis results in computing the parameters of the LP synthesis filter using:
(24) 1/A(z)=1/(1+α.sub.1z.sup.−1+α.sub.2z.sup.−2+ . . . +α.sub.Mz.sup.−M)  (1)
(25) where α.sub.i, i=1, . . . , M, are LP filter parameters and M is the filter order.
(26) The LP filter parameters are transformed to another domain for quantization and interpolation purposes. Other LP parameter representations commonly used are reflection coefficients, log-area ratios, immittance spectrum pairs (used in AMR-WB; Reference [1]), and line spectrum pairs, which are also called line spectrum frequencies (LSF). In this illustrative embodiment, the line spectrum frequency representation is used. An example of a method that can be used to convert the LP parameters to LSF parameters and vice versa can be found in Reference [2]. The interpolation example in the previous paragraph is applied to the LSF parameters, which can be in the frequency domain in the range between 0 and Fs/2 (where Fs is the sampling frequency), or in the scaled frequency domain between 0 and π, or in the cosine domain (cosine of scaled frequency).
(27) As described above, different internal sampling rates may be used at different bit rates to improve quality in multi-rate LP-based coding. In this illustrative embodiment, a multi-rate CELP wideband coder is used where an internal sampling rate of 12.8 kHz is used at lower bit rates and an internal sampling rate of 16 kHz at higher bit rates. At a 12.8 kHz sampling rate, the LSFs cover the bandwidth from 0 to 6.4 kHz, while at a 16 kHz sampling rate they cover the range from 0 to 8 kHz. When switching the bit rate between two frames where the internal sampling rate is different, some issues are addressed to ensure seamless switching. These issues include the interpolation of LP filter parameters and the memories of the synthesis filter and the adaptive codebook, which are at different sampling rates.
(28) The present disclosure introduces a method for efficient interpolation of LP parameters between two frames at different internal sampling rates. By way of example, the switching between 12.8 kHz and 16 kHz sampling rates is considered. The disclosed techniques are however not limited to these particular sampling rates and may apply to other internal sampling rates.
(29) Let's assume that the encoder is switching from a frame F1 with an internal sampling rate S1 to a frame F2 with an internal sampling rate S2. The LP parameters in the first frame are denoted LSF1.sub.S1 and the LP parameters in the second frame are denoted LSF2.sub.S2. In order to update the LP parameters in each subframe of frame F2, the LP parameters LSF1 and LSF2 are interpolated. In order to perform the interpolation, the filters have to be set at the same sampling rate. This requires performing the LP analysis of frame F1 at the sampling rate S2. To avoid transmitting the LP filter twice at the two sampling rates in frame F1, the LP analysis at the sampling rate S2 could be performed on the past synthesis signal, which is available at both the encoder and the decoder. This approach would involve re-sampling the past synthesis signal from rate S1 to rate S2 and performing a complete LP analysis, an operation repeated at the decoder, which is usually computationally demanding.
(30) Alternative methods and devices are disclosed herein for converting the LP synthesis filter parameters LSF1 from the sampling rate S1 to the sampling rate S2 without the need to re-sample the past synthesis signal and perform a complete LP analysis. The method, used at encoding and/or at decoding, comprises computing the power spectrum of the LP synthesis filter at the rate S1; modifying the power spectrum to convert it from the rate S1 to the rate S2; converting the modified power spectrum back to the time domain to obtain the autocorrelations of the filter at the rate S2; and finally using the autocorrelations to compute the LP filter parameters at the rate S2.
(31) In at least some embodiments, modifying the power spectrum to convert it from rate S1 to rate S2 comprises the following operations: If S1 is larger than S2, modifying the power spectrum comprises truncating the K-sample power spectrum down to K(S2/S1) samples, that is, removing K(S1−S2)/S1 samples. On the other hand, if S1 is smaller than S2, then modifying the power spectrum comprises extending the K-sample power spectrum up to K(S2/S1) samples, that is, adding K(S2−S1)/S1 samples.
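A minimal sketch of this truncation/extension step, operating only on the half spectrum P(0..K/2) (the other half mirrors it). The function name is illustrative, and it assumes K·S2/S1 is an even integer, as in the 12.8/16 kHz examples below.

```python
def modify_half_spectrum(p_half, K, S1, S2):
    """Convert the half power spectrum P(0..K/2), computed at rate S1,
    into the half spectrum P(0..K2/2) at rate S2, with K2 = K*S2/S1."""
    assert (K * S2) % S1 == 0 and (K * S2 // S1) % 2 == 0
    K2 = K * S2 // S1
    if S1 > S2:
        # Downsampling: truncate, keeping samples 0..K2/2.
        return p_half[:K2 // 2 + 1]
    # Upsampling: extend by repeating the last sample, a simple choice
    # since there is no original spectral content between K/2 and K2/2.
    return p_half + [p_half[-1]] * (K2 // 2 - K // 2)
```

With K=100 and a 16 kHz to 12.8 kHz switch, the 51-sample half spectrum is truncated to 41 samples; in the opposite direction with K=80, the 41-sample half spectrum is extended to 51 samples.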
(32) Computing the LP filter at rate S2 from the autocorrelations can be done using the Levinson-Durbin algorithm (see Reference [1]). Once the LP filter is converted to rate S2, the LP filter parameters are transformed to the interpolation domain, which is an LSF domain in this illustrative embodiment.
(33) The procedure described above is summarized as a sequence 300 of operations.
(34) In the sequence 300 of operations, a simple method for computing the power spectrum of the LP synthesis filter 1/A(z) is to evaluate the frequency response of the filter at K frequencies from 0 to 2π.
(35) The frequency response of the synthesis filter is given by
(36) 1/A(e.sup.jω)=1/(1+α.sub.1e.sup.−jω+α.sub.2e.sup.−j2ω+ . . . +α.sub.Me.sup.−jMω)  (2)
(37) and the power spectrum of the synthesis filter is calculated as an energy of the frequency response of the synthesis filter, given by
(38) P(ω)=|1/A(e.sup.jω)|.sup.2=1/(A(e.sup.jω)A(e.sup.−jω))  (3)
(39) Initially, the LP filter is at a rate equal to S1 (operation 310). A K-sample (i.e. discrete) power spectrum of the LP synthesis filter is computed (operation 320) by sampling the frequency range from 0 to 2π. That is
(40) P(k)=1/|A(e.sup.j2πk/K)|.sup.2, k=0, . . . , K−1  (4)
(41) Note that it is possible to reduce operational complexity by computing P(k) only for k=0, . . . , K/2 since the power spectrum from π to 2π is a mirror of that from 0 to π.
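Equation (4), restricted to the non-redundant half k=0, . . . , K/2, can be sketched as follows. The function name and the coefficient layout `a = [a_1, ..., a_M]` (the coefficients of A(z)=1+a.sub.1z.sup.−1+ . . . +a.sub.Mz.sup.−M) are assumptions of this sketch.

```python
import cmath
import math

def lp_power_half_spectrum(a, K):
    """Half power spectrum P(k) = 1/|A(e^{j*2*pi*k/K})|^2, k = 0..K/2.

    The upper half of the K-sample spectrum mirrors the lower half,
    so only K/2 + 1 samples are computed and stored.
    """
    half = []
    for k in range(K // 2 + 1):
        w = 2.0 * math.pi * k / K
        A = 1.0 + sum(ai * cmath.exp(-1j * w * (i + 1))
                      for i, ai in enumerate(a))
        half.append(1.0 / abs(A) ** 2)
    return half
```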
(42) A test (operation 330) determines which of the following cases apply. In a first case, the sampling rate S1 is larger than the sampling rate S2, and the power spectrum for frame F1 is truncated (operation 340) such that the new number of samples is K(S2/S1).
(43) In more detail, when S1 is larger than S2, the length of the truncated power spectrum is K.sub.2=K(S2/S1) samples. Since the power spectrum is truncated, it is computed for k=0, . . . , K.sub.2/2. Since the power spectrum is symmetric around K.sub.2/2, it is assumed that
P(K.sub.2/2+k)=P(K.sub.2/2−k), for k=1, . . . , K.sub.2/2−1
(44) The Fourier Transform of the autocorrelations of a signal gives the power spectrum of that signal. Thus, applying the inverse Fourier Transform to the truncated power spectrum yields the autocorrelations of the impulse response of the synthesis filter at the sampling rate S2.
(45) The Inverse Discrete Fourier Transform (IDFT) of the truncated power spectrum is given by
(46) R(i)=(1/K.sub.2)Σ.sub.kP(k)e.sup.j2πik/K.sub.2, for k=0, . . . , K.sub.2−1  (5)
(47) Since the filter order is M, the IDFT may be computed only for i=0, . . . , M. Further, since the power spectrum is real and symmetric, the IDFT of the power spectrum is also real and symmetric. Given the symmetry of the power spectrum, and that only M+1 correlations are needed, the inverse transform of the power spectrum can be given as
(48) R(i)=(1/K.sub.2)Σ.sub.kP(k)cos(2πik/K.sub.2), for k=0, . . . , K.sub.2−1 and i=0, . . . , M  (6)
(49) That is
(50) R(i)=(1/K.sub.2)[P(0)+(−1).sup.iP(K.sub.2/2)+2Σ.sub.kP(k)cos(2πik/K.sub.2)], for k=1, . . . , K.sub.2/2−1 and i=0, . . . , M  (7)
(51) After the autocorrelations are computed at sampling rate S2, the Levinson-Durbin algorithm (see Reference [1]) can be used to compute the parameters of the LP filter at sampling rate S2. Then, the LP filter parameters are transformed to the LSF domain for interpolation with the LSFs of frame F2 in order to obtain LP parameters at each subframe.
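The folded inverse transform of Equation (7) and the Levinson-Durbin recursion can be sketched as follows. This is a generic textbook implementation, not the codec's own code; the 1/K.sub.2 scale factor is immaterial, since Levinson-Durbin is invariant to a common scaling of the autocorrelations.

```python
import math

def autocorr_from_half_spectrum(p_half, M):
    """Autocorrelations R(0..M) of the synthesis-filter impulse response
    from the half power spectrum P(0..K2/2), via the folded cosine IDFT."""
    K2 = 2 * (len(p_half) - 1)
    R = []
    for i in range(M + 1):
        r = p_half[0] + ((-1) ** i) * p_half[-1]
        r += 2.0 * sum(p_half[k] * math.cos(2.0 * math.pi * i * k / K2)
                       for k in range(1, K2 // 2))
        R.append(r / K2)
    return R

def levinson_durbin(R, M):
    """LP coefficients a_1..a_M of A(z) = 1 + sum a_i z^-i from R(0..M)."""
    a = [0.0] * M
    err = R[0]
    for m in range(M):
        acc = R[m + 1] + sum(a[j] * R[m - j] for j in range(m))
        k = -acc / err                      # reflection coefficient
        a[:m], a[m] = [a[j] + k * a[m - 1 - j] for j in range(m)], k
        err *= 1.0 - k * k
    return a
```

A flat power spectrum yields R(i)=0 for i>0 (white spectrum), and a geometric autocorrelation sequence recovers a first-order predictor, which the tests below check.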
(52) In the illustrative example where the coder encodes a wideband signal and is switching from a frame with an internal sampling rate S1=16 kHz to a frame with internal sampling rate S2=12.8 kHz, assuming that K=100, the length of the truncated power spectrum is K.sub.2=100(12800/16000)=80 samples. The power spectrum is computed for 41 samples using Equation (4), and then the autocorrelations are computed using Equation (7) with K.sub.2=80.
(53) In a second case, when the test (operation 330) determines that S1 is smaller than S2, the length of the extended power spectrum is K.sub.2=K(S2/S1) samples (operation 350). After the power spectrum is computed for k=0, . . . , K/2, it is extended to K.sub.2/2. Since there is no original spectral content between K/2 and K.sub.2/2, the extension can be done by inserting very low sample values up to K.sub.2/2. A simple approach is to repeat the sample at K/2 up to K.sub.2/2. Since the power spectrum is symmetric around K.sub.2/2, it is assumed that
P(K.sub.2/2+k)=P(K.sub.2/2−k), for k=1, . . . , K.sub.2/2−1
(54) In either case, the inverse DFT is then computed as in Equation (6) to obtain the autocorrelations at the sampling rate S2 (operation 360), and the Levinson-Durbin algorithm (see Reference [1]) is used to compute the LP filter parameters at the sampling rate S2 (operation 370). The LP filter parameters are then transformed to the LSF domain for interpolation with the LSFs of frame F2 in order to obtain the LP parameters of each subframe.
(55) Again, let's take the illustrative example where the coder is switching from a frame with an internal sampling rate S1=12.8 kHz to a frame with internal sampling rate S2=16 kHz, and let's assume that K=80. The length of the extended power spectrum is K.sub.2=80(16000/12800)=100 samples. The power spectrum is computed for 51 samples using Equation (4), and then the autocorrelations are computed using Equation (7) with K.sub.2=100.
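The spectrum-length bookkeeping of the two worked examples can be verified with a few lines of arithmetic (an illustrative sketch; the variable names are ours):

```python
# Truncation example: S1 = 16 kHz -> S2 = 12.8 kHz with K = 100 samples.
K2_down = 100 * 12800 // 16000
samples_down = K2_down // 2 + 1   # half-spectrum samples fed to Equation (7)
assert (K2_down, samples_down) == (80, 41)

# Extension example: S1 = 12.8 kHz -> S2 = 16 kHz with K = 80 samples.
K2_up = 80 * 16000 // 12800
samples_up = K2_up // 2 + 1       # half-spectrum samples fed to Equation (7)
assert (K2_up, samples_up) == (100, 51)
```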
(56) Note that other methods can be used to compute the power spectrum of the LP synthesis filter or the inverse DFT of the power spectrum without departing from the spirit of the present disclosure.
(57) Note that in this illustrative embodiment converting the LP filter parameters between different internal sampling rates is applied to the quantized LP parameters, in order to determine the interpolated synthesis filter parameters in each subframe, and this is repeated at the decoder. It is noted that the weighting filter uses unquantized LP filter parameters, but it was found sufficient to interpolate between the unquantized filter parameters in new frame F2 and sampling-converted quantized LP parameters from past frame F1 in order to determine the parameters of the weighting filter in each subframe. This avoids the need to apply LP filter sampling conversion on the unquantized LP filter parameters as well.
(58) Other Considerations When Switching at Frame Boundaries with Different Sampling Rates
(59) Another issue to be considered when switching between frames with different internal sampling rates is the content of the adaptive codebook, which usually contains the past excitation signal. If the new frame has an internal sampling rate S2 and the previous frame has an internal sampling rate S1, then the content of the adaptive codebook is re-sampled from rate S1 to rate S2, and this is performed at both the encoder and the decoder.
(60) In order to reduce the complexity, in this disclosure, the new frame F2 is forced to use a transient encoding mode which is independent of the past excitation history and thus does not use the history of the adaptive codebook. An example of transient mode encoding can be found in PCT patent application WO 2008/049221 A1 “Method and device for coding transition frames in speech signals”, the disclosure of which is incorporated by reference herein.
(61) Another consideration when switching at frame boundaries with different sampling rates is the memory of the predictive quantizers. As an example, LP-parameter quantizers usually use predictive quantization, which may not work properly when the parameters are at different sampling rates. In order to reduce switching artefacts, the LP-parameter quantizer may be forced into a non-predictive coding mode when switching between different sampling rates.
(62) A further consideration is the memory of the synthesis filter, which may be re-sampled when switching between frames with different sampling rates.
(63) Finally, the additional complexity that arises from converting LP filter parameters when switching between frames with different internal sampling rates may be compensated by modifying parts of the encoding or decoding processing. For example, in order not to increase the encoder complexity, the fixed codebook search may be modified by lowering the number of iterations in the first subframe of the frame (see Reference [1] for an example of fixed codebook search).
(64) Additionally, in order not to increase the decoder complexity, certain post-processing can be skipped. For example, in this illustrative embodiment, a post-processing technique as described in U.S. Pat. No. 7,529,660 “Method and device for frequency-selective pitch enhancement of synthesized speech”, the disclosure of which is incorporated by reference herein, may be used. This post-filtering is skipped in the first frame after switching to a different internal sampling rate; skipping it also removes the need for the past synthesis signal used by the post-filter.
(65) Further, other parameters that depend on the sampling rate may be scaled accordingly. For example, the past pitch delay used for the decoder classifier and for frame erasure concealment may be scaled by the factor S2/S1.
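The pitch-delay scaling just described amounts to the following (an illustrative sketch; the function name is an assumption, and handling of fractional pitch lags is omitted). A lag expressed in samples at rate S1 is converted to the equivalent lag in samples at rate S2:

```python
def scale_pitch_delay(pitch_delay, s1, s2):
    """Scale a pitch delay, in samples at internal rate s1, to the new
    internal rate s2, rounding to the nearest integer lag."""
    return int(round(pitch_delay * s2 / s1))
```

For instance, a lag of 40 samples at an internal rate of 12800 Hz corresponds to 50 samples at 16000 Hz, since 40 × 16000/12800 = 50.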
(66)
(67) An audio input 402 is present in the device 400 when used as an encoder 106. The audio input 402 may include, for example, a microphone or an interface connectable to a microphone. The audio input 402 may include the microphone 102 and the A/D converter 104 and produce the original analog sound signal 103 and/or the original digital sound signal 105. Alternatively, the audio input 402 may receive the original digital sound signal 105. Likewise, an encoded output 404 is present when the device 400 is used as an encoder 106 and is configured to forward the encoding parameters 107, or the digital bit stream 111 containing the parameters 107, including the LP filter parameters, to a remote decoder via a communication link, for example via the communication channel 101, or toward a further memory (not shown) for storage. Non-limiting implementation examples of the encoded output 404 comprise a radio interface of a mobile terminal, a physical interface such as, for example, a universal serial bus (USB) port of a portable media player, and the like.
(68) An encoded input 403 and an audio output 405 are both present in the device 400 when used as a decoder 110. The encoded input 403 may be constructed to receive the encoding parameters 107 or the digital bit stream 111 containing the parameters 107, including the LP filter parameters from an encoded output 404 of an encoder 106. When the device 400 includes both the encoder 106 and the decoder 110, the encoded output 404 and the encoded input 403 may form a common communication module. The audio output 405 may comprise the D/A converter 115 and the loudspeaker unit 116. Alternatively, the audio output 405 may comprise an interface connectable to an audio player, to a loudspeaker, to a recording device, and the like.
(69) The audio input 402 or the encoded input 403 may also receive signals from a storage device (not shown). In the same manner, the encoded output 404 and the audio output 405 may supply the output signal to a storage device (not shown) for recording.
(70) The audio input 402, the encoded input 403, the encoded output 404 and the audio output 405 are all operatively connected to the processor 406.
(71) Those of ordinary skill in the art will realize that the description of the methods, encoder and decoder for linear predictive encoding and decoding of sound signals are illustrative only and are not intended to be in any way limiting. Other embodiments will readily suggest themselves to such persons with ordinary skill in the art having the benefit of the present disclosure. Furthermore, the disclosed methods, encoder and decoder may be customized to offer valuable solutions to existing needs and problems of switching linear prediction based codecs between two bit rates with different sampling rates.
(72) In the interest of clarity, not all of the routine features of the implementations of methods, encoder and decoder are shown and described. It will, of course, be appreciated that in the development of any such actual implementation of the methods, encoder and decoder, numerous implementation-specific decisions may need to be made in order to achieve the developer's specific goals, such as compliance with application-, system-, network- and business-related constraints, and that these specific goals will vary from one implementation to another and from one developer to another. Moreover, it will be appreciated that a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking of engineering for those of ordinary skill in the field of sound coding having the benefit of the present disclosure.
(73) In accordance with the present disclosure, the components, process operations, and/or data structures described herein may be implemented using various types of operating systems, computing platforms, network devices, computer programs, and/or general purpose machines. In addition, those of ordinary skill in the art will recognize that devices of a less general purpose nature, such as hardwired devices, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), or the like, may also be used. Where a method comprising a series of operations is implemented by a computer or a machine and those operations may be stored as a series of instructions readable by the machine, they may be stored on a tangible medium.
(74) Systems and modules described herein may comprise software, firmware, hardware, or any combination(s) of software, firmware, or hardware suitable for the purposes described herein.
(75) Although the present disclosure has been described hereinabove by way of non-restrictive, illustrative embodiments thereof, these embodiments may be modified at will within the scope of the appended claims without departing from the spirit and nature of the present disclosure.
REFERENCES
(76) The following references are incorporated by reference herein.
[1] 3GPP Technical Specification 26.190, “Adaptive Multi-Rate—Wideband (AMR-WB) speech codec; Transcoding functions”, July 2005; http://www.3gpp.org.
[2] ITU-T Recommendation G.729, “Coding of speech at 8 kbit/s using conjugate-structure algebraic-code-excited linear prediction (CS-ACELP)”, January 2007.