Non-uniform quantization of log likelihood ratios
10256846 · 2019-04-09
Assignee
Inventors
CPC classification
H03M13/1111
ELECTRICITY
H03M13/6577
ELECTRICITY
H03M13/45
ELECTRICITY
H04L25/067
ELECTRICITY
H03M7/40
ELECTRICITY
H03M13/6502
ELECTRICITY
International classification
H03M13/45
ELECTRICITY
H03M13/00
ELECTRICITY
G06F11/10
PHYSICS
Abstract
A method of processing a signal by non-uniform quantization of log likelihood ratios is disclosed. A method comprising the steps of: receiving a plurality of bits; calculating a log likelihood ratio, known as a LLR, for each bit; providing a LLR value for each bit based on the calculated LLR; quantizing the LLR values into a plurality of quantization bins, each quantization bin having: a width representative of one or more LLR values; and an index value having a bit length; and associating each bit with the index value that corresponds to its LLR value, wherein the width of each quantization bin is non-uniform. This compresses the LLR values in a more efficient manner, requiring lower memory usage and/or lower bandwidth. A chip for a receiver and a communication system comprising one or more receivers are also disclosed.
Claims
1. A method of processing and decoding data that is represented by a signal carried out by receiver circuitry, the method comprising: first circuitry carrying out the steps of receiving a plurality of bits, calculating a log likelihood ratio, known as a LLR, for each bit, providing a LLR value for each bit based on the calculated LLR, and quantizing the LLR values into a plurality of quantization bins based on at least one of a probability density of LLR values and a range of LLR values, each quantization bin having a width representative of one or more LLR values, and an index value having a bit length; and second circuitry carrying out the steps of associating each bit with the index value that corresponds to its LLR value, and in response generating decoded output data as derived from the signal, wherein the width of each quantization bin is non-uniform.
2. A method as claimed in claim 1, wherein the step of quantizing the LLR values further comprises the step of: determining the width of the plurality of quantization bins based on the probability density of LLR values.
3. A method as claimed in claim 1, wherein the step of quantizing the LLR values further comprises the step of: determining the width of the plurality of quantization bins based on the range of LLR values.
4. A method as claimed in claim 2, wherein the width of the plurality of the quantization bins is adaptively determined based on the calculated LLRs.
5. A method as claimed in claim 1, wherein the step of providing a LLR value for each bit comprises: assigning each bit with an LLR value equal to its calculated LLR.
6. A method as claimed in claim 1, wherein the step of providing a LLR value for each bit further comprises the steps of: quantizing the calculated LLRs into a plurality of LLR bins, each LLR bin having: a uniform width representative of one or more LLRs; and an interim index value having a bit length; and assigning each bit with an LLR value equal to the interim index value that corresponds to its LLR bin.
7. A method as claimed in claim 6, wherein the step of quantizing the calculated LLRs into a plurality of LLR bins comprises uniformly quantizing the calculated LLRs into an integer number of LLR bins represented by an interim index value having a number of bits equal to the binary logarithm of the integer number of LLR bins.
8. A method as claimed in claim 6, wherein the step of associating each bit with the index value that corresponds to its LLR value comprises using a table to convert the interim index value of each LLR value to an index value.
9. A method as claimed in claim 1, wherein the step of quantizing the LLR values into a plurality of quantization bins comprises non-uniformly quantizing the LLR values into an integer number of quantization bins represented by an index value having a number of bits equal to the binary logarithm of the integer number of quantization bins.
10. A method as claimed in claim 1, wherein the bit lengths of the index values are variable between quantization bins.
11. A method as claimed in claim 1, further comprising the steps of: separating the signs and magnitudes of the index values; and applying data compression methods on the magnitudes of the index values.
12. A method as claimed in claim 1, wherein the step of providing a LLR value for each bit further comprising the steps of: identifying M number of modulated bits from a wireless signal, where M is at least 2; pairing the LLR of the M or more modulated bits to form an LLR pair; indexing the LLR pair within an M-dimensional array; and determining a LLR value based on a location of the LLR pair within the array.
13. A method as claimed in claim 1, further comprising the steps of: storing the plurality of bits and their associated index values within a memory module; providing the plurality of bits and their associated index values to a de-interleaver; de-quantizing the index values to extract decompressed LLR values for each bit; and providing the decompressed LLR values and the bits to a decoder for extraction of decoded bits of the signal.
14. A method as claimed in claim 1, wherein the first circuitry includes a demodulator and a quantizer respectively carrying out the steps of calculating a LLR for each bit and providing a LLR value for each bit based on the calculated LLR, and wherein the second circuitry includes a deinterleaver and a dequantizer.
15. A receiver responsive to a received signal having a plurality of bits, the receiver comprising: a first circuit configured and arranged to demodulate the received signal by calculating a log likelihood ratio, known as a LLR, for each bit, and providing a LLR value for each bit based on the calculated LLR, and to process the LLR values by quantizing the LLR values into a plurality of quantization bins, each quantization bin having a width representative of one or more LLR values and having an index value with a bit length; and other circuitry configured and arranged to associate each bit with the index value that corresponds to its LLR value, wherein the width of each quantization bin is non-uniform.
16. A communication system comprising one or more receivers, at least one of the one or more receivers being as set forth in claim 15.
17. A receiver as claimed in claim 15, wherein the first circuit includes a demodulator and a quantizer and the other circuitry includes a deinterleaver and a dequantizer.
18. A receiver as claimed in claim 15, wherein the first circuit includes a quantizer configured and arranged to quantize the LLR values and to determine the width of the plurality of quantization bins based on at least one of the probability density of LLR values and a range of LLR values.
Description
(1) Embodiments will be described, by way of example only, with reference to the drawings.
(20) It should be noted that the Figures are diagrammatic and not drawn to scale. Relative dimensions and proportions of parts of these Figures have been shown exaggerated or reduced in size, for the sake of clarity and convenience in the drawings. The same reference signs are generally used to refer to corresponding or similar features in modified and different embodiments.
(22) In embodiments of the present disclosure, received bits 12 are received by a demodulator 20 that calculates LLRs for each incident bit. As noted, for a perfect signal, the received bits are 0 or 1 depending upon the information conveyed by the signal. However, due to interference and noise effects, the received values deviate from these ideal values. Accordingly, a log likelihood ratio (LLR) is calculated for each received bit, providing a value representative of the likelihood that the value of the received bit is either 0 or 1. This information is then used to deinterleave and decode the information from the received bits 12. Each LLR has a value represented by an N bit integer.
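The LLR calculation and N bit representation described above can be sketched for the common case of BPSK over an AWGN channel, where the LLR reduces to 2y/sigma^2. This is a minimal illustration under that assumption, not the patent's specified demodulator; the fixed-point scaling step is likewise assumed:

```python
import numpy as np

def bpsk_llr(received, noise_var):
    """LLR for BPSK over AWGN: L(b) = 2y / sigma^2, assuming bit 0 maps
    to +1 and bit 1 maps to -1 (an illustrative modulation choice)."""
    return 2.0 * received / noise_var

def to_fixed_point(llr, n_bits=5, step=1.0):
    """Represent each LLR as an N bit signed integer by rounding and
    clipping to the representable range, e.g. -16..15 for N = 5."""
    lo, hi = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
    return np.clip(np.round(llr / step), lo, hi).astype(int)

# Noisy BPSK observations: ideal values are +1 (bit 0) and -1 (bit 1).
y = np.array([0.9, -1.2, 0.1, -0.05])
llrs = bpsk_llr(y, noise_var=0.5)    # [3.6, -4.8, 0.4, -0.2]
print(to_fixed_point(llrs))          # [4 -5 0 0]
```

The sign of each fixed-point LLR indicates the more likely bit value, while the magnitude indicates confidence, which is what the later quantization stages exploit.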
(23) Once demodulated, the N bit LLR values 22 are provided to a quantization module 30, where non-uniform quantization is performed on the values 22 to compress them from N bits to (N-ns) bits, where N is the number of bits used to represent the original LLR value, (N-ns) is the number of bits used to represent the non-uniform quantized LLR values and ns is the number of bits saved. The quantization module 30 may be a separate module or may be integrated into the demodulator 20.
(24) The (N-ns) bit non-uniform quantized LLRs are called index values 32. For example, if the quantization module 30 compresses the 5 bit LLR values 22 to 4 bit index values 32, this results in a 20% saving in the amount of memory space required to store the index values 32. Compressing from 5 bits to 3 bits results in a 40% memory saving.
(25) The index values 32 are then stored within memory 40. The values 32 may be stored for a duration according to the interleaving size of the chosen standard, for example 384 ms for the DAB standard. Alternatively, the index values 32 may be provided to an external device for further processing. Once requested by a deinterleaver 50, the index values 32 are provided by the memory 40 to the deinterleaver 50. The deinterleaver 50 then deinterleaves the received bits 12. The index values 32 and the received bits 12 can then be dequantized in a non-uniform dequantization module 60, which may be a separate module or integrated into the deinterleaver 50.
(26) The dequantization module 60 decompresses the index values to obtain LLRs for each received bit 12. The decompressed N bit LLRs 62 are then provided to a FEC decoder 70 for decoding and output of decoded bits 72.
(28) The widths of the plurality of quantization bins are typically determined based on the probability density of the LLR values.
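One way such a density-driven choice could look (an illustrative assumption; the text does not prescribe a specific algorithm) is to place bin edges at quantiles of the observed LLRs, so that regions where LLR values are dense receive narrow bins and sparse regions receive wide ones:

```python
import numpy as np

def density_based_edges(llr_samples, n_bins):
    """Place bin edges at quantiles of the observed LLRs so each bin
    carries equal probability mass. Dense regions (typically near zero)
    get narrow bins and hence small quantization error; sparse tails
    get wide bins. This is one possible density-based rule, not the
    patent's prescribed method."""
    qs = np.linspace(0.0, 1.0, n_bins + 1)
    return np.quantile(llr_samples, qs)

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 4.0, 10_000)   # stand-in LLR distribution
edges = density_based_edges(samples, 8)
widths = np.diff(edges)
# The central bins come out narrower than the outer (tail) bins.
print(widths)
```

In a receiver, the edges could be recomputed from recent LLR statistics, which corresponds to the adaptive determination recited in claim 4.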
(29)-(33) [Figure discussion truncated in the source: the referenced figures show histograms of the LLR values obtained under different channel conditions.]
(34) Due to this spread between disparate channels, it is possible to undertake a non-uniform quantization of the LLRs to compress their values. In doing so, it is sometimes necessary to look at the histograms of the LLRs and introduce only minimal distortion to the existing histogram. Such non-uniform quantization should not introduce additional quantization noise on the maximum and minimum values, and should not introduce additional quantization noise on the LLRs with lower magnitudes, e.g., from -5 to -1 and from 1 to 5.
(35) In order to preserve the information of the LLR values of lower magnitude, a smaller quantization step is chosen for these values to maintain their accuracy. Conversely, for LLRs with larger magnitudes, a less accurate, i.e. larger, quantization step can be used. For example, the LLRs with maximum and minimum values, e.g., -15 and 15 in the examples, are kept the same or with minimal distortion. It is then possible to quantize the LLRs directly and non-uniformly following the above guidelines. Alternatively, it may be desirable to first generate uniformly quantized LLRs with an N bit representation and compress/quantize them further to represent the LLRs with fewer bits. The second approach is shown here, as it also allows observation of how the LLRs are affected after non-uniform quantization/compression.
(36) Example non-uniform quantization/compression from 5 bits to 4 bits and from 5 bits to 3 bits is tabulated in Table 1 and Table 2 below.
(37) TABLE 1: 5 bit to 4 bit non-uniform quantization conversion table

    5 bit LLR input    4 bit non-uniform LLR output    4 bit representation in memory
    -16 to -15         -15                             0000
    -14 to -11         -12                             0001
    -10 to -8          -9                              0010
    -7 to -6           -6                              0011
    -5 to -4           -4                              0100
    -3                 -3                              0101
    -2                 -2                              0110
    -1                 -1                              0111
    0                  0                               (not represented)
    1                  1                               1000
    2                  2                               1001
    3                  3                               1010
    4 to 5             4                               1011
    6 to 7             6                               1100
    8 to 10            9                               1101
    11 to 14           12                              1110
    15                 15                              1111
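The mapping of Table 1 can be sketched as a pair of lookup functions. The bin bounds and representative values below are transcribed from the table; note that the LLR value 0 is not represented, so this sketch assumes nonzero inputs:

```python
# Magnitude bins of Table 1. MAG_UPPER[i] is the inclusive upper bound of
# bin i; MAG_REP[i] is the representative magnitude reconstructed for it.
MAG_UPPER = [1, 2, 3, 5, 7, 10, 14, 16]
MAG_REP = [1, 2, 3, 4, 6, 9, 12, 15]

def quantize_5_to_4(llr):
    """Map a nonzero 5 bit LLR in -16..15 to its 4 bit index per Table 1."""
    assert llr != 0, "0 is not represented in Table 1"
    b = next(i for i, ub in enumerate(MAG_UPPER) if abs(llr) <= ub)
    # Positive LLRs occupy indices 1000..1111; negative LLRs occupy
    # 0111 down to 0000 as the magnitude grows (0000 represents -15).
    return 8 + b if llr > 0 else 7 - b

def dequantize_4_to_5(index):
    """Recover the representative (output) LLR from a 4 bit index."""
    return MAG_REP[index - 8] if index >= 8 else -MAG_REP[7 - index]

print(quantize_5_to_4(5), quantize_5_to_4(-12))   # 11 (0b1011), 1 (0b0001)
print(dequantize_4_to_5(0b1101))                  # 9
```

The quantizer 30 would apply the first function before writing to memory 40, and the dequantization module 60 would apply the second after deinterleaving.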
(38) It can be appreciated that any integer number of bins may be chosen. In such cases, each bin can be represented by an index value having log2(Number of bins) bits. If the number of bins is not a power of two, and would therefore result in a non-integer number of bits for the index value, then index values may be paired with other values to be represented together by an integer number of bits. This can result in a fractional number of code bits per value. For example, 10 bins can be coded as values from 0 to 9, of which 9 such values fit into a 32 bit number, giving approximately 3.5 bits per index value.
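The 10-bin example can be sketched as base-10 packing: nine values in 0..9 always fit in one 32 bit word because 10^9 < 2^32, giving 32/9, or about 3.56, bits per value:

```python
def pack_base10(digits):
    """Pack up to 9 base-10 index values (0-9 each) into one 32 bit word.
    Since 10**9 < 2**32, nine such values always fit."""
    assert len(digits) <= 9 and all(0 <= d <= 9 for d in digits)
    word = 0
    for d in digits:
        word = word * 10 + d   # treat the sequence as a base-10 number
    assert word < 2 ** 32
    return word

def unpack_base10(word, count=9):
    """Recover the index values by repeated division by 10."""
    digits = []
    for _ in range(count):
        word, d = divmod(word, 10)
        digits.append(d)
    return digits[::-1]

indices = [3, 1, 4, 1, 5, 9, 2, 6, 5]
word = pack_base10(indices)
print(word)                       # 314159265, fits in 32 bits
print(unpack_base10(word) == indices)
```

The same idea generalizes to any non-power-of-two bin count B: pack floor(32 / log2(B)) values per 32 bit word by treating them as base-B digits.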
(40) More broadly, an N bit input of LLR values uniformly quantized into 2^N bins may be consolidated into an (N-ns) bit quantization with 2^(N-ns) bins, where N is the bit length of the original LLR values, (N-ns) is the bit length of the non-uniformly quantized index values and ns is the number of bits saved.
(41) TABLE 2: 5 bit to 3 bit non-uniform quantization conversion table

    5 bit LLR input    3 bit non-uniform LLR output    3 bit representation in memory
    -16 to -11         -15                             000
    -10 to -7          -8                              001
    -6 to -3           -4                              010
    -2 to -1           -1                              011
    0                  0                               (not represented)
    1 to 2             1                               100
    3 to 6             4                               101
    7 to 10            8                               110
    11 to 15           15                              111
(42) Further compression of the input LLR values may also be performed, such as from 5 bit LLR values to 3 bit index values. Table 2 shows the conversion of 5 bit input LLR values into 3 bit non-uniform index values.
(43) After generation of the non-uniformly quantized/compressed LLR values, storage in memory and deinterleaving are performed on the bits using the index values of the non-uniformly quantized LLRs. Thus, in some examples, less memory may be needed in a conventional receiver, or a lower data rate link to a master processing unit may be required in the case of a distributed receiver architecture.
(46) Similarly, a receiver utilizing 3 bit non-uniform quantization 560 achieves similar performance to the receiver with 4 bit uniformly quantized LLRs 530. Thus, in some examples, for this specific performance target one may save 40% of the memory cost, or reduce the speed of the data link used to send LLRs to another block, while sacrificing only very little performance.
(47) Similar performance enhancements may be seen in some implementations when compared against standard uniform quantization techniques 570 for TU-6 channel LLRs.
(49) Depending on the channel realizations and characteristics, the scaling of the signals and the initial quantization step of the high resolution LLRs, the distribution of the LLRs may differ from the ideal cases.
(50) It is also possible to further exploit the differences in probability of each non-uniformly quantized LLR value, e.g., by applying Huffman compression on the non-uniformly quantized LLR values and assigning fewer bits to represent the LLR values that have higher probability, leading on average to fewer bits to represent the LLRs. In this manner, the output of the non-uniform quantization/compression block can be reduced further on average.
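A minimal sketch of this Huffman step, using an assumed, skewed distribution of 3 bit index values (in practice the probabilities would come from the observed channel statistics):

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Build Huffman code lengths for a stream of quantized LLR indices.
    More probable indices receive shorter codes, lowering the average
    number of bits per stored LLR."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap entries: (weight, unique tie-breaker, {symbol: depth so far}).
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)
        w2, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (w1 + w2, next_id, merged))
        next_id += 1
    return heap[0][2]

# Hypothetical skewed stream of 3 bit indices: low magnitudes dominate.
stream = [3] * 50 + [4] * 30 + [2] * 10 + [5] * 6 + [1] * 3 + [6] * 1
lengths = huffman_code_lengths(stream)
avg_bits = sum(lengths[s] for s in stream) / len(stream)
print(avg_bits)   # well below the fixed 3 bits per index
```

The code lengths depend only on the symbol probabilities, so a table built offline for typical channel conditions could be shared between quantizer and dequantizer.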
(51) In another approach, it is possible to exploit the correlations in successive LLR values. In general, successive LLRs come from different parts of the encoded bit stream, so the LLRs' signs are uncorrelated. However, due to correlated noise, fading, interference or bits coming from the same complex modulated symbol, e.g., bits from the same M-ary Quadrature Amplitude Modulation (M-QAM) symbol, LLRs may have correlated magnitudes. Thus, one example may opt for separating the sign and magnitude of the LLRs, and applying conventional data/vector compression methods on the magnitudes of the LLRs only.
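This sign/magnitude split can be sketched as follows; run-length coding stands in for the "conventional data compression methods", which the text leaves open:

```python
def split_sign_magnitude(llrs):
    """Separate signs (uncorrelated, kept raw as 1 bit each) from
    magnitudes (correlated, handed to a conventional compressor)."""
    signs = [0 if v >= 0 else 1 for v in llrs]
    mags = [abs(v) for v in llrs]
    return signs, mags

def run_length_encode(values):
    """Stand-in compressor: correlated magnitudes often repeat, so
    run-length coding shrinks the magnitude stream."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

llrs = [9, -9, 9, -8, 4, -4, 4, 4]
signs, mags = split_sign_magnitude(llrs)
print(signs)                     # [0, 1, 0, 1, 0, 1, 0, 0]
print(run_length_encode(mags))   # [[9, 3], [8, 1], [4, 4]]
```

Note how the magnitude stream compresses even though the sign stream, being uncorrelated, would not.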
(52) Another approach may apply the non-uniform LLR quantization method to LLRs from the same modulated symbol, e.g., LLRs for the bits from the same M-ary Pulse-amplitude modulation (M-PAM), M-ary Phase-shift keying (M-PSK) or M-QAM symbol. This allows an M dimensional array of non-uniformly quantized values to be determined. In such an approach it is necessary to consider the joint distributions of the LLRs.
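A sketch of such joint quantization for pairs of LLRs (M = 2): each pair is mapped to the index of its nearest entry in a 2-D codebook. The codebook here is hypothetical and hand-picked; in practice it would be designed from the joint LLR distribution:

```python
import numpy as np

def joint_quantize(llr_pairs, codebook):
    """Map each LLR pair to the index of its nearest codebook entry
    (nearest-neighbor vector quantization in 2 dimensions)."""
    codebook = np.asarray(codebook, dtype=float)
    pairs = np.asarray(llr_pairs, dtype=float)
    # Squared Euclidean distance from every pair to every codebook entry.
    d2 = ((pairs[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

# Hypothetical 4-entry codebook for pairs of LLRs from one M-QAM symbol.
codebook = [(-8.0, -8.0), (-8.0, 8.0), (8.0, -8.0), (8.0, 8.0)]
pairs = [(-7.0, -9.0), (6.5, 7.0)]
print(joint_quantize(pairs, codebook))   # [0 3]
```

With a codebook of K entries, each LLR pair costs log2(K) bits in total, so correlated pairs can be stored in fewer bits than two independently quantized LLRs.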
(53) From reading the present disclosure, other variations and modifications will be apparent to the skilled person. Such variations and modifications may involve equivalent and other features which are already known in the art of, and which may be used instead of, or in addition to, features already described herein. For example, although wireless signals are typically referred to, the skilled person would appreciate the application of the present disclosure to non-wireless signals, particularly those that utilise interleaving.
(54) Although the appended claims are directed to particular combinations of features, it should be understood that the scope of the disclosure of the present invention also includes any novel feature or any novel combination of features disclosed herein either explicitly or implicitly or any generalisation thereof, whether or not it relates to the same invention as presently claimed in any claim and whether or not it mitigates any or all of the same technical problems as does the present invention.
(55) Features which are described in the context of separate embodiments may also be provided in combination in a single embodiment. Conversely, various features which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination. The applicant hereby gives notice that new claims may be formulated to such features and/or combinations of such features during the prosecution of the present application or of any further application derived therefrom.
(56) For the sake of completeness it is also stated that the term comprising does not exclude other elements or steps, the term a or an does not exclude a plurality, a single processor or other unit may fulfil the functions of several means recited in the claims and reference signs in the claims shall not be construed as limiting the scope of the claims.