System and method for decoding data
12451990 · 2025-10-21
CPC classification: H03M13/3944; H03M13/1102; H03M13/451; H03M13/3707; H03M13/2975; H03M13/09; H03M13/6511; H04L1/0054 (all in Section H, Electricity)
Abstract
A method for decoding data comprises receiving a sequence of symbols from a data sender over a noisy data channel. At a first decoder, a first search for a candidate error pattern is performed, within a search region, among a plurality of candidate error patterns, and an indication of a failure of the first search is output to a second decoder when no candidate error pattern is found within the search region. At the second decoder, a second search is performed, in parallel with the first search, for the candidate error pattern by evaluating the candidate error patterns for codebook membership based on the sequence of symbols, one or more of the candidate error patterns being skipped from the second search based on the indication of the failure of the first search. The sequence of symbols is decoded based on an outcome of the first search and the second search.
Claims
1. A method for decoding data, the method comprising: at a data receiver comprising at least one first decoder and at least one second decoder configured to run in parallel with the first decoder: receiving a sequence of symbols from a data sender over a noisy data channel; at the at least one first decoder: performing, within a search region, a first search for a candidate error pattern among a plurality of candidate error patterns; and outputting, to the at least one second decoder, an indication of a failure of the first search when no candidate error pattern is found within the search region; at the at least one second decoder: performing, in parallel with the first search, a second search for the candidate error pattern by evaluating the plurality of candidate error patterns for codebook membership based on the sequence of symbols, one or more of the plurality of candidate error patterns being skipped from the second search based on the indication of the failure of the first search; and decoding the sequence of symbols based on an outcome of the first search and the second search.
2. The method of claim 1, wherein the at least one first decoder implements a Sphere Decoding (SD) technique and the at least one second decoder implements a Guessing Random Additive Noise Decoding (GRAND) technique.
3. The method of claim 2, wherein the at least one first decoder implements one of multiple tree search SD (MSD), SD with fixed lower bound, list SD, stack SD, and cyclic redundancy check (CRC)-aided SD.
4. The method of claim 3, wherein the at least one first decoder implements an efficient multiple tree search SD (EMSD) decoding technique.
5. The method of claim 2, wherein the at least one second decoder implements one of soft GRAND (SGRAND), ordered reliability bits GRAND (ORBGRAND), GRAND with abandonment (GRANDAB), GRAND with symbol reliability information (SRGRAND), GRAND Markov Order (GRAND-MO), and List-GRAND.
6. The method of claim 2, wherein the search region is defined by a radius, further wherein performing the first search for the candidate error pattern comprises progressively expanding the radius of the search region until the candidate error pattern is found within the search region.
7. The method of claim 1, wherein receiving the sequence of symbols comprises receiving a code having a triangular generator matrix.
8. The method of claim 7, wherein receiving the sequence of symbols comprises receiving one of a polar code and a Reed-Muller (RM) code.
9. The method of claim 1, wherein receiving the sequence of symbols comprises receiving a Bose-Chaudhuri-Hocquenghem (BCH) code.
10. A data receiver comprising: a receiving unit configured for receiving a sequence of symbols from a data sender over a noisy data channel; a decoding unit comprising at least one first decoder and at least one second decoder configured to run in parallel with the first decoder, the at least one first decoder configured for: performing, within a search region, a first search for a candidate error pattern among a plurality of candidate error patterns; and outputting, to the at least one second decoder, an indication of a failure of the first search when no candidate error pattern is found within the search region; and the at least one second decoder configured for: performing, in parallel with the first search, a second search for the candidate error pattern by evaluating the plurality of candidate error patterns for codebook membership based on the sequence of symbols, one or more of the plurality of candidate error patterns being skipped from the second search based on the indication of the failure of the first search; and the decoding unit configured for decoding the sequence of symbols based on an outcome of the first search and the second search.
11. The data receiver of claim 10, wherein the at least one first decoder implements a Sphere Decoding (SD) technique and the at least one second decoder implements a Guessing Random Additive Noise Decoding (GRAND) technique.
12. The data receiver of claim 11, wherein the at least one first decoder implements one of multiple tree search SD (MSD), SD with fixed lower bound, list SD, stack SD, and cyclic redundancy check (CRC)-aided SD.
13. The data receiver of claim 12, wherein the at least one first decoder implements an efficient multiple tree search SD (EMSD) decoding technique.
14. The data receiver of claim 11, wherein the at least one second decoder implements one of soft GRAND (SGRAND), ordered reliability bits GRAND (ORBGRAND), GRAND with abandonment (GRANDAB), GRAND with symbol reliability information (SRGRAND), GRAND Markov Order (GRAND-MO), and List-GRAND.
15. The data receiver of claim 11, wherein the search region is defined by a radius, further wherein the at least one first decoder is configured for performing the first search for the candidate error pattern comprising progressively expanding the radius of the search region until the candidate error pattern is found within the search region.
16. The data receiver of claim 10, wherein the receiving unit is configured for receiving the sequence of symbols comprising receiving a code having a triangular generator matrix.
17. The data receiver of claim 16, wherein the receiving unit is configured for receiving the sequence of symbols comprising receiving one of a polar code and a Reed-Muller (RM) code.
18. The data receiver of claim 10, wherein the receiving unit is configured for receiving the sequence of symbols comprising receiving a Bose-Chaudhuri-Hocquenghem (BCH) code.
19. The data receiver of claim 10, further comprising an output unit configured for receiving a decoded sequence of symbols from the decoding unit and for transmitting the decoded sequence of symbols to an external device.
20. A non-transitory computer readable medium having stored thereon program code executable by at least one processor for: receiving a sequence of symbols over a noisy data channel; performing, within a search region, a first search for a candidate error pattern among a plurality of candidate error patterns; outputting an indication of a failure of the first search when no candidate error pattern is found within the search region; performing, in parallel with the first search, a second search for the candidate error pattern by evaluating the plurality of candidate error patterns for codebook membership based on the sequence of symbols, one or more of the plurality of candidate error patterns being skipped from the second search based on the indication of the failure of the first search; and decoding the sequence of symbols based on an outcome of the first search and the second search.
Description
DETAILED DESCRIPTION
(10) Described herein are systems and methods for decoding data. The systems and methods described herein apply to a context in which a data sender sends data (i.e., a sequence of symbols, referred to herein as a block, code, or block code) toward a data receiver using a communication medium referred to herein as a data channel. The data sender and the data receiver operate using a shared set of blocks, or a description of such a set of blocks, referred to as a codebook, which consists of a number of codewords (i.e., a codeword is a block in the codebook). The input to the data channel, or channel input (i.e., the block sent by the data sender into the data channel), may however differ from the corresponding output of the data channel, or channel output (i.e., the corresponding block received by the data receiver from the data channel), due to an error (also referred to herein as noise) introduced as the block transits through the data channel (referred to herein as a noisy data channel). In order for the data receiver to correct the noise that alters the channel input, it is proposed herein to decode block codes using the combination of at least one first decoder and at least one second decoder running in parallel with the first decoder (the combination also referred to herein as a decoding unit), as illustrated in
(11) Referring to
(12) The data sender 22 may be any device capable of transmitting data (e.g., data packets) to another device, using the data channel 24, and the data receiver 10 may be any device capable of receiving data (e.g., data packets) from another device, using the data channel 24. The data channel 24 may be any suitable communication medium (or plurality of coupled media) capable of communicating data packets between data sender(s) and data receiver(s). The data channel 24 includes, but is not limited to, an analog channel, a digital channel, and any combination thereof.
(13) Throughout the instant disclosure, codes with triangular generator matrices are considered, although it should be understood that other suitable codes may apply. Let a code be denoted (N, K); its code rate is R=K/N. As used herein, the term rate or code rate refers to the ratio between the number (K) of information bits and the total number (N) of bits, or code length (i.e., the number of information bits plus the number of redundancy bits, which carry no information), in a given block code. Thus, a low-rate code refers to a code with high redundancy (i.e., one that uses many redundancy bits), while a high-rate code refers to a code with low redundancy (i.e., one that uses few redundancy bits).
(14) Let the information vector and the encoded vector be denoted as $u_1^N = (u_1, u_2, \ldots, u_N)$ and $x_1^N$, respectively. Let $G_N$ be the generator matrix of size $N$; then $x_1^N = u_1^N G_N = u_1^N F^{\otimes n}$, where $F^{\otimes n}$ is the $n$-th Kronecker power of

(15) $F = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$.

Let the $(i, j)$-th entry of $G_N$ be denoted as $g_{(i,j)}$.
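The Kronecker-power construction above can be illustrated with a short Python sketch. This is illustrative only and not part of the disclosure; the function names `polar_generator_matrix` and `encode` are hypothetical helpers:

```python
import numpy as np

def polar_generator_matrix(n: int) -> np.ndarray:
    """Build G_N as the n-th Kronecker power of the polar kernel F."""
    F = np.array([[1, 0],
                  [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F)  # Kronecker product; products are reduced mod 2 at encode time
    return G

def encode(u: np.ndarray) -> np.ndarray:
    """Encode an information vector u of length N = 2^n over GF(2): x = u G_N mod 2."""
    n = int(np.log2(len(u)))
    return (u @ polar_generator_matrix(n)) % 2
```

For example, `encode(np.array([1, 0, 0, 0], dtype=np.uint8))` selects the first row of $G_4$, consistent with $x_1^N = u_1^N G_N$. Note that the resulting $G_N$ is lower triangular, matching the class of codes considered in this disclosure.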
(16) Throughout the instant disclosure, binary phase-shift keying (BPSK) modulation and the additive white Gaussian noise (AWGN) channel are considered. The mapped bits $\tilde{x}_1^N$ are calculated as $\tilde{x}_1^N = 1 - 2x_1^N$, where $\tilde{x}_1^N \in \{-1, 1\}^N$. Let the received vector be denoted as $y_1^N = (y_1, y_2, \ldots, y_N) = \tilde{x}_1^N + w_1^N$, where $w_1^N$ is the AWGN vector with mean $0$ and variance $\sigma^2$.
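The BPSK mapping and AWGN channel model above can be sketched as follows; the function name `bpsk_awgn` and the seeded random generator are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def bpsk_awgn(x, sigma, rng=None):
    """Map code bits to BPSK symbols via 1 - 2x (0 -> +1, 1 -> -1) and add AWGN."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x)
    s = 1 - 2 * x                              # mapped bits, each in {-1, +1}
    w = rng.normal(0.0, sigma, size=s.shape)   # AWGN with mean 0 and variance sigma^2
    return s + w                               # received vector y = s + w
```

With `sigma = 0.0` the channel is noiseless, so the received vector equals the mapped codeword exactly.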
(17) The received noisy version of the modulated codeword is $y_1^N$.

(18) In order to obtain the maximum likelihood (ML) estimate of the transmitted signal (i.e., of the channel input), an SD decoder considers only the candidate error patterns (also referred to herein as candidates) that reside within an $N$-dimensional sphere of a given radius and center.
(19) Let the Euclidean distance (ED) of bit $i$, for a given candidate, be denoted as $D_i$. The SD decoder retains only candidates satisfying

(20) $\sum_{i=1}^{N} D_i \le r_0^2$,

where $r_0$ is the radius. To reduce the large search space, SD, and particularly efficient multiple tree search SD (EMSD), starts the first search with a small radius $r_0$ and sets $r_0^2 = a$. The subsequent $i$-th search starts with the updated radius $r_i^2 = r_{i-1}^2 + a$, until the output is found. To further reduce the complexity on low-rate codes, EMSD uses synchro sets to improve the metric evaluation. SD is thus efficient on low-rate codes (particularly polar codes), since the search space is at most $O(2^K)$. As the rate grows, however, the decoding complexity of current SD techniques increases and could be further reduced.
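The progressive radius expansion can be illustrated with a toy sketch. The brute-force loop over an explicit codeword list below stands in for the actual EMSD tree search and is illustrative only; `sphere_search` and its parameters are hypothetical names:

```python
import numpy as np

def sphere_search(y, codewords, a=1.0, max_rounds=16):
    """Toy sphere search: look for the codeword whose squared Euclidean
    distance to y lies within r_i^2, expanding r_i^2 = r_{i-1}^2 + a after
    each failed round, as described for EMSD."""
    r2 = a                                   # r_0^2 = a
    for _ in range(max_rounds):
        best, best_ed = None, None
        for c in codewords:
            s = 1 - 2 * np.asarray(c)        # BPSK-map the candidate codeword
            ed = float(np.sum((y - s) ** 2)) # squared ED to the received vector
            if ed <= r2 and (best_ed is None or ed < best_ed):
                best, best_ed = c, ed
        if best is not None:                 # a candidate lies inside the sphere
            return best, best_ed
        r2 += a                              # search failed: expand the radius
    return None, None
```

A search that fails at radius $r_i$ implicitly certifies that every codeword has ED greater than $r_i^2$; this is the fact the second decoder exploits below.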
(21) Unlike the enumeration over $u_1^N$ performed by SD, decoders using techniques such as Guessing Random Additive Noise Decoding (GRAND) guess the noise to obtain the correct $x_1^N$. In one embodiment, based on
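The guess-the-noise principle can be illustrated with a minimal hard-decision sketch, assuming a parity-check matrix H is available for the codebook-membership test. The name `grand` and the weight-ordered guessing schedule are illustrative; practical GRAND variants order guesses by soft reliability instead:

```python
import numpy as np
from itertools import combinations

def grand(y_hard, H, max_weight=3):
    """Hard-input GRAND sketch: test putative error patterns in order of
    increasing Hamming weight; return the first guess that is a codebook
    member, i.e. satisfies H z^T = 0 (mod 2)."""
    n = len(y_hard)
    for w in range(max_weight + 1):
        for flips in combinations(range(n), w):
            z = np.array(y_hard, dtype=np.uint8)
            z[list(flips)] ^= 1              # guess: flip these bit positions
            if not np.any((H @ z) % 2):      # codebook membership check
                return z
    return None                              # abandon after max_weight (GRANDAB-style)
```

Because the number of parity checks grows with redundancy, this query-until-membership strategy is efficient for high-rate codes but degrades as the rate drops, which motivates the hybrid scheme described herein.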
(22) To overcome these issues, it is proposed herein to combine at least one first decoder implementing a first decoding technique (e.g., SD) with at least one second decoder implementing a second decoding technique (e.g., GRAND) to develop an efficient decoder for both low-rate and high-rate codes, as illustrated in
(24) The systems and methods described herein may make it possible to extend GRAND to low-rate codes. The systems and methods described herein may indeed be used for near-ML decoding of short polar and Reed-Muller (RM) codes, and more generally of any short code with a triangular generator matrix. It should however be understood that the systems and methods described herein may be applicable to other types of codes that do not have a triangular generator matrix. For example, linear block codes including, but not limited to, Bose-Chaudhuri-Hocquenghem (BCH) codes, CRC codes, and low-density parity-check (LDPC) codes may apply.
(25) In one embodiment, the first decoder 16 and the second decoder 18 operate concurrently, at substantially the same time. As shown in
(26) The failure of the current (e.g., EMSD) search performed by the first decoder 16 thus updates a lower bound LB = $r_i^2$ for the second decoder 18, which can accelerate its operation (thus implementing a decoding technique referred to herein as accelerated GRAND), because the correct candidate should have a larger ED than the lower bound LB. In other words, upon detecting failure of the current search of the first decoder 16, the second decoder 18 is able to skip some candidate error patterns from its search.
(27) Compared with traditional GRAND, accelerated GRAND as implemented by the second decoder 18 first compares the ED of the current candidate, calculated as

(28) $\sum_{i=1}^{N} D_i$,

with the lower bound LB (at 108). If the ED is smaller, this candidate cannot be the answer and can be skipped without the codeword check, which saves time. In other words, the second decoder 18 moves on to the next candidate and repeats the comparison of the ED to the lower bound LB for the next candidate. The LB is updated whenever an EMSD search failure occurs. When a candidate with a larger ED than the lower bound LB is found, the second decoder 18 assesses (at 110) whether to proceed with the codeword check and subsequently outputs the result of the codeword check when performed. If no codeword check is to be performed, the second decoder 18 proceeds with the next candidate and repeats the comparison of the ED to the lower bound LB for the next candidate.
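The ED-versus-LB comparison can be sketched as a single accelerated-GRAND step; the function name `accelerated_grand_step` and the `check` callback (standing in for the codeword membership check) are illustrative assumptions:

```python
import numpy as np

def accelerated_grand_step(y, candidate_x, lb, check):
    """One accelerated-GRAND step: skip the codeword check whenever the
    candidate's squared ED to y is below the SD-derived lower bound LB."""
    s = 1 - 2 * np.asarray(candidate_x)       # BPSK-map the candidate
    ed = float(np.sum((y - s) ** 2))          # squared ED to the received vector
    if ed < lb:
        return "skipped", ed                  # cannot be the answer: no codeword check
    return ("accepted" if check(candidate_x) else "rejected"), ed
```

The saving comes from the skipped branch: the ED computation is far cheaper than a full codebook-membership check, and every failed SD round raises LB, enlarging the set of candidates that can be discarded this way.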
(29) At the output of the decoding unit 14, there may be three cases. In the first case (illustrated in
(30) The decoding process implemented by the decoding unit 14 terminates at the earliest of the above three cases and delivers the output to the output unit 20. If only the first and second cases are considered, then the latency of the decoding process implemented by the decoding unit 14 will be at most the minimum of the latency of the first decoder 16 (e.g., the latency of EMSD) and the latency of the second decoder 18 (e.g., the latency of GRAND). However, the third case shows that the systems and methods described herein may even be faster than the minimum of the EMSD and GRAND latencies. In other words, in some embodiments, the decoding technique described herein may exhibit a minimum time complexity lower than the individual minimal complexities of EMSD and GRAND.
(31) As previously noted, any suitable GRAND technique may be used herein. If the second decoder 18 applies SGRAND, the error-correction performance of the decoding unit 14 will be ML. If the second decoder 18 applies ORBGRAND, the error-correction performance of the decoding unit 14 will be near-ML.
(34) The time complexity comparison of the schemes at fixed rates is shown in
(36) In the illustrated embodiment, step 506 is performed at at least one second decoder (e.g., the second decoder(s) 18 of
(37) Step 508 comprises, at the at least one first decoder, outputting, to the at least one second decoder, an indication of a failure of the first search when no candidate error pattern is found within the search region.
(38) Step 510 comprises, at the at least one second decoder, skipping one or more of the plurality of candidate error patterns from the second search based on the indication of the failure of the first search, in the manner described herein above with reference to
(40) The memory 604 may comprise any suitable known or other machine-readable storage medium. The memory 604 may comprise a non-transitory computer readable storage medium, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. The memory 604 may include a suitable combination of any type of computer memory that is located either internally or externally to the device, for example random-access memory (RAM), read-only memory (ROM), compact disc read-only memory (CDROM), electro-optical memory, magneto-optical memory, erasable programmable read-only memory (EPROM), electrically-erasable programmable read-only memory (EEPROM), Ferroelectric RAM (FRAM), or the like. The memory 604 may comprise any storage means (e.g., devices) suitable for retrievably storing machine-readable instructions 606 executable by the processing unit 602.
(41) In one embodiment, the methods and systems described herein may achieve a latency lower than the minimum latency of the individual component decoders (i.e., of the SD and GRAND decoders) while allowing for enhanced error-correction performance. In some embodiments, the error-correction performance of the proposed HGRAND decoding technique is no worse than that of the better-performing of the two individual algorithms, i.e., ML if SGRAND is used and near-ML if ORBGRAND is used. Moreover, the proposed HGRAND decoding technique implements an accelerated GRAND and may decode with lower latency than either of the component SD or GRAND decoders working individually. In addition, the systems and methods described herein may be used to extend GRAND to low-rate polar and RM codes.
(42) The above description is meant to be exemplary only, and one skilled in the art will recognize that changes may be made to the embodiments described without departing from the scope of the invention disclosed. Still other modifications which fall within the scope of the present invention will be apparent to those skilled in the art, in light of a review of this disclosure.
(43) Various aspects of the systems and methods described herein may be used alone, in combination, or in a variety of arrangements not specifically discussed in the embodiments described in the foregoing; the invention is therefore not limited in its application to the details and arrangement of components set forth in the foregoing description or illustrated in the drawings. For example, aspects described in one embodiment may be combined in any manner with aspects described in other embodiments. Although particular embodiments have been shown and described, it will be apparent to those skilled in the art that changes and modifications may be made without departing from this invention in its broader aspects. The scope of the following claims should not be limited by the embodiments set forth in the examples, but should be given the broadest reasonable interpretation consistent with the description as a whole.