DEVICE, SYSTEM AND METHOD FOR EFFICIENT COSET DECODER BY TRANSFORM
20180006767 · 2018-01-04
Inventors
Cpc classification
H03M13/451
ELECTRICITY
H03M13/45
ELECTRICITY
H04L1/0054
ELECTRICITY
International classification
H04L1/00
ELECTRICITY
H03M13/37
ELECTRICITY
H03M13/45
ELECTRICITY
Abstract
A device, system and method for decoding. A noisy version of an error correction codeword may be received, for example, over a noisy communication channel, or retrieved from a memory device (e.g., a flash memory device). One or more symbol probabilities may be transformed from an initial domain to a transformed domain, the symbol probabilities being probabilities that one or more individual symbols of the received error correction codeword were transmitted as one or more symbols in candidate transmitted error correction codewords. In the transformed domain, a plurality of the transformed symbol probabilities may be composed to generate a combined coset probability defining the likelihood that the transmitted error correction codeword associated with the individual symbols belongs to a particular one of a plurality of candidate cosets. A plurality of the coset probabilities for the plurality of respective cosets may be inverse transformed from the transformed domain to the initial domain.
Claims
1. A method for decoding, the method comprising: receiving a noisy version of an error correction codeword; transforming, from an initial domain to a transformed domain, one or more symbol probabilities that one or more individual symbols of the received error correction codeword were transmitted as one or more symbols in candidate transmitted error correction codewords; in the transformed domain, composing a plurality of the transformed symbol probabilities to generate a combined coset probability defining the likelihood that the transmitted error correction codeword associated with the individual symbols belongs to a particular one of a plurality of candidate cosets, wherein the plurality of candidate cosets are mutually exclusive subsets of a codebook defining all possible values of the error correction codeword; and inverse transforming, from the transformed domain to the initial domain, a plurality of the coset probabilities for the plurality of respective cosets.
2. The method of claim 1 comprising receiving the noisy version of an error correction codeword transmitted by a source device to a receiver device over a communication channel.
3. The method of claim 1 comprising reading the noisy version of an error correction codeword previously stored in a memory device by a memory controller.
4. The method of claim 3, wherein the memory device is a flash memory device.
5. The method of claim 1 comprising, in the initial domain, executing a maximum likelihood coset criterion to select the candidate coset that has a maximum one of the plurality of coset probabilities to reduce the set of the plurality of candidate cosets to a most likely coset in which the transmitted error correction codeword most likely belongs.
6. The method of claim 5 comprising executing a sequential coset decoder to iteratively execute the maximum likelihood coset criterion, wherein, in each current iteration, the sequential coset decoder selects a most likely coset of the coset selected in the previous iteration, wherein the most likely coset of the current iteration has a smaller number of candidate error correction codewords than the most likely coset of the previous iteration.
7. The method of claim 6 comprising executing the sequential coset decoder a number of iterations until the number of candidate error correction codewords in a final coset generated in a final iteration is below a threshold number.
8. The method of claim 6 comprising executing the sequential coset decoder a number of iterations until a duration of time estimated for decoding the candidate error correction codewords in a final coset generated in a final iteration is below a threshold duration of time.
9. The method of claim 5 comprising executing a maximum likelihood codeword criterion on the selected coset in order to determine a most likely transmitted error correction codeword in the most likely coset that has a maximum likelihood codeword criteria value.
10. The method of claim 1 comprising executing multiple decoder circuits, in series, wherein the output of a first of the multiple decoder circuits is sent in the transformed domain, without inverse transforming the output, to a second of the multiple decoder circuits.
11. The method of claim 10 comprising decoding a polar code, wherein the multiple decoder circuits generate multiple coset probabilities for multiple kernels of the polar code, respectively, and combining the multiple coset probabilities to generate a combined coset probability for the entire polar code.
12. The method of claim 1, wherein the coset probabilities are generated for each coset by combining the individual symbol probabilities associated with codewords that have a syndrome associated with the coset.
13. The method of claim 12, wherein the coset probabilities are generated for a subset space of syndromes.
14. The method of claim 1, wherein the coset probabilities are generated in the transformed domain using fewer computations than would be used in the initial domain.
15. The method of claim 14 comprising reducing the number of computations from an order of at most n.Math.|F|.sup.k.sup.
16. The method of claim 1, wherein the transformed symbol probabilities are composed in the transformed domain using multiplication or addition computations representing convolution computations in the initial domain.
17. The method of claim 1, wherein the transforming is performed using a Hadamard transform.
18. The method of claim 1, wherein the transformed domain is a log domain and the plurality of coset probabilities are computed based on log likelihoods.
19. A device comprising: one or more memories to store a noisy version of an error correction codeword; and one or more processors to: transform, from an initial domain to a transformed domain, one or more symbol probabilities that one or more individual symbols of the received error correction codeword were transmitted as one or more symbols in candidate transmitted error correction codewords; in the transformed domain, compose a plurality of the transformed symbol probabilities to generate a combined coset probability defining the likelihood that the transmitted error correction codeword associated with the individual symbols belongs to a particular one of a plurality of candidate cosets, wherein the plurality of candidate cosets are mutually exclusive subsets of a codebook defining all possible values of the error correction codeword; and inverse transform, from the transformed domain to the initial domain, a plurality of the coset probabilities for the plurality of respective cosets.
20. The device of claim 19 comprising a receiver device for receiving the noisy version of an error correction codeword transmitted by a source device over a communication channel.
21. The device of claim 19 comprising a memory device including a memory controller, wherein the one or more processors read the noisy version of an error correction codeword previously stored in the memory device by the memory controller.
22. The device of claim 21, wherein the memory device is a flash memory device.
23. The device of claim 19, wherein the processors are configured to, in the initial domain, execute a maximum likelihood coset criterion to select the candidate coset that has a maximum one of the plurality of coset probabilities to reduce the set of the plurality of candidate cosets to a most likely coset in which the transmitted error correction codeword most likely belongs.
24. The device of claim 23, wherein the processors are configured to execute a sequential coset decoder to iteratively execute the maximum likelihood coset criterion, wherein, in each current iteration, the sequential coset decoder selects a most likely coset of the coset selected in the previous iteration, wherein the most likely coset of the current iteration has a smaller number of candidate error correction codewords than the most likely coset of the previous iteration.
25. The device of claim 23, wherein the processors are configured to execute a maximum likelihood codeword criterion on the selected coset in order to determine a most likely transmitted error correction codeword in the most likely coset that has a maximum likelihood codeword criteria value.
26. A non-transitory computer readable medium comprising instructions which when implemented on one or more processors in a maximum likelihood coset decoder cause the decoder to: receive a noisy version of an error correction codeword; transform, from an initial domain to a transformed domain, one or more symbol probabilities that one or more individual symbols of the received error correction codeword were transmitted as one or more symbols in candidate transmitted error correction codewords; in the transformed domain, compose a plurality of the transformed symbol probabilities to generate a combined coset probability defining the likelihood that the transmitted error correction codeword associated with the individual symbols belongs to a particular one of a plurality of candidate cosets, wherein the plurality of candidate cosets are mutually exclusive subsets of a codebook defining all possible values of the error correction codeword; and inverse transform, from the transformed domain to the initial domain, a plurality of the coset probabilities for the plurality of respective cosets.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:
[0029] It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
[0030] Error Correcting Codes (ECC) may be used to mitigate noisy channels in communications. An optimal decoder may minimize an output block error rate by using a maximum likelihood (ML) criterion (e.g., equations (1)-(2)). However, in many cases, implementing such an ML criterion is prohibitive because of the high complexity of the computations required to examine all possible candidate codewords in the codebook.
[0031] According to embodiments of the invention, a sequential decoder may implement a multi-step decoding process, where, in each step, the decoder may reduce the number of possible or candidate codewords to be evaluated for the ML criteria. A sequential coset decoder exploits the property that the ML candidate codewords in each sequential step i are always elements of a coset of the code of ML candidate codewords of the previous step i-1. Cosets may be mutually exclusive (non-overlapping) subsets of a code including different codewords from a codebook of all possible error correction codewords. Each coset may include a distinct set of codewords, where the union of all cosets of the code is equal to the code and includes all possible codewords in the code. In one example, a coset may define a subset of the codebook or possible codewords set that is an affine linear space (described by a translation of a linear space by a constant vector). For example, the set of all binary vectors of length n with an even number of ones and the set of all binary vectors of length n with an odd number of ones are two cosets of the set of binary vectors. Example sequential coset decoders may be used for applications, such as successive cancellation (SC) and belief propagation (BP), employed on general concatenated codes such as polar codes.
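The even/odd parity coset example above can be verified in a few lines; the sketch below is illustrative only (the length n = 4 is a hypothetical choice):

```python
from itertools import product

n = 4  # example codeword length (hypothetical choice)

# Partition all binary vectors of length n into two cosets by parity of ones.
even = [v for v in product((0, 1), repeat=n) if sum(v) % 2 == 0]
odd = [v for v in product((0, 1), repeat=n) if sum(v) % 2 == 1]

# The cosets are mutually exclusive and their union is the whole space.
assert set(even) & set(odd) == set()
assert len(even) + len(odd) == 2 ** n

# The odd coset is a translation of the even (linear) coset by a constant vector.
t = (1, 0, 0, 0)
translated = {tuple((a + b) % 2 for a, b in zip(v, t)) for v in even}
assert translated == set(odd)
```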
[0032] In each step, sequential coset decoders may select the most likely coset from the set of possible coset candidates based on one or more maximum likelihood criteria (e.g., equations (3)-(7)). For example, sequential coset decoders may calculate, for each coset, a numerical value or likelihood score and select the coset with the highest value. Embodiments of the invention may provide methods and apparatuses for efficient implementation of sequential coset decoders that reduce power, area and time of computations. These apparatuses and methods may be used in a variety of applications, including, for example, mobile applications, memory applications, and other suitable applications.
[0033] A method and apparatus according to embodiments of the present invention are provided having an efficient implementation of coset decoding based on a transformation of the input likelihoods into an isomorphic structure that allows fast processing, for example, of coset likelihoods (e.g., equations (8)-(11)). After this processing is performed, an inverse transform may retrieve the results in the required format (e.g., equations (13)-(14)). Using this format, the most probable coset may be selected (e.g., according to equation (3)).
[0034] According to some embodiments of the present invention, multiple coset decoders or circuits may be used, for example, in series, where the output of a first coset decoder circuit is an input to a second coset decoder circuit. Thus, the coset likelihood processing of the first coset decoder circuit is an intermediate step in a calculation that is provided to the additional second coset decoder circuit. In such embodiments, the likelihood data output from the first coset decoder circuit may remain in the transform space and the inverse transform operation may be eliminated, so the input to the second coset decoder circuit includes the processed results of the transformed input. Transform space results may continue to be iteratively transferred to one or more additional circuits, for example, until the likelihood values of cosets are needed, in which case an inverse transform may be performed.
[0035] Various transformations may be used according to embodiments of the present invention. The transformation may include an isomorphic transformation (e.g., an invertible transformation in which the transformed elements are equivalent under isomorphism), a bijective transformation (e.g., a one-to-one correspondence between elements of the transformed domain and the initial domain), a Fourier transform, a fast Fourier transform, a Hadamard transform, a fast Hadamard transform, or any transform in which a convolution operation in the original domain is simplified in the transform domain. Further embodiments of the present invention may perform a log of the above transformations to operate in the log domain, such that the input to the decoder is the logarithm of the likelihood and a log of the transform is calculated. Other transformations may be used.
[0036] The following notations and conventions are used herein. For any natural number l, the term [l] may denote the set of size l, {0, 1, 2, . . . , l-1}. Vectors may be denoted by bold letters and random variables may be denoted by capital letters (e.g., random vectors may be denoted by bold upper-case letters). For j≤i, u.sub.j.sup.i=[u.sub.j, u.sub.j+1, . . . , u.sub.i] may denote a sub-vector of vector u of length i-j+1 (if i<j, then u.sub.j.sup.i=[ ], the empty vector, and its length is 0). Vectors described herein are assumed to be row-vectors, although column vectors, sequences, or other data structures may be used. Column vectors are generated from row vectors u by the transpose operation, u.sup.T.
[0037] Given a set of possible symbol values F, an error correction code (ECC) (denoted as C) of length n symbols may be a subset of an n-dimensional field F.sup.n. The rate of C may be defined, for example, as R=(log.sub.|F||C|)/n, based on the size or magnitude of the code |C|, the size of the field |F| and the length of the code n. For example, as the codeword length n increases (for a fixed code size), the rate R decreases and the speed decreases for communicating or transferring information. If the set of possible symbol values F is a field and C is a linear space in F.sup.n, C is a linear code over F. Such a code C may be defined, in one example, by a generating matrix B having, e.g., k=R.Math.n rows and n columns, such that C={v.Math.B|v∈F.sup.k}. The parameter k may denote the code dimension, e.g., such that |C|=|F|.sup.k (the code dimension k is equal to the base |F| logarithm of the size of the code). A dual space of a linear code C may be denoted as C.sup.⊥ and may be defined as a plurality of all vectors in F.sup.n that are orthogonal to all the codewords in C. Thus, the dual space C.sup.⊥ is orthogonal to its code space C such that any codeword c.sup.⊥∈C.sup.⊥ is orthogonal to any codeword c∈C, e.g., c.sup.⊥.Math.c=0. Codewords c.sup.⊥ of the dual space C.sup.⊥ may be generated by a dual space generating matrix H (e.g., a parity check matrix of C) that may have, for example, n columns and n-k rows, such that c∈C if and only if c.Math.H.sup.T=0. A syndrome s of a length n vector v may be defined, for example, as s=v.Math.H.sup.T. Thus, the syndrome of codewords c∈C is, for example, zero. A syndrome may measure the result of applying the parity check equations (e.g., rows of H) to the values of a vector v. When all the parity check equations are satisfied (e.g., v.Math.H.sup.T is equal to the zero vector), then v is a codeword in C.
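The syndrome definition s=v.Math.H.sup.T can be illustrated concretely over GF(2); the code and parity check matrix below are hypothetical examples, not taken from the specification:

```python
import numpy as np

# Parity check matrix H of a hypothetical [4, 2] binary code,
# with n = 4 columns and n - k = 2 rows.
H = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]], dtype=np.uint8)

def syndrome(v, H):
    # s = v . H^T over GF(2); v is a length-n row vector.
    return (v @ H.T) % 2

# A codeword of C satisfies all parity checks, so its syndrome is zero.
c = np.array([1, 1, 0, 1], dtype=np.uint8)
assert syndrome(c, H).tolist() == [0, 0]

# A non-codeword has a nonzero syndrome, identifying the coset it lies in.
v = np.array([1, 0, 0, 1], dtype=np.uint8)
assert syndrome(v, H).tolist() == [1, 1]
```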
[0038] Consider a channel carrying signals x.fwdarw.y in which a signal x∈X is transmitted using a transmitter over the channel and a noisy version of the signal y∈Y is received at a receiver. The channel may be memoryless, meaning that the channel noise at one time for one transmission x.sub.i.fwdarw.y.sub.i is independent of the channel noise at another time for another transmission x.sub.j.fwdarw.y.sub.j. A maximum likelihood (ML) codeword decoder may determine, for each received channel observation vector y, the most probable transmitted codeword x, for example, by maximizing the following likelihood:
Pr(Y.sub.0.sup.n-1=y.sub.0.sup.n-1|X.sub.0.sup.n-1=x.sub.0.sup.n-1)=Π.sub.j=0.sup.n-1Pr(Y.sub.j=y.sub.j|X.sub.j=x.sub.j)(1).
This likelihood defines the probability or likelihood of receiving a channel observation vector y if a codeword x is sent over the channel. The maximum likelihood codeword decoder may be defined, for example, as:
{circumflex over (x)}=argmax.sub.x∈CPr(Y.sub.0.sup.n-1=y.sub.0.sup.n-1|X.sub.0.sup.n-1=x)(2)
to detect the transmitted codeword {circumflex over (x)}=x.sub.0.sup.n-1∈C that maximizes this probability, e.g., equation (1). If each codeword in an ECC C is sent over the channel with the same probability (e.g., the system has no preference or bias for certain codewords over others, such as preferring 0's over 1's in a binary system), then this maximum likelihood criterion corresponds to a minimum block error probability Pr({circumflex over (X)}≠X) defining a minimum probability of error that the estimated codeword {circumflex over (x)} is incorrect. If codewords in an ECC C are transmitted with bias, then the maximum likelihood criterion above may be replaced with Pr(Y.sub.0.sup.n-1=y.sub.0.sup.n-1|X.sub.0.sup.n-1=x.sub.0.sup.n-1)Pr(X.sub.0.sup.n-1=x.sub.0.sup.n-1) to take the preference into account. In such a case, the criterion may be referred to as the maximum a posteriori probability (MAP) criterion.
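A brute-force ML codeword decoder per equations (1)-(2) can be sketched as follows, assuming a hypothetical binary symmetric channel and a tiny illustrative codebook (neither is specified in the document):

```python
import math

# Hypothetical memoryless binary symmetric channel: crossover probability p.
p = 0.1

def symbol_likelihood(y_j, x_j):
    # Pr(Y_j = y_j | X_j = x_j) for the BSC.
    return 1 - p if y_j == x_j else p

def ml_decode(y, codebook):
    # Equations (1)-(2): maximize the product of per-symbol likelihoods
    # over all candidate codewords (feasible only for small codebooks).
    def likelihood(x):
        return math.prod(symbol_likelihood(yj, xj) for yj, xj in zip(y, x))
    return max(codebook, key=likelihood)

# Tiny illustrative codebook: the length-4 repetition code {0000, 1111}.
codebook = [(0, 0, 0, 0), (1, 1, 1, 1)]
y = (1, 0, 1, 1)  # noisy observation
assert ml_decode(y, codebook) == (1, 1, 1, 1)
```

Exhaustive search such as this is exactly what the coset decoding described below avoids for large codebooks.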
[0039] According to some embodiments of the invention, C.sup.(1) and C.sup.(2) may be two linear codes in a space F.sup.n. C.sup.(1) may be a set of all possible codewords of length n and may contain C.sup.(2), for example, C.sup.(2)⊆C.sup.(1), where C.sup.(2) may be referred to as a sub-code of C.sup.(1). k.sub.1 and k.sub.2 may be used to denote the code dimensions of C.sup.(1) and C.sup.(2), respectively. α may be used to denote a codeword α∈C.sup.(1). A coset of C.sup.(2) in C.sup.(1) induced by codeword α may be defined, for example, as {α+c|c∈C.sup.(2)}. This coset may be denoted as α+C.sup.(2) or C.sup.(2)+α. Because linear spaces are closed under addition, and both codeword α and sub-code C.sup.(2) are contained in C.sup.(1), their sum, coset α+C.sup.(2), is also contained in C.sup.(1), for example, α+C.sup.(2)⊆C.sup.(1). Since C.sup.(1) has |F|.sup.k.sup.1 codewords and each coset of C.sup.(2) has |F|.sup.k.sup.2 codewords, C.sup.(1) is partitioned into |F|.sup.k.sup.1.sup.-k.sup.2 distinct codeword subsets. Furthermore, all the vectors in the same coset have the same syndrome. In other words, if x∈α+C.sup.(2), w∈F.sup.n and H is a parity check matrix of C.sup.(2), then x.Math.H.sup.T=w.Math.H.sup.T if and only if w∈α+C.sup.(2).
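The syndrome partition described above can be checked directly for a small illustrative sub-code (the parity check matrix below is a hypothetical example, not from the specification):

```python
import numpy as np
from itertools import product

# Hypothetical parity check matrix of a sub-code C2 (n = 4, k2 = 2).
H = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]], dtype=np.uint8)

def syndrome(v):
    return tuple((np.array(v, dtype=np.uint8) @ H.T) % 2)

# Group all length-4 binary vectors by syndrome: each group is one coset of C2.
cosets = {}
for v in product((0, 1), repeat=4):
    cosets.setdefault(syndrome(v), []).append(v)

# |F|^(n-k2) = 4 distinct syndromes, i.e. 4 mutually exclusive cosets
# of |F|^k2 = 4 vectors each, covering the whole space.
assert len(cosets) == 4
assert all(len(c) == 4 for c in cosets.values())
```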
[0040] Embodiments of the invention may exploit this syndrome equivalence to identify cosets and thus reduce the maximum likelihood codeword decoder to operate over a smaller coset of the code. This may reduce the relatively high number of computations associated with computing probabilities for all codewords in C.sup.(1) (in maximum likelihood codeword decoding) to a relatively smaller number of computations associated with computing probabilities for the relatively fewer codewords in the maximum likelihood coset of C.sup.(2) in C.sup.(1) (in maximum likelihood coset decoding), thereby reducing computational effort and increasing decoding efficiency. Whereas the maximum likelihood codeword decoder detects the exact codeword that was transmitted, the maximum likelihood coset decoder detects the coset of C.sup.(2) to which the transmitted codeword belongs. Let C.sup.(1)/C.sup.(2) be the set of cosets of C.sup.(2) in C.sup.(1), also known as the factor group of C.sup.(2) in C.sup.(1). The maximum likelihood coset decoder may be defined, for example, as:
{circumflex over (A)}=argmax.sub.A∈C.sup.(1).sup./C.sup.(2)Σ.sub.x∈APr(Y.sub.0.sup.n-1=y.sub.0.sup.n-1|X.sub.0.sup.n-1=x)(3)
[0041] In case the channel is memoryless, the maximum likelihood coset decoder of equation (3) requires |F|.sup.k.sup.1-|F|.sup.k.sup.1.sup.-k.sup.2 additions and |C.sup.(1)|.Math.(n-1)=|F|.sup.k.sup.1.Math.(n-1) multiplications,
where H.sub.j is the j.sup.th column in the matrix H, and H.sup.[j] is the matrix that contains columns 0 . . . j of H. Using the notation, g.sub.j(s)=Pr(Y.sub.j=y.sub.j|X.sub.j=x.sub.j) if there exists x.sub.j∈F such that s=x.sub.j.Math.(H.sub.j).sup.T; otherwise g.sub.j(s)=0. Further denoting,
[0042] Equation (5) represents the maximum likelihood coset criterion as a convolution operation, e.g., between g.sup.(n-2) and g.sub.n-1, denoted as g.sub.n-1*g.sup.(n-2), where,
g.sup.(n-1)=g.sub.0*g.sub.1*g.sub.2* . . . *g.sub.n-1(6).
To calculate g.sup.(n-1)(s) for all syndromes s∈F.sup.n-k.sup.2, the convolutions of equation (6) may be computed for each syndrome value.
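For binary codes, the convolution of equation (6) runs over syndromes with XOR as the group addition; the sketch below is illustrative (the syndrome length and likelihood tables are hypothetical choices, not from the specification):

```python
import numpy as np

m = 2  # hypothetical syndrome length: syndromes are elements of F_2^m

def convolve_gf2(g_a, g_b):
    # (g_a * g_b)(s) = sum over t of g_a(s XOR t) * g_b(t): the group
    # convolution underlying equation (6); inputs are length-2^m tables.
    out = np.zeros(2 ** m)
    for s in range(2 ** m):
        for t in range(2 ** m):
            out[s] += g_a[s ^ t] * g_b[t]
    return out

# Two hypothetical per-symbol likelihood tables g_0, g_1 indexed by syndrome.
g0 = np.array([0.5, 0.2, 0.2, 0.1])
g1 = np.array([0.7, 0.1, 0.1, 0.1])
g = convolve_gf2(g0, g1)
assert abs(g.sum() - g0.sum() * g1.sum()) < 1e-12  # convolution preserves total mass
```

Computed directly like this, each pairwise convolution costs a number of operations quadratic in the table size, which motivates the transform-domain shortcut described below.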
[0043] Coset decoders shrink the target set of candidate codewords from an initial relatively large code C.sup.(1) to a smaller coset α+C.sup.(2) partition of C.sup.(1), thereby reducing the number of maximum likelihood computations used to evaluate the target codewords. This reduction may be sequentially repeated multiple times to iteratively reduce or narrow the size of the target coset and the associated number of maximum likelihood computations using a sequential codeword decoder.
[0044] Sequential codeword decoders contain a plurality of sequential decoding steps such that, in each step, the size or number of candidates in each coset is decreased. Let C.sup.(1), C.sup.(2), . . . , C.sup.(m) be a chain of nested linear codes such that C.sup.(i+1) is a sub-code of C.sup.(i), for i=1, 2, . . . , m-1. A codeword sequential decoder for C.sup.(1), induced by this chain of nesting, may be defined, for example, as follows:
[0045] Step (i): A codeword {circumflex over (α)}.sup.(i) may be received as an output of a previous step (i-1), for example, where i≥2, and {circumflex over (α)}.sup.(1)=0. A codeword from the coset {circumflex over (α)}.sup.(i)+C.sup.(i) may be used to determine a next sequential coset {circumflex over (α)}.sup.(i+1)+C.sup.(i+1), for example, using the following maximum likelihood sequential coset decoder criterion:
where {circumflex over (α)}.sup.(i+1) may be selected such that {circumflex over (α)}.sup.(i+1)+C.sup.(i+1)={circumflex over (α)}.sup.(i)+Â.
[0046] Step (i) may be repeated sequentially, e.g., for 1≤i≤m-2, for some integer m. In each sequential step (i), the input coset {circumflex over (α)}.sup.(i)+C.sup.(i) may be reduced to a relatively smaller coset {circumflex over (α)}.sup.(i+1)+C.sup.(i+1). In each step (i) of the sequential decoder, the size of the coset may decrease, e.g., by a multiplicative factor of |F|.sup.k.sup.i.sup.-k.sup.i+1, and the number of maximum likelihood coset computations may also decrease, e.g., by the same factor. At a certain iteration, the size of the final coset {circumflex over (α)}.sup.(m)+C.sup.(m) may be sufficiently small that the computational effort required for ML codeword decoding (computing individual likelihoods for all candidate codewords in the coset) is minimal, reduced, or below a threshold, so that further reduction of the coset provides little additional benefit. The step (i) iteration may stop and the process may proceed to step (m), for example, when the size of the final coset {circumflex over (α)}.sup.(m)+C.sup.(m) is sufficiently small or below a threshold, the number of computations used for ML codeword decoding is sufficiently small or below a threshold, and/or the time estimated for executing codeword decoding is sufficiently small or below a threshold (e.g., to allow real-time decoding applications such as communications). In one example, the decoder may switch from sequential coset decoding (step (i)) to codeword decoding (step (m)) when the number of codeword decoding computations is less than or equal to the number of coset decoding computations (or a fraction or multiple thereof).
[0047] Step (m) may perform codeword ML decoding on coset {circumflex over (α)}.sup.(m)+C.sup.(m). Because the coset {circumflex over (α)}.sup.(m)+C.sup.(m) has a sufficiently small number of codewords, this ML codeword computation may be executed quickly and efficiently. In some examples, the number of codewords in a sufficiently small coset and the number of computations required in step (m) may vary, e.g., 10, 100, or 1,000, but is generally fewer than 1,000-10,000. In other examples, other threshold numbers of codewords or steps may be used.
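The sequential narrowing of steps (i) through (m) can be sketched for binary codes, where each step keeps the maximum-likelihood coset induced by one additional parity check; the channel, parity check matrix, and observation below are illustrative assumptions, not taken from the specification:

```python
import numpy as np
from itertools import product

# Hypothetical setup: C^(1) = all length-4 binary vectors; each step adds one
# parity check (a row of H), halving the surviving coset, per the nested chain
# C^(1) contains C^(2) contains C^(3) induced by H's rows.
H = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]], dtype=np.uint8)
p = 0.1                      # BSC crossover probability (hypothetical)
y = (1, 1, 0, 0)             # noisy observation

def likelihood(x):
    return np.prod([1 - p if yj == xj else p for yj, xj in zip(y, x)])

candidates = list(product((0, 1), repeat=4))
for row in H:                # step (i): keep the ML coset of the next sub-code
    split = {0: [], 1: []}
    for x in candidates:
        split[int(np.dot(x, row) % 2)].append(x)
    # select the coset (syndrome bit value) with the larger total likelihood
    candidates = max(split.values(), key=lambda cs: sum(likelihood(x) for x in cs))

# step (m): ML codeword decoding inside the final, small coset
x_hat = max(candidates, key=likelihood)
assert x_hat == (1, 1, 0, 0)
```

Each pass shrinks the candidate set by a factor of two, so the final exhaustive search touches only 4 of the original 16 vectors.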
[0048] Examples of codes decoded using coset ML decoders may include general concatenated codes (GCC) ECCs and general polar codes. General concatenated codes (GCC) ECCs are an example in which coset decoders may be applied on inner-codes. General polar codes are a sub-ensemble of the GCC in which concatenation is recursive, e.g., the outer-codes themselves are GCCs. As such, decoding operations for polar codes, e.g. successive cancellation (SC) and belief propagation (BP), may include coset ML decoders as sub-operations within the decoding process. These example codes and operations are not intended to be limiting.
[0049] Some embodiments of the invention may operate in the log domain. In such embodiments, the above likelihoods or probabilities may be measured in the log domain, for example, using log-likelihoods (LLs) such as log.sub.2 (Pr(Y=y|X=x)) or normalized versions thereof such as log-likelihood ratios (LLRs). The above calculations may also be performed in the log domain, which may in some cases improve numerical stability. In such embodiments, multiplications in equation (5) may be replaced by log-likelihood additions; additions in equation (5) may be performed in the log domain using a max*(a,b) operator, e.g., where max*(a,b)=max(a,b)+log.sub.2(1+2.sup.-|a-b|).
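As an illustrative sketch of the base-2 max* operator described above (the operand values are arbitrary examples):

```python
import math

def max_star(a, b):
    # max*(a, b) = max(a, b) + log2(1 + 2^(-|a - b|)):
    # an exact base-2 log-domain replacement for log2(2^a + 2^b).
    return max(a, b) + math.log2(1.0 + 2.0 ** (-abs(a - b)))

# max* computes the log of a sum without leaving the log domain.
a, b = -3.0, -5.0
exact = math.log2(2.0 ** a + 2.0 ** b)
assert abs(max_star(a, b) - exact) < 1e-12
```

Because the identity is exact, replacing additions with max* loses no precision; approximate variants simply drop the correction term.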
[0050] One bottleneck of coset ML decoders is the coset likelihood calculations, e.g., equations (3), (6) and (7). In particular, the convolution operations in equation (5) are relatively complex and often delay decoding processes. Embodiments of the invention may reduce decoding complexity by, prior to processing, transforming the likelihoods from an initial domain to a transform domain to simplify the coset likelihood computations. For example, received likelihoods may be transformed into a transform domain where the convolution operations are replaced by much simpler multiplication operations, or a log transform domain where the convolution operations are replaced by much simpler addition operations, thereby decreasing the computational complexity and increasing decoding speed. Let C.sup.(1) and C.sup.(2) be two linear codes of length n over alphabet F, with dimensions k.sub.1, k.sub.2 such that k.sub.1>k.sub.2. Assume that C.sup.(2) is a sub-code of C.sup.(1) and the coset likelihoods in equation (3) are to be calculated. Using the convolution method (direct method) requires O(n.Math.|F|.sup.2(k.sup.1.sup.-k.sup.2.sup.)) operations.
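For binary codes, the transform-domain idea can be illustrated with a fast Walsh-Hadamard transform (FWHT), which turns the XOR-indexed convolution of syndrome likelihood tables into pointwise multiplication; this is a sketch of the general principle, not the patented implementation, and the likelihood tables are hypothetical:

```python
import numpy as np

def fwht(a):
    # Fast Walsh-Hadamard transform, O(N log N) butterflies for N = 2^m.
    a = a.astype(float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

# XOR-convolution via the transform: convolve <=> multiply pointwise.
g0 = np.array([0.5, 0.2, 0.2, 0.1])
g1 = np.array([0.7, 0.1, 0.1, 0.1])
transformed = fwht(g0) * fwht(g1)
g = fwht(transformed) / len(g0)  # inverse WHT = forward WHT divided by N

# Direct XOR-convolution for comparison.
direct = np.zeros(4)
for s in range(4):
    for t in range(4):
        direct[s] += g0[s ^ t] * g1[t]
assert np.allclose(g, direct)
```

In a chain of decoders, the intermediate `transformed` array could be passed along as-is, deferring the final inverse transform exactly as paragraph [0034] describes.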
[0051] Reference is made to
[0052] Transmitter(s) 110 and receiver(s) 112 may include one or more controller(s) or processor(s) 118 and 120, respectively, configured for executing operations or computations disclosed herein and one or more memory unit(s) 122 and 124, respectively, configured for storing data such as inputs or outputs from each of the operations or computations and/or instructions (e.g., software) executable by a processor, for example for carrying out methods as disclosed herein. Processor(s) 120 may decode a noisy version of the codeword sent from transmitter(s) 110 by first transforming, from an initial domain to a transformed domain, one or more symbol probabilities that one or more individual symbols of the received error correction codeword were transmitted as one or more symbols in candidate transmitted error correction codewords. In the transformed domain, processor(s) 120 may compose a plurality of the transformed symbol probabilities to generate a combined coset probability defining the likelihood that the transmitted error correction codeword associated with the individual symbols belongs to a particular one of a plurality of candidate cosets. The plurality of candidate cosets may be mutually exclusive subsets of a codebook defining all possible values of the error correction codeword. Processor(s) 120 may inverse transform, from the transformed domain to the initial domain, a plurality of the coset probabilities for the plurality of respective cosets. Processor(s) 120 may use the plurality of the coset probabilities for ML coset decoding, sequential ML coset decoding, approximated ML codeword decoding, polar code decoding, or other operations.
[0053] Processor(s) 118 and 120 may include, for example, a central processing unit (CPU), a digital signal processor (DSP), a microprocessor, a controller, a chip, a microchip, an integrated circuit (IC), or any other suitable multi-purpose or specific processor or controller. Processor(s) 118 and 120 may individually or collectively be configured to carry out embodiments of a method according to the present invention by for example executing software or code. Memory unit(s) 122 and 124 may include, for example, random access memory (RAM), dynamic RAM (DRAM), flash memory, volatile memory, non-volatile memory, cache memory, buffers, registers, short term memory, long term memory, or other suitable memory units or storage units. Processor(s) 120 may be part of a general-purpose processor executing special-purpose decoding software or a special-purpose hardware unit that is a dedicated decoder 126.
[0054] Transmitter(s) 110 and receiver(s) 112 may include one or more input/output devices, such as a monitor, screen, speaker or audio player, for displaying, playing or outputting to users results provided by the decoder (e.g., data communicated by transmitter(s) 110 and decoded by decoder 126). They may also include an input device (e.g., a mouse, keyboard, touchscreen, microphone, or audio recorder), for example, to record communication, control the operations of the system and/or provide user input or feedback, such as selecting one or more encoding or decoding parameters, for example: decoding speed; decoding accuracy; a number or threshold number of iterations for a sequential decoder; a threshold size of a coset that may be used for codeword decoding; a threshold amount of time or duration allowable for coset decoding, codeword decoding, or total decoding; or a threshold maximum time delay allowable between the time of receiving a signal and the time of decoding the signal.
[0055] Reference is made to
[0056] In
[0057] Memory controller 158 performs the following tasks: (a) providing the most suitable interface and protocol for both the host 150 and the memory system 156; and (b) efficiently handling data, maximizing transfer speed, and maintaining data integrity and information retention. In order to carry out such tasks, some embodiments of the invention implement an application specific device, for example, embedding one or more processor(s) (e.g. 176 and 180, usually 8-16 bits), together with dedicated hardware or software to handle timing-critical tasks. Generally speaking, memory controller 158 can be divided into multiple parts (e.g., parts 160, 162, 164 and 170), which are implemented either in hardware or in firmware.
[0058] Describing components of memory system 156 from top-to-bottom in
[0059] Memory controller 158 may include one or more controller(s) or processor(s) 176 and 180 for implementation of the ECC encoder 166 and the ECC decoder 168, respectively, configured for executing operations or computations disclosed herein and one or more memory unit(s) 178 and 182, respectively, configured for storing data such as inputs or outputs from each of the operations or computations and/or instructions (e.g., software) executable by a processor, for example for carrying out methods as disclosed herein. Processor(s) 180 may decode a noisy version of the codeword that was computed by the encoder processor 176 by first transforming, from an initial domain to a transformed domain, one or more symbol probabilities that one or more individual symbols of the received error correction codeword were programmed as one or more symbols in candidate programmed error correction codewords. In the transformed domain, processor(s) 180 may compose a plurality of the transformed symbol probabilities to generate a combined coset probability defining the likelihood that the transmitted error correction codeword associated with the individual symbols belongs to a particular one of a plurality of candidate cosets. The plurality of candidate cosets may be mutually exclusive subsets of a codebook defining all possible values of the error correction codeword. Processor(s) 180 may inverse transform, from the transformed domain to the initial domain, a plurality of the coset probabilities for the plurality of respective cosets. Processor(s) 180 may use the plurality of the coset probabilities for ML coset decoding, sequential ML coset decoding, approximated ML codeword decoding, polar code decoding, or other operations.
[0060] Processor(s) 176 and 180 may include, for example, a central processing unit (CPU), a digital signal processor (DSP), a microprocessor, a controller, a chip, a microchip, an integrated circuit (IC), or any other suitable multi-purpose or specific processor or controller. Processor(s) 176 and 180 may individually or collectively be configured to carry out embodiments of a method according to the present invention by for example executing software or code. Memory unit(s) 178 and 182 may include, for example, random access memory (RAM), dynamic RAM (DRAM), flash memory, volatile memory, non-volatile memory, cache memory, buffers, registers, short term memory, long term memory, or other suitable memory units or storage units.
[0061] Reference is made to
[0062] In operation 101, one or more processor(s) may receive an input to the decoder, for example, one or more individual symbol probabilities measuring the likelihoods that one or more individual symbols x of the transmitted error correction codeword X.sub.j were received as one or more symbols y.sub.j in the received noisy error correction codeword Y.sub.j, or y.sub.0.sup.n-1 (also known as the channel observations vector), e.g., Pr(Y.sub.j=y.sub.j|X.sub.j=x) for each x∈F and 0≤j≤n-1. Alternatively or additionally, the processor(s) may compute one or more log likelihoods, e.g., log.sub.2(Pr(Y.sub.j=y.sub.j|X.sub.j=x)), or log-likelihood ratios log.sub.2(Pr(Y.sub.j=y.sub.j|X.sub.j=0))-log.sub.2(Pr(Y.sub.j=y.sub.j|X.sub.j=x)) for x∈F\{0}. Let C.sup.(1) and C.sup.(2) be linear codes over field F of dimensions k.sub.1, k.sub.2 such that k.sub.1>k.sub.2 and C.sup.(2) is a subcode of C.sup.(1).
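The inputs of operation 101 can be illustrated for a binary symmetric channel; the channel model, the crossover probability eps and the helper names below are hypothetical examples for illustration, not values from the specification:

```python
import math

def symbol_likelihoods(y, eps):
    """Pr(Y_j = y_j | X_j = x) for x in {0, 1}, for a binary symmetric
    channel with (hypothetical) crossover probability eps."""
    return [(1 - eps) if y == x else eps for x in (0, 1)]

def llr(y, eps):
    """Log-likelihood ratio log2 Pr(y|X=0) - log2 Pr(y|X=x) for x = 1."""
    p = symbol_likelihoods(y, eps)
    return math.log2(p[0]) - math.log2(p[1])
```

For eps = 0.1, receiving y = 0 yields a positive LLR of log.sub.2(0.9/0.1), and receiving y = 1 yields its negation.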
[0063] In operation 102, one or more processor(s) may perform a transformation of the individual symbol probabilities, for example, to uniquely represent the likelihoods in a transformed space. The transformation may include an isomorphic transformation, a bijective transformation, a Fourier transform, a fast Fourier transform, a Hadamard transform, a fast Hadamard transform, etc. Other transformations may be used.
[0064] In operation 103, one or more processor(s) may compose the plurality of transformed individual symbols probabilities to generate a combined coset probability for each of a plurality of cosets of a code. Each coset probability may define a likelihood that the transmitted error correction codeword associated with the individual symbols belongs to a particular one of a plurality of candidate cosets. In one example, operation 103 may generate a plurality of |F|.sup.k.sup.
[0065] In operation 104, one or more processor(s) may perform an inverse transformation on the plurality of coset probabilities for the plurality of respective cosets to generate the plurality of corresponding coset probabilities in the original domain.
[0066] In operation 105, one or more processor(s) may perform post processing on the inverse transformed data. Post processing converts the output of the inverse transform to the form expected at the next stage. For example, the output at 106 may be expected to be LLRs. In one embodiment, the inverse transform outputs LLs (log-likelihoods), and an appropriate normalization in operation 105 converts them to LLRs. In other embodiments, other normalizations may be used, for example, to scale the output to a particular range such as a dynamic range.
[0067] In operation 106, the decoder may send the plurality of coset probabilities in the original domain, for example, via a wire, to a next operation or circuit that performs the next stage, such as, ML coset decoding (e.g., according to equation (3)), sequential ML coset decoding, ML codeword decoding (e.g., according to equation (2)), polar code decoding or other operations in order to complete the decoding operation.
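For illustration only, the flow of operations 102-104 can be sketched in Python for the binary case using a Walsh-Hadamard transform. The parity-check matrix H, the per-symbol probabilities P, and all function names below are hypothetical examples, not values from the specification:

```python
def wht(v):
    """Iterative Walsh-Hadamard transform over GF(2)^m.  In the binary
    case the transform contains only additions and subtractions."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

def coset_probabilities(H, P):
    """Operations 102-104 for a binary code: transform each per-symbol
    function g_j, multiply pointwise (convolution becomes multiplication),
    then inverse transform to obtain g(s) for every syndrome s."""
    r, n = len(H), len(H[0])
    size = 2 ** r
    G = [1.0] * size
    for j in range(n):
        # Column j of H as an integer syndrome index (row 0 is the MSB).
        col = 0
        for i in range(r):
            col = (col << 1) | H[i][j]
        # g_j(s): symbol value 0 contributes to syndrome 0, value 1 to column j.
        gj = [0.0] * size
        gj[0] += P[j][0]
        gj[col] += P[j][1]
        G = [a * b for a, b in zip(G, wht(gj))]
    return [x / size for x in wht(G)]   # WHT is self-inverse up to 2^r

# Hypothetical 2x4 parity-check matrix and per-symbol likelihoods.
H = [[1, 0, 1, 1],
     [0, 1, 1, 0]]
P = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.7, 0.3]]
g = coset_probabilities(H, P)
```

Because each row of P sums to one, the four returned coset probabilities also sum to one, and each g[s] equals the total probability of all length-4 words whose syndrome is s.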
[0068] According to some embodiments of the invention, multiple decoder circuits as described in reference to
[0069] Reference is made to the definitions and properties of the Hadamard transform used herein. A delta function δ(ω=b), where b, ω∈([p]).sup.m (an m-length vector, where each index has a value between 0 and p-1), may be defined from ([p]).sup.m to a field, e.g., of complex numbers, ℂ. A Hadamard transformation F(ω) 202 may also be defined from ([p]).sup.m to ℂ. For vectors ω and b, ω·b.sup.T=Σ.sub.i=0.sup.m-1 ω.sub.i·b.sub.i, where multiplication and summation are defined as GF(p) operations. Property 204 is an orthogonality property, e.g., between character functions, where ω may be the argument of the functions, b and b′ may be constants, and δ.sub.x,y=1 if and only if x=y and otherwise δ.sub.x,y=0. Equation 205 may define an inverse transformation. Property 206 is a convolution property, e.g., if g(·) is a convolution of two functions f.sub.1(·), f.sub.2(·), then its Hadamard transform G is equal to a multiplication of the Hadamard transforms of f.sub.1(·), f.sub.2(·). Property 207 is a composite function property defining that if g(b)=f(b·A), where A is an invertible matrix of m rows and m columns over [p], then the Hadamard transform of g(·) equals F(ω·(A.sup.-1).sup.T), where F(·) is the Hadamard transform of f(·). Note that the inverse of the matrix A may be calculated over GF(p). Property 208 handles the case of a function g(b) where g(b)=f({tilde over (b)}) if b={tilde over (b)}·A, where A is a full rank matrix of l rows and m columns, b and ω are of length m, and {tilde over (b)} is of length l where l≤m. In other cases g(b)=0. The Hadamard transform of g may be equal in this case to F(ω·A.sup.T).
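Property 208 can be checked numerically for p=2. The function names, the matrix A, and the function f below are hypothetical illustrations; the sketch assumes the row-vector convention G(ω)=F(ω·A.sup.T) stated above:

```python
import itertools
import random

def hadamard(f, m):
    """Direct Hadamard transform over GF(2)^m: F(w) = sum_b f(b)*(-1)^(b.w)."""
    return {w: sum(fb * (-1) ** (sum(bi * wi for bi, wi in zip(b, w)) % 2)
                   for b, fb in f.items())
            for w in itertools.product([0, 1], repeat=m)}

def times(vec, M):
    """Row vector times matrix over GF(2)."""
    return tuple(sum(v * M[i][k] for i, v in enumerate(vec)) % 2
                 for k in range(len(M[0])))

# Hypothetical full-rank l x m matrix A over GF(2) and a random function f.
l, m = 2, 3
A = [[1, 0, 1],
     [0, 1, 1]]
random.seed(1)
f = {b: random.random() for b in itertools.product([0, 1], repeat=l)}

# Property 208: g(b) = f(b~) when b = b~.A, and g(b) = 0 otherwise.
g = {b: 0.0 for b in itertools.product([0, 1], repeat=m)}
for bt, fb in f.items():
    g[times(bt, A)] = fb    # A is full rank, so b~ -> b~.A is injective

F = hadamard(f, l)
G = hadamard(g, m)
```

The transform G of the zero-padded composite g should then coincide with F evaluated at ω·A.sup.T for every ω.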
[0070] Further in accordance with at least one embodiment of the present invention, computation of the Hadamard transform and its inverse may be executed using a fast Hadamard transform (FHT) as shown in
[0071] Reference is made to the decomposition of the Hadamard transform, in which F.sup.(b) denotes the size-p.sup.m-1 transform of the function f([b ω.sub.1.sup.m-1]) for b∈[p]. A closer look at this expression reveals that F(ω.sub.0.sup.m-1) may be derived by a Hadamard transform of size p (denoted in equation 303 as the T operator) on the function g(b)=F.sup.(b)(ω.sub.1.sup.m-1).
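The recursion of this paragraph can be sketched for the binary case (p=2): a transform of size 2.sup.m is assembled from two half-size transforms followed by size-2 butterflies. This is an illustrative sketch, not the patented circuit:

```python
def fht(v):
    """Recursive fast Hadamard transform for the binary case (p = 2).
    A size-2^m transform is built from two size-2^(m-1) transforms
    followed by 2^(m-1) size-2 butterflies, for a total of
    m * 2^(m-1) add/subtract (AddSub) operations."""
    if len(v) == 1:
        return list(v)
    half = len(v) // 2
    a = fht(v[:half])   # transform with the first index bit fixed to 0
    b = fht(v[half:])   # transform with the first index bit fixed to 1
    return [x + y for x, y in zip(a, b)] + [x - y for x, y in zip(a, b)]
```

The result matches the direct definition F(w) = Σ.sub.b f(b)·(-1).sup.b·w while using only m·2.sup.m-1 additions and subtractions instead of roughly 2.sup.2m operations.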
[0072] Reference is made to the binary (p=2) case, according to an embodiment of the invention. Equations 401-405 are a special case of equations 201-205, respectively. In the binary case, the transform, e.g., Hadamard transform, contains only additions and subtractions. The equations of the binary case may also be represented in the log.sub.2 domain, where a signed number may be written in sign-magnitude form as (-1).sup.sign(a)·2.sup.mag(a).
[0073] Reference is made to
[0074] Reference is made to
[0075] In
[0076] Certain embodiments of the present invention may use a circuit as described in
may perform m·2.sup.m-1 iterations of the circuit of
[0077] In some embodiments, a transmitter (e.g., transmitter 110 of .sup.n over a memoryless channel. In other embodiments, a memory controller (e.g., 158 of
.sup.n corresponding to the stored codewords. C.sup.(1) and C.sup.(2) may be two linear codes of length n over [p], with dimensions k.sub.1, k.sub.2 respectively, such that C.sup.(2)⊂C.sup.(1) and k.sub.1>k.sub.2. Let H∈([p]).sup.(n-k.sup.
, for 0≤j≤n-1 may define the symbol probabilities that one or more individual symbols b of the received or read error correction codeword X associated with codewords that have a syndrome s were received as one or more symbols y.sub.j in received noisy error correction codewords Y.sub.j, for example, as:
In order to calculate the coset probabilities:
g(s)=Σ.sub.x s.t. H·x.sup.T=s Π.sub.j=0.sup.n-1 Pr(Y.sub.j=y.sub.j|X.sub.j=x.sub.j),
the following convolution may be performed between the symbol probabilities (g.sub.j(s)).sub.j=0, 1, 2, . . . , n-1, e.g.:
g(s)=(g.sub.0*g.sub.1*g.sub.2* . . . *g.sub.n-1)(s).(10).
[0078] Embodiments of the present invention may also calculate a transform of the symbol probabilities g.sub.j(s), denoted as G.sub.j(ω), simplifying the convolution operation for the coset probability of (10) to the following multiplication operation in the transform space:
G(ω)=G.sub.0(ω)·G.sub.1(ω)·G.sub.2(ω)· . . . ·G.sub.n-1(ω).(11).
[0079] If operating in the log-transform space, the convolution operation of the coset probability of (10) would be simplified to an addition operation in the transform space:
log.sub.2(G(ω))=log.sub.2(G.sub.0(ω))+log.sub.2(G.sub.1(ω))+log.sub.2(G.sub.2(ω))+ . . . +log.sub.2(G.sub.n-1(ω)).
[0080] The transform G.sub.j(ω) may be computed (e.g., in operation 102 of
[0081] After the coset probability in equation (11), or its log-domain equivalent, is calculated, the inverse Hadamard transform may be computed for retrieving the coset probability g(s) corresponding to syndrome s in the initial domain (e.g., in operation 104 of
[0082] In some embodiments of the invention, C.sup.(1) may be a partial space or sub-space of the field with dimension k.sub.1<n. In such cases, decoding over the entire space may provide an over-inclusive set of syndromes. In many cases, the subset of interesting syndromes S (e.g., representing cosets of C.sup.(2) in C.sup.(1)) may be selected using, for example, the following condition:
A⊆{0, 1, . . . , n-k.sub.2-1}, |A|=n-k.sub.2-(k.sub.1-k.sub.2)=n-k.sub.1,(12) [0083] s.t. ∀s∈S, ∀i∈A, s.sub.i=0,
[0084] A subset of the indices of the syndromes, A, may be generated such that all the syndromes of C.sup.(2) in C.sup.(1) are zeros over these indices. It may be assumed that A={0, 1, . . . , n-k.sub.1-1} without loss of generality. Accordingly, the coset probability g(s) may be computed for only the subset of A in which s.sub.0.sup.n-k.sup.
Some embodiments may compute {tilde over (G)}(.sub.n-k.sub.
[0085] Reference is made to
[0086] In operation 702, one or more processor(s) may perform a Hadamard transform on the input symbol probabilities, e.g., equation (8), to generate Hadamard transform probabilities F.sub.j(ω) of size p.
[0087] In operation 703, one or more processor(s) may compute the coset probabilities G(ω) based on those symbols, e.g., as defined by equation (11), in the transform domain. In one example, the processor(s) may compute G.sub.j(ω) in 703 using property 208 in
[0088] In operation 704, one or more processor(s) may compute the coset probability for the subset space of syndromes {tilde over (G)}(.sub.n-k.sub.
[0089] In operation 705, one or more processor(s) may perform a Hadamard inverse transform, e.g., equation (13), of size p.sup.k.sup.
[0090] Other operations or orders of operations may be executed.
[0091] Certain embodiments of the invention operate on extension fields of GF(p), e.g., GF(p.sup.r). In some embodiments, a transmitter (e.g., transmitter 110 of .sup.n over a memoryless channel. In other embodiments, a memory controller (e.g. 158 of
.sup.n corresponding to the stored codewords. C.sup.(1) and C.sup.(2) may be two linear codes in the extension fields (GF(p.sup.r)).sup.n, of dimensions k.sub.1, k.sub.2 respectively, such that C.sup.(2)⊂C.sup.(1) and k.sub.1>k.sub.2. H∈(GF(p.sup.r)).sup.(n-k.sup.
, for 0≤j≤n-1 may be defined, for example, as:
[0092] Embodiments of the invention may reduce this problem to the GF(p) scenario, for example, using the observation that every code of length n over the extension field GF(p.sup.r) can be represented as a code of length n·r over the field GF(p). This representation may be defined such that for each codeword c∈GF(p.sup.r).sup.n, a matching codeword {tilde over (c)}∈GF(p).sup.rn may be defined, such that {tilde over (c)}.sub.j·r.sup.(j+1)·r-1 is the Cartesian representation of c.sub.j, for all 0≤j≤n-1.
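As an illustration of the Cartesian representation for p=2, r=2 (GF(4) elements map to bit pairs), the following hypothetical sketch shows that addition in GF(p.sup.r) becomes componentwise GF(p) addition, which is what makes the length-rn binary image of a linear code linear:

```python
# Cartesian (binary two-tuple) representation of GF(4) = {0, 1, a, a^2},
# built from the primitive polynomial x^2 + x + 1 (so a^2 = a + 1).
# Elements are named by the ordinals 0, 1, 2, 3.
CART = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1)}

def gf4_add(x, y):
    """Addition in GF(4) is componentwise XOR of the Cartesian tuples."""
    bx, by = CART[x], CART[y]
    s = (bx[0] ^ by[0], bx[1] ^ by[1])
    return next(k for k, v in CART.items() if v == s)

def to_binary(c):
    """Map a GF(4) vector of length n to a binary vector of length 2n:
    positions j*r .. (j+1)*r - 1 hold the Cartesian form of c_j (r = 2)."""
    out = []
    for cj in c:
        out.extend(CART[cj])
    return out
```

The key linearity fact is that to_binary of a GF(4) sum equals the bitwise XOR of the two binary images.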
[0093] An equivalent parity check matrix {tilde over (H)}(GF(p)).sup.r(n-k.sup.
[0094] Reference is made to
[0095] In operation 802, one or more processor(s) may perform a Hadamard transform on the input, e.g., equation (15), to generate a Hadamard transform F.sub.j(ω) of size p.sup.r.
[0096] In operation 803, one or more processor(s) may compute equation (11) in the transform domain. In one example, the processor(s) may compute G.sub.j(ω) in 803 using property 208 in
[0097] In operation 804, one or more processor(s) may compute equation (14) in the transform domain. This computation may be used to reduce the complexity of the inverse transform used in step 805.
[0098] In operation 805, one or more processor(s) may perform a Hadamard inverse transform, e.g., equation (13), of size p.sup.(k.sup.
[0099] Other operations or orders of operations may be executed.
[0100] Examples according to some embodiments of the present invention are described as follows.
Example I
[0101] Reference is made to
[0102] In 902, the field GF(2.sup.2) may be defined using a zero of the primitive polynomial p(x)=x.sup.2+x+1, which may be denoted as α. Consequently, in this example, there are four elements in the field (denoted by bold face fonts), 0, 1, α and α.sup.2. These elements may also be represented in Cartesian form as binary two-tuples, for example, [0 0], [1 0], [0 1] and [1 1], respectively. Each element in the field may be assigned a number, symbol, or other representative value, such as, ordinal numbers 0, 1, 2, 3, respectively, in round brackets in 902, for example, to simplify their representation.
[0103] 904 provides an example conversion from parity-check matrix H into a binary matrix {tilde over (H)}∈{0,1}.sup.2×8. The conversion may be performed by converting the elements in H into a 2×2 matrix according to 903. Vectors over GF(2.sup.2) may be converted to binary vectors by replacing each element in them by its binary Cartesian representation. In 905, an example is provided of a vector x∈GF(2.sup.2).sup.4 that is converted into a binary vector {tilde over (x)} of length eight bits. Vector x belongs to a coset of C.sup.(1) having syndrome, e.g., s=x·H.sup.T=α.sup.2. Indeed, multiplying binary vector {tilde over (x)} with {tilde over (H)}.sup.T generates syndrome α.sup.2's binary representation [1 1].
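The conversion of 903-905 can be checked numerically. Since the actual parity-check matrix H of the example appears only in the figure, the H and x below are hypothetical stand-ins; the sketch verifies that the binary syndrome equals the Cartesian representation of the GF(4) syndrome:

```python
# GF(4) = {0, 1, a, a^2}, a a zero of x^2 + x + 1; elements are the
# ordinals 0..3 of 902, with Cartesian forms [0 0], [1 0], [0 1], [1 1].
CART = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1)}
LOG = {1: 0, 2: 1, 3: 2}                      # a^0 = 1, a^1 = a, a^2

def mul(x, y):
    """GF(4) multiplication via discrete logs: a^i * a^j = a^((i+j) mod 3)."""
    if x == 0 or y == 0:
        return 0
    return {v: k for k, v in LOG.items()}[(LOG[x] + LOG[y]) % 3]

def add(x, y):
    """GF(4) addition: XOR of Cartesian tuples."""
    s = tuple(a ^ b for a, b in zip(CART[x], CART[y]))
    return {v: k for k, v in CART.items()}[s]

# 2x2 binary matrix of each element (as in 903), chosen so that
# Cartesian(b * e) = Cartesian(b) . M[e] over GF(2).
M = {e: [list(CART[mul(b, e)]) for b in (1, 2)] for e in range(4)}

# Hypothetical 1x4 parity-check row over GF(4) and vector x.
H = [[2, 1, 3, 1]]
x = [1, 0, 2, 2]

# GF(4) syndrome s = x . H^T.
s = 0
for xj, hj in zip(x, H[0]):
    s = add(s, mul(xj, hj))

# Binary syndrome: expand x to 8 bits and apply the 2x2 blocks of H.
x_bin = [bit for xj in x for bit in CART[xj]]
s_bin = [0, 0]
for j, hj in enumerate(H[0]):
    for col in range(2):
        # Cartesian(x_j * h_j) accumulated into the binary syndrome.
        s_bin[col] ^= (x_bin[2 * j] * M[hj][0][col]) ^ (x_bin[2 * j + 1] * M[hj][1][col])
```

With these stand-in values the GF(4) syndrome is 1, whose Cartesian form [1 0] matches the binary syndrome, mirroring the α.sup.2 / [1 1] agreement of 905.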
[0104] Reference is made to
[0105] In 1001, an input is defined by log-likelihoods, e.g., log.sub.2(Pr(Y.sub.j=y.sub.j|X.sub.j=i)), where y.sub.j is a j.sup.th measurement received over a communication channel, and i is an ordinal notation of the field elements (e.g., specified in 902 in round brackets). An output may be the four values of g({tilde over (s)}) that equal (e.g., up to a multiplicative constant) the log likelihoods of cosets of C.sup.(2). As shown in 1002, the summation (inside the logarithm) is performed over all vectors in GF(2.sup.2).sup.4 having their syndrome equal to s∈GF(2.sup.2), which has a binary representation {tilde over (s)} (the argument of the function). In 1003, the calculations of the Hadamard transform of size 4 are defined in terms of these operations. Function f.sub.j({tilde over (b)})=Pr(Y.sub.j=y.sub.j|X.sub.j=b) is transformed to F.sub.j(ω), where {tilde over (b)} is the binary representation of b in GF(2.sup.2). In 1004, a fast Hadamard transform may be performed using AddSub circuits 1005, e.g., as specified in
[0106] Reference is made to
[0107] A first set of circuits 1101 may transform log-likelihoods {LL.sub.i,j}.sub.i=0,1,2,3 to logs of the Hadamard transforms, denoted {.sub.i,j}.sub.i=0,1,2,3 for j=0,1,2,3, defined in 1003. The convolution property in an initial space in equation (4), simplified to a multiplication property in the transformed space in equation (11), may be implemented by additions in the log-domain transform space represented by 1102. The following may be computed according to property 208 of
[0108] Operating in the log-domain, multiplications are replaced by additions, which may result in 1102. In the log-domain, the product of two numbers a and b, e.g., each represented by its sign and magnitude, may be computed according to the following formulae:
a·b=c
mag(c)=mag(a)+mag(b);sign(c)=xor(sign(a),sign(b)).(17)
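Equation (17) can be sketched as follows; the helper names are hypothetical. The point is that a product of two sign/log-magnitude values costs one addition and one xor, with no multiplier circuit:

```python
import math

def to_sign_mag(x):
    """Represent a nonzero real x as (sign, log2 magnitude): x = (-1)^s * 2^m."""
    return (0 if x > 0 else 1, math.log2(abs(x)))

def from_sign_mag(s, m):
    """Recover the real value from its sign/log-magnitude representation."""
    return (-1) ** s * 2 ** m

def log_domain_mul(a, b):
    """Equation (17): multiply two sign/magnitude numbers with one
    addition (of magnitudes) and one xor (of signs)."""
    sa, ma = a
    sb, mb = b
    return (sa ^ sb, ma + mb)
```

For example, multiplying -3 by 0.5 in this representation adds log.sub.2(3) and -1 and xors the signs, recovering -1.5.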
[0109] Finally, in 1103, an inverse Hadamard transform may be performed. The inverse Hadamard transform may be performed, for example, by entering {}).sub.i=0,1,2,3 elements subtracted by 2 (corresponding to dividing by 4 in the log.sub.2 () domain).
Example II
[0110] Reference is made to
[0111] In 1202, the field GF(2.sup.2) may be defined as in Example I (e.g., in 902 of
[0112] Reference is made to
[0113] In 1301, an input is defined by log-likelihoods log.sub.2(Pr(Y.sub.j=y.sub.j|X.sub.j=i)), where y.sub.j is a j.sup.th measurement received over a communication channel, and i is an ordinal notation of the field elements (e.g., specified in 1202 in round brackets). An output may be the four values of g({tilde over (s)}) that equal (e.g., up to a multiplicative constant) the log likelihood of the cosets of C.sup.(2) in C.sup.(1). As shown in 1302, the summation (inside the logarithm) is performed over all vectors in GF(2.sup.2).sup.4 having their syndrome equal to s∈GF(2.sup.2).sup.2 that has a binary representation {tilde over (s)} (the argument of the function). There are four possible coset syndromes in {0}.sup.2×{0,1}.sup.2, e.g., of the form [0 0 b.sub.0 b.sub.1], where b.sub.0, b.sub.1∈{0,1}. In 1303, a first set of circuits may transform log-likelihoods {LL.sub.i,j}.sub.i=0,1,2,3 to {.sub.i,j}.sub.i=0,1,2,3 for j=0,1,2,3. The convolution property, e.g., equation (11), may be represented by an addition property in the log-domain transform space represented by 1304. The following may be computed according to property 208 of
[0114] Operating in the log-domain, multiplications are replaced by additions, which may result in 1304. Performing the inverse Hadamard transform on all 16 values of .sub.i,j may result in the log-likelihood of all of the cosets of C.sup.(2) in GF(2.sup.2).sup.4. The set of candidate values may be reduced to a subset of interesting syndromes S, e.g., representing four cosets in C.sup.(1). As a consequence, operation 804 from
[0115] Reference is made to
[0116] In operation 1400, one or more processor(s) (e.g., receiver processor(s) 120 of
[0117] In operation 1410, the processor(s) may transform one or more symbol probabilities (e.g., equation (1)) from an initial domain to a transformed domain. The symbol probabilities may measure the likelihood that one or more individual symbols of the received error correction codeword were transmitted as one or more symbols in candidate transmitted error correction codewords. The processor(s) may receive or compute those symbol probabilities. The transformation may be an isomorphism, a bijection, a Fourier transform, a fast Fourier transform, a Hadamard transform, a fast Hadamard transform, or any transform in which a convolution operation in the original domain is simplified in the transform domain. In some embodiments, the transformed domain may be a log domain and the plurality of coset probabilities in operation 1420 may be computed based on log likelihoods.
[0118] In operation 1420, in the transformed domain, the processor(s) may compose a plurality of the transformed symbol probabilities (e.g., as in equation (11) or its log-domain equivalent) to generate a combined coset probability defining the likelihood that the transmitted error correction codeword associated with the individual symbols belongs to a particular one of a plurality of candidate cosets. The plurality of candidate cosets may be mutually exclusive subsets of a codebook defining all possible values of the error correction codeword. The coset probabilities may be generated for each coset by combining the individual symbol probabilities associated with codewords that have a syndrome associated with the coset. In some embodiments, a reduced subset space of syndromes may be considered and the coset probabilities may be generated for the subset space of syndromes. The coset probabilities may be generated in the transformed domain using multiplication or addition computations representing convolution computations in the initial domain. These coset probabilities may be generated using fewer computations than would be used in the initial domain. In one embodiment, the number of computations may be reduced from an order of n·|F|.sup.k.sup.
[0119] In one embodiment, the processor(s) may execute multiple decoder circuits, in series, wherein the output of a first of the multiple decoder circuits is sent in the transformed domain, without inverse transforming the output, to a second of the multiple decoder circuits. The multiple decoder circuits may be used to decode a polar code, wherein the multiple decoder circuits generate multiple coset probabilities for multiple kernels of the polar code, respectively, and the processor(s) combine the multiple coset probabilities to generate a combined coset probability for the entire polar code.
[0120] In operation 1430, the processor(s) may inverse transform a plurality of coset probabilities for the plurality of respective cosets from the transformed domain to the initial domain.
[0121] In operation 1440, in the initial domain, the processor(s) may execute a maximum likelihood coset criterion (e.g., as in equation (3)) to select the candidate coset that has a maximum one of the plurality of coset probabilities. Operation 1440 may reduce the set of the plurality of candidate cosets to a most likely coset in which the transmitted error correction codeword most likely belongs. Operation 1440 may repeat or iterate for a plurality of iterations to execute a sequential coset decoder. A sequential coset decoder may iteratively execute the maximum likelihood coset criterion. In each current iteration, the sequential coset decoder selects a most likely coset of the coset selected in the previous iteration, wherein the most likely coset of the current iteration has a smaller number of candidate error correction codewords than the most likely coset of the previous iteration. The sequential coset decoder may iterate a number of times, for example, until the number of candidate error correction codewords in a final coset generated in a final iteration is below a threshold number and/or until a duration of time estimated for decoding the candidate error correction codewords in a final coset generated in a final iteration is below a threshold duration of time.
[0122] In operation 1450, after selecting a final most likely coset, the processor(s) may execute a maximum likelihood codeword criterion on the selected coset in order to determine a most likely transmitted error correction codeword in the coset selected in operation 1440 that has a maximum likelihood.
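Operations 1440-1450 can be sketched by brute force for a small hypothetical binary code; a practical decoder would use the transform-domain machinery above rather than enumeration, and all names below are illustrative:

```python
import itertools

def ml_coset_then_codeword(H, P):
    """Two-stage decoding sketch: (operation 1440) select the coset
    (syndrome) with the largest total probability, then (operation 1450)
    select the most likely word inside that coset.  Both stages are
    brute force here, for illustration only."""
    r, n = len(H), len(H[0])
    coset_prob = {}
    best_in_coset = {}
    for x in itertools.product([0, 1], repeat=n):
        s = tuple(sum(H[i][j] * x[j] for j in range(n)) % 2 for i in range(r))
        p = 1.0
        for j, b in enumerate(x):
            p *= P[j][b]
        coset_prob[s] = coset_prob.get(s, 0.0) + p
        if p > best_in_coset.get(s, (None, -1.0))[1]:
            best_in_coset[s] = (x, p)
    s_star = max(coset_prob, key=coset_prob.get)   # ML coset criterion (3)
    return s_star, best_in_coset[s_star][0]        # ML codeword criterion (2)
```

In this sketch the coset decision sums the probabilities of all words sharing a syndrome, so the selected coset need not be the one containing the single most probable word.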
[0123] Other operations or orders of operations may be used.
[0124] In accordance with an embodiment of present invention, executed in parallel as used herein may refer to executed substantially concurrently, simultaneously, or during completely or partially overlapping time intervals. For example, operations ab and ab may be executed in parallel as shown in
[0125] In the foregoing description, various aspects of the present invention have been described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the present invention. However, it will also be apparent to a person of ordinary skill in the art that the present invention may be practiced without the specific details presented herein. Furthermore, well-known features may be omitted or simplified in order not to obscure the present invention.
[0126] Unless specifically stated otherwise, as apparent from the foregoing discussions, it is appreciated that throughout the specification discussions utilizing terms such as processing, computing, calculating, determining, or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.
[0127] In accordance with any of the aforementioned embodiments of the invention, systems and methods may be software-implemented using dedicated instruction(s) or, alternatively, hardware-implemented using designated circuitry and/or logic arrays.
[0128] In accordance with any of the aforementioned embodiments of the invention, systems and methods may be executed using an article such as a computer or processor readable non-transitory storage medium, or a computer or processor storage medium, such as for example a memory (e.g. one or more memory unit(s) 122 and 124 of
[0129] Different embodiments are disclosed herein. Features of certain embodiments may be combined with features of other embodiments; thus certain embodiments may be combinations of features of multiple embodiments.
[0130] Although the particular embodiments shown and described above will prove to be useful for the many distribution systems to which the present invention pertains, further modifications of the present invention will occur to persons skilled in the art. All such modifications are deemed to be within the scope and spirit of the present invention as defined by the appended claims.