Efficient implementation of noise whitening post-compensation for narrowband-filtered signals
10826731 · 2020-11-03
Assignee
Inventors
CPC classification
H04B1/1036
ELECTRICITY
International classification
Abstract
Apparatus and methods are provided for noise-whitening post-compensation in a receiver. A first apparatus includes a first whitening filter configured to filter a received signal comprising symbols to generate a first filtered signal. The first apparatus further includes a first decision feedback equalizer having an input coupled to an output of the first whitening filter to receive the first filtered signal. The first decision feedback equalizer is configured to apply decision feedback equalization to the first filtered signal to generate estimates for the symbols of the received signal. A second apparatus includes a decision device configured to generate a symbols decision based on a received signal comprising symbols, a noise predictor configured to predict noise in the received signal, and a subtractor configured to subtract the predicted noise from the received signal to generate a symbols estimate.
Claims
1. An apparatus for noise-whitening post-compensation, the apparatus comprising: a first whitening filter configured to filter a received signal comprising symbols to generate a first filtered signal; a first decision feedback equalizer having an input coupled to an output of the first whitening filter to receive the first filtered signal, the first decision feedback equalizer configured to apply decision feedback equalization (DFE) to the first filtered signal to generate a first symbols estimate for the symbols of the received signal; a first reverser configured to reverse an order of the symbols of the received signal to generate a reversed signal; a second whitening filter having an input coupled to an output of the first reverser to receive the reversed signal, the second whitening filter configured to filter the reversed signal to generate a filtered reversed signal; a second decision feedback equalizer having an input coupled to an output of the second whitening filter to receive the filtered reversed signal, the second decision feedback equalizer configured to apply DFE to the filtered reversed signal to generate a reversed symbols estimate for the symbols of the reversed signal; a second reverser having an input coupled to an output of the second decision feedback equalizer to receive the reversed symbols estimate, the second reverser configured to reverse an order of the symbols of the reversed symbols estimate to generate a second symbols estimate; a first combiner having a first input coupled to an output of the first decision feedback equalizer to receive the first symbols estimate and having a second input coupled to an output of the second reverser to receive the second symbols estimate, the first combiner configured to combine the first symbols estimate and the second symbols estimate to generate a third symbols estimate; a noise predictor having an input coupled to an output of the first combiner to receive the third symbols estimate, the noise 
predictor configured to predict noise in the received signal; and a first subtractor having an input coupled to an output of the noise predictor to receive the predicted noise, the first subtractor configured to subtract the predicted noise from the received signal to generate a fourth symbols estimate, wherein the noise predictor comprises: a decision device having an input coupled to the output of the first combiner, the decision device configured to generate a symbols decision based on the third symbols estimate; a second subtractor having an input coupled to an output of the decision device to receive the symbols decision, the second subtractor configured to subtract the symbols decision from the received signal to generate noise estimates; and a linear predictive coder having an input coupled to an output of the second subtractor to receive the noise estimates, the linear predictive coder configured to apply linear predictive coding (LPC) to the noise estimates to generate the predicted noise.
2. The apparatus of claim 1, wherein the linear predictive coder is further configured to calculate:
{tilde over (z)}[n]=q.sub.1{overscore (z)}[n−1]+q.sub.2{overscore (z)}[n−2]+ . . . +q.sub.M{overscore (z)}[n−M], wherein {tilde over (z)}[n] denotes the predicted noise, {overscore (z)}[n] denotes the noise estimates, and q=q.sub.1, q.sub.2 . . . q.sub.M denotes a prediction filter of length M.
3. The apparatus of claim 2, wherein the first whitening filter and the prediction filter are based on the same filter taps.
4. The apparatus of claim 1, further comprising: a soft-demapper having an input coupled to an output of the first subtractor to receive the fourth symbols estimate, the soft-demapper configured to soft-demap the fourth symbols estimate to generate a first set of log likelihood ratios (LLRs) for the fourth symbols estimate.
5. The apparatus of claim 4, further comprising: a forward error correction (FEC) decoder having an input coupled to an output of the soft-demapper to receive the first set of LLRs, the FEC decoder configured to FEC decode the first set of LLRs to generate a second set of LLRs.
6. The apparatus of claim 5, further comprising: a regenerator having an input coupled to an output of the FEC decoder to receive the second set of LLRs, the regenerator configured to regenerate the symbols of the received signal using the second set of LLRs.
7. The apparatus of claim 5, further comprising: a decoding loop having an input coupled to the output of the FEC decoder to receive the second set of LLRs, the decoding loop being configured to perform one or more iterations of decoding.
8. The apparatus of claim 7, wherein the decoding loop comprises an output coupled to an input of the first decision feedback equalizer, the first decision feedback equalizer being configured to receive the second set of LLRs.
9. The apparatus of claim 8, wherein the decoding loop comprises an output coupled to an input of an equalizer different from the first decision feedback equalizer, the equalizer different from the first decision feedback equalizer being configured to receive the second set of LLRs.
10. The apparatus of claim 1, wherein the first decision feedback equalizer comprises: a feed-forward filter (FFF) having an input and an output; a second combiner having a first input, a second input, and an output; a first decision device having an input and an output; and a feed-back filter (FBF) having an input and an output, wherein: the input of the FFF is coupled to the output of the first whitening filter to receive the first filtered signal generated by the first whitening filter; the first input of the second combiner is coupled to the output of the FFF to receive a second filtered signal generated by the FFF; the second input of the second combiner is coupled to the output of the FBF to receive a third filtered signal generated by the FBF; the input of the first decision device is coupled to the output of the second combiner to receive a fifth symbols estimate generated by the second combiner; the input of the FBF is coupled to the output of the first decision device to receive a sixth symbols estimate generated by the first decision device; the FFF is configured to feed-forward filter the first filtered signal to generate the second filtered signal; the second combiner is configured to combine the second filtered signal generated by the FFF and the third filtered signal generated by the FBF to generate the fifth symbols estimate; the first decision device is configured to generate the sixth symbols estimate based on the fifth symbols estimate; and the FBF is configured to filter the sixth symbols estimate to generate the third filtered signal.
11. The apparatus of claim 10, wherein the first decision device is a soft decision device configured to generate at least one log likelihood ratio.
12. The apparatus of claim 10, wherein the output of the first decision feedback equalizer is coupled to either: the output of the first decision device, wherein the first symbols estimate comprises the sixth symbols estimate; or the output of the second combiner, wherein the first symbols estimate comprises the fifth symbols estimate.
13. A method for noise whitening post-compensation in a receiver, the method comprising: applying a first whitening filter to a received signal comprising symbols to generate a first filtered signal; applying decision feedback equalization (DFE) to the first filtered signal to generate a first symbols estimate for the symbols of the received signal; reversing an order of the symbols of the received signal to generate a reversed signal; applying a second whitening filter to the reversed signal to generate a filtered reversed signal; applying DFE to the filtered reversed signal to generate a reversed symbols estimate for the symbols of the reversed signal; reversing an order of the symbols of the reversed symbols estimate to generate a second symbols estimate; combining the first symbols estimate and the second symbols estimate to generate a third symbols estimate; predicting noise in the received signal; and subtracting the predicted noise from the received signal to generate a fourth symbols estimate, wherein predicting the noise in the received signal comprises: generating a symbols decision based on the third symbols estimate; subtracting the symbols decision from the received signal to generate noise estimates; and applying linear predictive coding (LPC) to the noise estimates to generate the predicted noise.
14. The method of claim 13, wherein applying LPC to the noise estimates comprises calculating:
{tilde over (z)}[n]=q.sub.1{overscore (z)}[n−1]+q.sub.2{overscore (z)}[n−2]+ . . . +q.sub.M{overscore (z)}[n−M], wherein {tilde over (z)}[n] denotes the predicted noise, {overscore (z)}[n] denotes the noise estimates, and q=q.sub.1, q.sub.2 . . . q.sub.M denotes a prediction filter of length M.
15. The method of claim 13, wherein applying DFE to the first filtered signal comprises: feed-forward filtering the first filtered signal to generate a second filtered signal; combining the second filtered signal and a third filtered signal to generate a fifth symbols estimate; generating a sixth symbols estimate based on the fifth symbols estimate; and feed-back filtering the sixth symbols estimate to generate the third filtered signal.
16. The method of claim 15, wherein generating the sixth symbols estimate based on the fifth symbols estimate comprises generating a soft symbols decision based on the fifth symbols estimate.
17. The method of claim 16, wherein generating the soft symbols decision based on the fifth symbols estimate comprises generating at least one log likelihood ratio.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Embodiments will now be described in detail with reference to the accompanying diagrams.
DETAILED DESCRIPTION
(11) A solution to the aforementioned problem of noise coloring involves post-compensation through the use of a whitening filter followed by a post-compensation (non-linear) equalizer. Post-compensation is shown in
(12)
(13) The received signal in
(14) The linear equalizer 100 equalizes the received signal, which may reduce the associated ISI due to distortion. In some embodiments, the linear equalizer 100 is a 2×2 MIMO (or Butterfly) equalizer. However, other embodiments may use other types of linear equalizers. In some implementations of
(15) As discussed above, the linear equalizer 100 causes amplification and coloring of the noise in the X polarization component and the Y polarization component. Unless appropriate correction is applied, this amplified and colored noise may significantly degrade the BER performance of the system, requiring a higher SNR (signal-to-noise ratio) to achieve error-free post-FEC decoding.
(16) The signal at the output of the linear equalizer 100 may be denoted by r.sub.p[n], where p refers to the polarization of the signal (X or Y), and n is the symbol index of the received signal. The variable r.sub.p[n] may be expressed as:
r.sub.p[n]=s.sub.p[n]+{overscore (z)}.sub.p[n].(1)
(17) Here, s.sub.p[n] is the transmitted symbol and {overscore (z)}.sub.p[n] is the colored additive noise.
(18) To address the issue of noise coloring, the post-compensation block 102 is implemented, which includes a filter followed by a post-compensation equalizer. In some embodiments, the filter in the post-compensation block 102 is a whitening filter. The filter in the post-compensation block 102 reduces the coloring of noise in the X polarization component and the Y polarization component.
(19) Following the filtering stage in the post-compensation block 102, the colored noise is whitened. However, the symbols in the signal are now correlated as a result of the filtering. This correlation results in distortion of the X polarization component and the Y polarization component of the signal, leading to ISI. The purpose of the post-compensation non-linear equalizer in the post-compensation block 102 is to equalize the ISI due to this correlation, without amplifying or coloring the whitened noise as a result of filtering. As a result, the overall detection and decoding of the received signal is improved.
(20) In some embodiments, the equalization in the post-compensation block 102 results in no noise enhancement or minimal noise enhancement.
(21) Following the equalization in the post-compensation block 102, the X polarization component and the Y polarization component are forwarded to the FEC decoder 104 for decoding. The FEC decoder decodes the X polarization component and the Y polarization component to produce decoded bits. In some embodiments, both the X and Y polarizations can be jointly encoded at the transmitter and decoded at the receiver. The FEC decoding may correct for detection errors of the received signal.
(22) When using the optional FEC decoding loop 103, the output of the post-compensation block 102 is sent to the FEC decoder 104 as a-priori information. Following an iteration of decoding, the FEC decoder 104 provides output or extrinsic information in terms of log-likelihood ratios (LLRs) to the post-compensation equalizer of the post-compensation block 102 as a-priori information for the next iteration of equalization. This process is known as turbo-equalization.
(23) Post-compensation, such as that performed in the post-compensation block 102, will now be discussed in greater detail.
(24) Referring to
(25) The received symbols in
(26) The filter tap calculation block 200 calculates filter taps, which characterize the quality of the channel, the transmitter and/or the receiver that the signal carrying the received symbols has traversed. This channel, transmitter and/or receiver may be considered an effective channel that the signal carrying the received symbols has traversed. As noted above, this effective channel may include a linear equalizer. The filter taps also represent the noise coloring associated with the received symbols. The filter taps are denoted as g=g.sub.1, g.sub.2 . . . g.sub.M, where M denotes the filter length. The filter taps may also be considered filter coefficients.
(27) In some embodiments, the computation of these filter taps is based on pilot signals, which are signals that are known on the receiver side as well as on the transmitter side. The filter tap calculation may be performed using a variety of methods, including autoregressive spectrum estimation and adaptive methods such as least mean square.
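By way of illustration only, the autoregressive approach to filter tap calculation may be sketched as follows. The Levinson-Durbin recursion shown here is one standard way to solve the underlying Yule-Walker equations; the AR(2) noise model, sample count, and variable names are illustrative assumptions rather than part of the described apparatus.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations for whitening-filter taps given
    autocorrelation lags r[0..order] (Levinson-Durbin recursion)."""
    a = np.array([1.0])
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + np.dot(a[1:], r[m - 1:0:-1])
        k = -acc / err                      # reflection coefficient
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]                 # update the prediction-error filter
        err *= (1.0 - k * k)                # updated prediction-error power
    return a, err

# Illustrative check: colored noise from an assumed AR(2) model whose exact
# whitening filter is [1, -0.6, 0.2]; recover the taps from autocorrelations.
rng = np.random.default_rng(0)
n_samp = 100_000
w = rng.standard_normal(n_samp)
z = np.zeros(n_samp)
for i in range(2, n_samp):
    z[i] = 0.6 * z[i - 1] - 0.2 * z[i - 2] + w[i]

M = 3  # filter length
r = np.array([np.dot(z[:n_samp - k], z[k:]) / n_samp for k in range(M)])
g_hat, pred_err = levinson_durbin(r, M - 1)   # close to [1, -0.6, 0.2]
```

In a receiver, the autocorrelation lags would instead be estimated from pilot-aided noise samples, and adaptive alternatives such as least mean square could replace the block recursion.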
(28) In some implementations of
(29) In some implementations of
(30) Once the filter taps are calculated, they are sent to the filter 202. In some embodiments, the filter 202 is a whitening filter used to whiten the colored noise associated with the received symbols. However, as discussed above, filtering the received symbols may lead to the filtered symbols becoming correlated.
(31) The filtered signal output from the filter 202 is denoted as a.sub.p[n], which may be expressed as:
a.sub.p[n]=Σ.sub.m=1.sup.M g[m]s.sub.p[n−m]+{circumflex over (z)}[n].(2)
(32) Here, {circumflex over (z)}[n] is the white additive noise produced by filtering the colored noise {overscore (z)}.sub.p[n].
(33) The equalizer 204 may be considered a post-compensation equalizer, a non-linear equalizer, a post-compensation non-linear equalizer, or a second stage equalizer. Several different equalization methods may be used by the equalizer 204. In one example, the equalizer 204 uses a BCJR (Bahl, Cocke, Jelinek and Raviv) method, which is the optimal symbol-by-symbol detector. In another example, the equalizer 204 uses a maximum likelihood sequence estimation (MLSE) method, such as a Viterbi algorithm or a soft output Viterbi algorithm (SOVA). The MLSE method is the optimal sequence detector. However, both the BCJR and MLSE methods may suffer from a large computational complexity.
(34) The BCJR and MLSE methods, as well as the modified versions of the BCJR and MLSE methods including reduced-state and bi-directional methods, are built on a trellis structure. The computational complexity of the trellis increases exponentially as the symbol constellation size (e.g., QPSK (quadrature phase shift keying), 64-QAM (quadrature amplitude modulation)) and the filter length increase. In addition, the trellis may introduce significant delays. These issues make the BCJR and MLSE methods difficult to implement in hardware when resources are limited, even with parallelization of the trellis structures. Resources that are limited in hardware equalizer implementations may include power, cooling capacity and number of gates.
(35) The computational complexity and delays associated with the BCJR and MLSE methods are compounded when the post-compensation equalizer is implemented inside of a decoding loop, such as the FEC decoding loop 103 illustrated in
(36) Aspects of the present disclosure relate to post-compensation methods and post-compensators that can perform noise-whitening and post-compensation equalization with comparable BER performance to the optimal method using fewer resources. According to some of these aspects, the computational complexity and delay of the post-compensation methods increases linearly with symbol constellation size and whitening filter length. In addition, the post-compensation methods may be implemented in combination when better equalizer performance is required, or separately when a low computational complexity is required.
(37) In some embodiments of the present disclosure, decision feedback equalization (DFE) is used as a method for post-compensation equalization. DFE is a non-linear equalization method that may exhibit improved BER performance compared to linear equalizers. DFE may be especially useful with severely distorted channels, for example, when the roots of the Z-transform of the channel impulse response are close to the unit circle. The computational complexity of the DFE method is also relatively low compared to the BCJR and MLSE methods. For example, the computational complexity of the DFE method increases linearly with symbol constellation size and whitening filter length.
(38) Referring to
(39) In
(40) The transmitted symbols s may be received from a linear equalizer, such as linear equalizer 100, and are input into the whitening filter 300. Although not shown in
(41) Similar to equation (2) discussed above, the filtered signal a illustrated in
a.sub.p[n]=Σ.sub.m=1.sup.M g[m]s.sub.p[n−m]+{circumflex over (z)}[n]=Gs.sub.p+{circumflex over (z)}.(3)
(42) The DFE method processes the filtered signal a and provides the decided symbols . The DFE method includes two main filters: FFF 302 and FBF 306. Both filters may be optimized based on the minimum mean squared error (MMSE) criterion. The FFF 302 processes filtered signal a, and the FBF 306 forms a weighted linear combination of the previous symbol decisions by decision device 304. The FBF 306 then cancels the ISI caused by the previous symbols from the output of the FFF 302 to produce the estimated symbols
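For illustration, the FFF/decision/FBF loop described above can be sketched as follows, assuming the filter taps f and b have already been computed. The hard-decision device, the QPSK constellation, and the toy zero-forcing taps in the check are illustrative assumptions.

```python
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def dfe_detect(a, f, b, constellation):
    """Decision feedback equalization of the whitened signal a.
    The FFF filters a; the FBF forms a weighted combination of past
    decisions, whose ISI is subtracted before each new decision."""
    n, Lf = len(a), len(f)
    a_pad = np.concatenate([np.zeros(Lf - 1, dtype=complex), a])
    decisions = np.zeros(n, dtype=complex)
    estimates = np.zeros(n, dtype=complex)
    for k in range(n):
        ff = np.dot(f, a_pad[k:k + Lf][::-1])          # FFF output, newest first
        past = decisions[max(0, k - len(b)):k][::-1]   # past decisions, newest first
        fb = np.dot(b[:len(past)], past)               # FBF output
        estimates[k] = ff - fb                         # ISI-cancelled estimate
        decisions[k] = constellation[
            np.argmin(np.abs(constellation - estimates[k]))]
    return decisions, estimates

# Illustrative check with a toy monic filter g = [1, 0.5] and no noise:
# zero-forcing DFE taps f = [1], b = [0.5] recover the symbols exactly.
rng = np.random.default_rng(1)
s = QPSK[rng.integers(0, 4, 100)]
a = np.convolve([1.0, 0.5], s)[:100]
dec, _ = dfe_detect(a, np.array([1.0]), np.array([0.5]), QPSK)
```

In practice the taps would come from the MMSE solution discussed below, and the decision device could be soft rather than hard.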
(43) Here, f[k] and b[k] are the time-domain representations of F(z)=Σ.sub.k f[k]z.sup.−k and B(z)=Σ.sub.k b[k]z.sup.−k, respectively. L.sub.F and L.sub.B denote the number of taps for the FFF and FBF, respectively. Solving equation (4) for f and b, the following equations are produced:
f=((Φ.sub.gg−D D.sup.H)+σ.sup.2I).sup.−1 g; and(5)
b=D.sup.H f.(6)
(44) Here, D is the convolution matrix of the filter g, Φ.sub.gg is the autocorrelation matrix of the filter taps g, σ.sup.2 is the noise variance per real dimension and I is the identity matrix. In some implementations of
(45) A problem associated with the implementation of the DFE method is the inverse matrix computation required in equation (5). However, this computation may be simplified by noting that the matrix Γ=(Φ.sub.gg−D D.sup.H)+σ.sup.2I is a positive definite matrix. Therefore, Γ may be factorized into Γ=L*L using Cholesky decomposition, where L is a lower triangular matrix, so that Γ.sup.−1=(L*L).sup.−1=L.sup.−1(L*).sup.−1. Since L is a lower triangular matrix, the computation of the inverse of L can be implemented efficiently using row transformation operations.
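A minimal sketch of this simplification: rather than forming an explicit inverse for equation (5), the positive definite system is factorized once and the feed-forward taps are obtained by two triangular solves. The random test matrix here merely stands in for the actual matrix to be inverted; the function name is illustrative.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def fff_taps_via_cholesky(gamma, g):
    """Solve gamma @ f = g without explicitly inverting gamma:
    factor gamma = U^H U (Cholesky; gamma is positive definite)
    and run two triangular back-substitutions."""
    U = cholesky(gamma, lower=False)          # upper-triangular factor
    y = solve_triangular(U, g, trans='C')     # solve U^H y = g
    return solve_triangular(U, y)             # solve U f = y

# Illustrative check against a generic symmetric positive definite matrix.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
gamma = A @ A.T + np.eye(6)
g_vec = rng.standard_normal(6)
f = fff_taps_via_cholesky(gamma, g_vec)
```

The feedback taps then follow from equation (6) as b = D^H f.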
(46) The decision device may be implemented in whole or in part in hardware, firmware, one or more components that execute software, or some combination thereof. According to some embodiments of
(47) According to other embodiments, the decision device 304 is a soft decision device. A soft decision device requires knowledge of the noise variance in a signal and provides a better estimate of the decided symbols. A soft decision device may reduce the effect of error propagation in DFE compared to a hard decision device. In these embodiments, decided symbols are soft decision symbols. The soft decision device generates the soft decision symbols based on two steps. In the first step, the soft decision device computes the log likelihood ratio (LLR) .sub.b of the bits constituting the estimated symbols
(48)
(49) In some embodiments, the implementation of the tanh function is performed using a look-up table consisting of relatively few elements by exploiting the following properties of the tanh function: a) The tanh function has odd symmetry, thus tanh(−x)=−tanh(x); and b) The tanh function approaches 1 as x approaches infinity, thus tanh(x)≈1 for x≥3.
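A minimal sketch of such a look-up table, exploiting the odd symmetry and the saturation beyond x≈3 (the table size of 64 entries is an illustrative assumption):

```python
import numpy as np

LUT_SIZE = 64
LUT_STEP = 3.0 / LUT_SIZE
# Table covers only [0, 3): odd symmetry supplies negative inputs and
# saturation (tanh(x) ~ 1 for x >= 3) supplies large ones.
TANH_LUT = np.tanh(np.arange(LUT_SIZE) * LUT_STEP)

def tanh_lut(x):
    """Approximate tanh using a 64-entry table on [0, 3)."""
    x = np.asarray(x, dtype=float)
    sign, mag = np.sign(x), np.abs(x)
    idx = np.minimum((mag / LUT_STEP).astype(int), LUT_SIZE - 1)
    return sign * np.where(mag >= 3.0, 1.0, TANH_LUT[idx])
```

With 64 entries the absolute error stays below the table step of about 0.047, which may be adequate for soft-decision generation.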
(50) When larger constellations are used, such as higher order QAM constellations, the computation of LLRs may become more computationally complex. However, some aspects of the present disclosure use simplified formulas for the computation of LLRs to reduce complexity, as described below.
(51)
(52) For each dimension of the 16 QAM constellation illustrated in
(53)
(54) Here, d denotes the real or imaginary component of estimated symbols
(55)
(56) Although equations (9) to (13) do not represent an exact computation of LLRs, these equations may be less computationally complex than the exact calculation. The approximation of the tanh function described above may also be implemented in equations (11) and (12) to improve the speed and efficiency of the soft symbol decision device.
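Although equations (9) to (13) are not reproduced here, the flavor of such simplified LLR computations can be illustrated with the standard max-log approximation for one Gray-mapped 4-PAM dimension of a 16-QAM symbol. The bit labeling and noise-variance scaling below are illustrative assumptions and are not necessarily the exact equations (9) to (13).

```python
import numpy as np

PAM_LEVELS = np.array([-3.0, -1.0, 1.0, 3.0])   # one dimension of 16-QAM
BIT_SIGN  = np.array([1, 1, 0, 0])               # 1 on negative levels
BIT_INNER = np.array([0, 1, 1, 0])               # 1 on inner levels +-1

def maxlog_llrs(d, sigma2):
    """Max-log LLRs for the two bits carried by one 4-PAM dimension:
    LLR = (min over bit=1 levels of (d-s)^2
           - min over bit=0 levels of (d-s)^2) / (2*sigma2)."""
    metrics = (d - PAM_LEVELS) ** 2
    return [(metrics[bits == 1].min() - metrics[bits == 0].min())
            / (2.0 * sigma2)
            for bits in (BIT_SIGN, BIT_INNER)]
```

For |d| ≤ 2 the sign-bit LLR collapses to 2d/σ² and the inner/outer-bit LLR to 2(|d|−2)/σ², i.e., piecewise-linear expressions whose cost grows only linearly with the number of bits per symbol, which is the kind of simplification referred to above.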
(57) In further embodiments of the present disclosure, linear predictive coding (LPC) is used as a method for post-compensation equalization. LPC is the process of predicting the value of a sample based on a linear combination of past samples. The noise associated with the symbols received from a linear equalizer is correlated, and exploiting this noise correlation may lead to additional improvements in the operation of a post-compensation equalizer. In other words, the LPC method may be implemented to predict the noise sample associated with a current received symbol based on the noise samples associated with past symbols.
(58) Mathematically, predicted noise samples {tilde over (z)} that are provided using the LPC method may be expressed as:
{tilde over (z)}[n]=q.sub.1{overscore (z)}[n−1]+q.sub.2{overscore (z)}[n−2]+ . . . +q.sub.M{overscore (z)}[n−M]={overscore (z)}[n]−e[n].(14)
(59) Here, q=q.sub.1, q.sub.2 . . . q.sub.M represents a prediction filter, and e represents a prediction error. In some embodiments, the prediction filter that minimizes the mean square error (MSE) corresponds to the filter taps g discussed above, with the first tap set to zero, i.e., g(1)=0. In these embodiments, equation (14) may be expressed as:
{tilde over (z)}[n]=g.sub.2{overscore (z)}[n−1]+g.sub.3{overscore (z)}[n−2]+ . . . +g.sub.M{overscore (z)}[n−M+1].(15)
(60) The computational complexity associated with the implementation of the LPC method may be relatively low. For example, the LPC method is independent of constellation size and is linearly dependent on the length of the prediction filter. In some implementations, the length of the prediction filter may be 3 to 5. One disadvantage of the LPC method is that it adds prediction error to the predicted noise samples. However, in some implementations of the LPC method the noise reduction due to subtracting the predicted noise samples is greater than the added prediction error, and therefore the LPC method is beneficial.
(61) Referring to
(62) In
(63) The decided symbols are subtracted from the received signal by subtractor 502 to give an estimate of the noise associated with the received symbols r. The estimate of the noise associated with the received symbols r is the colored additive noise
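The decision/subtraction/prediction loop of this post-compensator may be sketched as follows. The prediction filter q is assumed already known, and the hard-decision device, QPSK constellation, and AR(1) noise model in the check are illustrative assumptions.

```python
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def lpc_post_compensate(r, q, constellation):
    """Per symbol: subtract the predicted noise from the received sample,
    decide, and form a colored-noise estimate (received minus decision)
    that feeds the predictor for later symbols."""
    M = len(q)
    noise_est = np.zeros(len(r), dtype=complex)
    out = np.zeros(len(r), dtype=complex)
    for n in range(len(r)):
        past = noise_est[max(0, n - M):n][::-1]       # newest estimate first
        out[n] = r[n] - np.dot(q[:len(past)], past)   # subtract predicted noise
        decision = constellation[np.argmin(np.abs(constellation - out[n]))]
        noise_est[n] = r[n] - decision                # colored-noise estimate
    return out

# Illustrative check: AR(1) colored noise with coefficient 0.8, predicted
# with the matching one-tap filter q = [0.8].
rng = np.random.default_rng(3)
s = QPSK[rng.integers(0, 4, 5000)]
w = 0.1 * (rng.standard_normal(5000) + 1j * rng.standard_normal(5000))
z = np.zeros(5000, dtype=complex)
for n in range(1, 5000):
    z[n] = 0.8 * z[n - 1] + w[n]
r = s + z
out = lpc_post_compensate(r, np.array([0.8]), QPSK)
mse_before = np.mean(np.abs(r - s) ** 2)
mse_after = np.mean(np.abs(out - s) ** 2)
```

Here the residual after subtraction approaches the white innovation w, so mse_after falls below mse_before; the gain shrinks if decision errors contaminate the noise estimates.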
(64) In some embodiments of the present disclosure, a combination of the DFE and LPC methods is used for post-compensation equalization. For example, the DFE method followed by the LPC method may be used for post-compensation equalization.
(65) Referring to
(66) The received signal r is sent to the whitening filter 602, which sends its output to the DFE block 606, producing the estimated symbols
(67) The received signal r is an equalized signal with colored noise, and is a combination of the transmitted symbols s and the colored additive noise
(68) The combination of the whitening filter 602 and the DFE block 606 operates in a similar manner to the post-compensator illustrated in
(69) In
(70) The estimated symbols
(71) Following the generation of the estimated symbols
(72) The LPC block 614 performs the LPC method on the noise estimates. The LPC block 614 may operate in a similar manner to the LPC block 504 illustrated in
(73) The post-compensator of
(74) In some embodiments of the present disclosure, the post-compensators illustrated in
(75) In a turbo-equalization loop, the computational complexity and delay associated with the post-compensation equalizers may form a bottleneck from an implementation point of view. According to some embodiments of the present disclosure, a low complexity implementation of a post-compensation equalizer that is suitable for turbo-equalization loops is provided, which produces a BER performance comparable to the BCJR and MLSE methods.
(76) Referring to
(77) In
(78) The received signal in
(79) The filter tap calculation block 700 of
(80) After the filter taps are calculated, the post-compensator of
(81) Following the filter 702 in the first iteration, post-compensation equalization is performed in the DFE/LPC block 704, which may use the DFE method, the LPC method, or a combination of the DFE method and the LPC method. In some implementations, post-compensation equalization is performed in the DFE/LPC block 704 using a method similar to the method illustrated in FIG. 6. In other implementations, the DFE/LPC block 704 does not use the LPC method at all. In these implementations, post-compensation equalization may be performed in the DFE/LPC block 704 using the post-compensation equalization method illustrated in
(82) The DFE/LPC block 704 produces an estimate of the specific symbol that is then sent to the LLR calculation block 705. The calculation block 705 then computes an LLR value (or LLR values) for the estimate of the specific symbol. This LLR value is then sent to the scaling block 708, which multiplies the LLR value by a scaling factor. Following the scaling block 708, the LLR value is then sent to the FEC decoder 712, which recovers the transmitted codeword. The scaling block 708 may be implemented, for example, if the FEC decoder 712 has a limited fixed point precision. It should be noted that the LPC block 706 is bypassed in the first iteration of the turbo-equalization loop.
(83) In the second iteration of the turbo-equalization loop of
(84) The updated LLRs of the soft symbol estimates that are sent to the FEC decoder 712 in the second iteration may be more accurate than the LLRs of the soft symbol estimate sent to the FEC decoder 712 in the first iteration. The post-equalization method of
(85) Although the LLR calculation blocks 705 and 707 are described as producing LLR values, in general, the LLR calculation blocks 705 and 707 may produce any appropriate reliability measure for the estimated symbols.
(86)
(87) The example operations 800 are illustrative of an example embodiment. Various ways to perform the illustrated operations, as well as examples of other operations that may be performed, are described herein. Further variations may be or become apparent.
(88) For example, in some embodiments, applying DFE in block 804 may include feed-forward filtering the first filtered signal to generate a second filtered signal; combining the second filtered signal and a third filtered signal to generate a first symbols estimate; generating a second symbols estimate or decision based on the first symbols estimate; and feed-back filtering the second symbols estimate or decision to generate the third filtered signal. In further embodiments, generating the second symbols estimate or decision based on the first symbols estimate includes generating a soft symbols decision based on the first symbols estimate. In some embodiments, generating the soft symbols decision based on the first symbols estimate includes generating at least one log likelihood ratio.
(89) In some embodiments, the example operations 800 also include reversing an order of the symbols of the received signal to generate a reversed signal; applying a whitening filter to the reversed signal to generate a filtered reversed signal; applying DFE to the filtered reversed signal to generate a reversed symbols estimate for the symbols of the reversed signal; reversing an order of the symbols of the reversed symbols estimate to generate a third symbols estimate; combining the second symbols estimate and the third symbols estimate to generate a fourth symbols estimate; and generating a symbols decision based on the fourth symbols estimate
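For illustration, the reverse/equalize/re-reverse/combine flow described above may be sketched as below, using a minimal hard-decision DFE and a monic toy filter. Combining by averaging is an assumption, since the text only specifies that the two estimates are combined.

```python
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def dfe(a, b):
    """Minimal hard-decision DFE with a trivial feed-forward tap f = [1]."""
    dec = np.zeros(len(a), dtype=complex)
    est = np.zeros(len(a), dtype=complex)
    for k in range(len(a)):
        past = dec[max(0, k - len(b)):k][::-1]         # newest decision first
        est[k] = a[k] - np.dot(b[:len(past)], past)    # cancel post-cursor ISI
        dec[k] = QPSK[np.argmin(np.abs(QPSK - est[k]))]
    return est

def bidirectional_dfe(r, g):
    """Whiten and equalize the signal forwards and time-reversed,
    re-reverse the second branch, and average the two estimates."""
    n = len(r)
    b = g[1:] / g[0]                       # feedback taps (g assumed monic)
    est_fwd = dfe(np.convolve(g, r)[:n], b)
    est_rev = dfe(np.convolve(g, r[::-1])[:n], b)[::-1]
    return 0.5 * (est_fwd + est_rev)

# Noise-free sanity check: both branches recover the symbols exactly.
rng = np.random.default_rng(4)
s = QPSK[rng.integers(0, 4, 64)]
combined = bidirectional_dfe(s, np.array([1.0, 0.4]))
```

With noise present, the two branches make partly independent decision errors, which is what motivates combining them.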
(90) In some embodiments, block 806 includes subtracting the symbols decision from the received signal to generate noise estimates; and applying LPC to the noise estimates to generate the predicted noise. In further embodiments, applying LPC includes calculating:
{tilde over (z)}[n]=q.sub.1{overscore (z)}[n−1]+q.sub.2{overscore (z)}[n−2]+ . . . +q.sub.M{overscore (z)}[n−M],
(91) where {tilde over (z)} denotes the predicted noise, {overscore (z)} denotes the noise estimates, and q=q.sub.1, q.sub.2 . . . q.sub.M denotes a prediction filter of length M.
(92) In some embodiments, the example operations 800 also include regenerating the symbols of the received signal using the second set of LLRs.
(93) In some embodiments, the decoding loop indicated at 814 includes one or more iterations of a decoding loop that sends the second set of LLRs as a-priori information for a subsequent iteration of equalization different from the DFE applied to the first filtered signal.
(94)
(95) The example operations 900 are illustrative of an example embodiment. Various ways to perform the illustrated operations, as well as examples of other operations that may be performed, are described herein. Further variations may be or become apparent.
(96) For example, in some embodiments, the block 902 includes generating a soft symbol decision based on the received symbol. In other embodiments, the block 902 includes generating a hard symbol decision based on the received symbol.
(97) In some embodiments, block 904 includes subtracting the symbols decision from the received signal to generate noise estimates and applying LPC to the noise estimates to generate the predicted noise. In further embodiments, applying LPC includes calculating:
{tilde over (z)}[n]=q.sub.1{overscore (z)}[n−1]+q.sub.2{overscore (z)}[n−2]+ . . . +q.sub.M{overscore (z)}[n−M],
(98) where {tilde over (z)} denotes the predicted noise, {overscore (z)} denotes the noise estimates, and q=q.sub.1, q.sub.2 . . . q.sub.M denotes a prediction filter of length M.
(99) In some embodiments, the example operations 900 also include regenerating the symbols of the received signal using the second set of LLRs.
(100) In some embodiments, the decoding loop indicated at 912 includes one or more iterations of a decoding loop that sends the second set of LLRs as a-priori information for a subsequent iteration of equalization different from LPC.
(101) What has been described is merely illustrative of the application of the principles of the disclosure. Other arrangements and methods can be implemented by those skilled in the art without departing from the spirit and scope of the present disclosure.
(102) Hardware implementations of any block, module, component, or device exemplified herein may include electrical or optical circuitry, such as integrated circuits, printed circuit boards, discrete circuits, analog circuits, digital circuits and any combination thereof.
(103) Moreover, any block, module, component, or device exemplified herein may include software or firmware that executes instructions, and may include or otherwise have access to a non-transitory computer/processor readable storage medium or media for storage of information, such as computer/processor readable instructions, data structures, program modules, and/or other data. A non-exhaustive list of examples of non-transitory computer/processor readable storage media includes magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, optical disks such as compact disc read-only memory (CD-ROM), digital video discs or digital versatile disc (DVDs), Blu-ray Disc, or other optical storage, volatile and non-volatile, removable and nonremovable media implemented in any method or technology, random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology. Any such non-transitory computer/processor storage media may be part of a device or accessible or connectable thereto. Any application or module herein described may be implemented using computer/processor readable/executable instructions that may be stored or otherwise held by such non-transitory computer/processor readable storage media.