Method and System for Decoding a Signal at a Receiver in a Multiple Input Multiple Output (MIMO) Communication System
20240187049 · 2024-06-06
CPC classification
H04B7/0456
ELECTRICITY
Abstract
A method and an apparatus for decoding a signal at a receiver in a MIMO communication system are described. A signal y is obtained over a channel from a plurality of transmitters in communication with the receiver, the signal y including data signals transmitted on a plurality of layers N. A concatenated matrix R representing the channel between the plurality of transmitters and the receiver is obtained based on an estimated channel matrix H. An ordered list is determined based at least on the signal y and the obtained concatenated matrix R. The ordered list is a list of N-dimensional vectors, each vector being a candidate constellation point for the transmitted data signal based on a predefined metric, and is determined using a list search block configured to implement a machine learning algorithm.
Claims
1. A receiver of a multiple input multiple output communication system, the communication system comprising a plurality of transmitters and a communication channel, the receiver comprising: at least one processor; and at least one non-transitory memory storing instructions that, when executed with the at least one processor, cause the receiver to perform: obtaining a signal y over the channel from the plurality of transmitters in communication with the receiver, wherein the signal y comprises data signals transmitted on a plurality of layers N; obtaining a concatenated matrix R, representing the channel between the plurality of transmitters and the receiver, wherein the concatenated matrix R is obtained based on an estimated channel matrix H; and determining an ordered list ℒ, based at least on the signal y and the obtained concatenated matrix R, wherein the ordered list ℒ is a list of N-dimensional vectors and each vector is a candidate constellation point for the transmitted data signal based on a predefined metric; wherein the determining of the ordered list ℒ comprises a list search block wherein the instructions, when executed with the at least one processor, implement a machine learning algorithm.
2. The receiver according to claim 1, wherein the instructions, when executed with the at least one processor, cause the receiver to obtain the concatenated matrix R with a QR decomposition of the estimated channel matrix H.
3. The receiver according to claim 1, wherein the instructions, when executed with the at least one processor, cause the receiver to perform: determining an ordered list ℒ.sub.N at a root layer of the plurality of layers N; and determining a plurality of ordered lists ℒ.sub.l for the other layers of the plurality of layers N, wherein the ordered list ℒ.sub.l being determined is based on the ordered list ℒ.sub.l+1, where l=N−1, N−2, . . . , 1.
4. The receiver according to claim 1, wherein the instructions, when executed with the at least one processor, cause the receiver to perform: obtaining one or more input parameters, wherein the one or more input parameters comprise at least an N.sup.th element of the signal y, a constellation size M, and a number of surviving candidates K.sub.N; determining a plurality of partial symbol vectors, wherein the determined partial symbol vectors are the K.sub.N surviving candidates for the root layer, based at least on the one or more input parameters; determining a first smallest possible window using a trained machine learning model, wherein the first smallest possible window comprises a plurality of constellation points K.sub.N for a signal y derived from the N.sup.th element of the signal y, wherein the plurality of constellation points K.sub.N comprise the K.sub.N closest constellation points to the signal y; determining partial Euclidean distances between the signal y and the plurality of constellation points K.sub.N in the first smallest possible window; and sorting the plurality of constellation points K.sub.N based on the determined partial Euclidean distances, and thereby determining the ordered list ℒ.sub.N.
5. The receiver according to claim 1, wherein the determined partial Euclidean distances associated with the plurality of constellation points K.sub.N in the ordered list ℒ.sub.N are stored in a list 𝒟.sub.N.
6. The receiver according to claim 1, wherein the first smallest possible window for the K.sub.N surviving candidates is represented with a class indicated with integers L.sub.y, R.sub.y, B.sub.y, and T.sub.y, wherein L.sub.y represents the number of constellation points to the left of the closest constellation point in the smallest possible window, R.sub.y represents the number of constellation points to the right of the closest constellation point in the smallest possible window, B.sub.y represents the number of constellation points below the closest constellation point in the smallest possible window, and T.sub.y represents the number of constellation points above the closest constellation point in the smallest possible window.
7. The receiver according to claim 1, wherein the instructions, when executed with the at least one processor, cause the receiver to perform: obtaining one or more input parameters, wherein the one or more input parameters comprise at least an l.sup.th element of the signal y, the constellation size M, and a number of surviving candidates K.sub.l for layer l, where l=N−1, N−2, . . . , 1; obtaining an ordered list ℒ.sub.l and a list 𝒟.sub.l from the surviving candidate in the ordered list ℒ.sub.l+1, where l=N−1, N−2, . . . , 1; determining a closest constellation point for the surviving candidate in the ordered list ℒ.sub.l+1; determining a partial symbol vector for the determined closest constellation point; determining partial Euclidean distances based on the determined partial symbol vectors; sorting the ordered list ℒ.sub.l+1, wherein the ordered list ℒ.sub.l+1 is sorted in ascending order of the determined partial Euclidean distances; determining a smallest possible window, using the trained machine learning model, for the element in the sorted ordered list ℒ.sub.l+1, wherein the smallest possible window comprises a plurality of constellation points K.sub.l,i, where i=1, 2, . . . , K.sub.l+1; determining partial Euclidean distances between a function of the l.sup.th element of the signal y and the plurality of constellation points K.sub.l,i; and sorting the plurality of constellation points K.sub.l,i in the smallest possible window based on the determined partial Euclidean distances, and thereby determining the ordered list ℒ.sub.l.
8. The receiver according to claim 1, wherein the determined partial Euclidean distances associated with the plurality of constellation points K.sub.l,i in the ordered list ℒ.sub.l are stored in a list 𝒟.sub.l, where l=N−1, N−2, . . . , 1.
9. The receiver according to claim 1, wherein the smallest possible window for the K.sub.l surviving candidates is represented with a class indicated with integers L.sub.y, R.sub.y, B.sub.y, and T.sub.y, wherein L.sub.y represents a number of constellation points to the left of the closest constellation point in the smallest possible window, R.sub.y represents a number of constellation points to the right of the closest constellation point in the smallest possible window, B.sub.y represents a number of constellation points below the closest constellation point in the smallest possible window, and T.sub.y represents a number of constellation points above the closest constellation point in the smallest possible window.
10. The receiver according to claim 1, wherein the instructions, when executed with the at least one processor, cause the receiver to perform log-likelihood ratio computation on the determined ordered list ℒ.
11. The receiver according to claim 1, wherein the instructions, when executed with the at least one processor, cause the receiver to perform pre-processing of the signal y using one or more pre-processing techniques, wherein the one or more pre-processing techniques comprise at least a noise-whitening technique and a QR decomposition technique.
12. The receiver according to claim 1, wherein the instructions, when executed with the at least one processor, cause the receiver to perform training of the machine learning model based at least on a training data set with input features which are functions of a one-dimensional complex-valued signal y, the constellation size M, and the number of surviving candidates K.sub.N.
13. A multi-user multiple input multiple output communication system, the multi-user multiple input multiple output communication system comprising a plurality of transmitters, a receiver, and a multiple input multiple output channel, wherein the receiver is implemented according to claim 1.
14. A method for decoding a signal y at a receiver in a multiple input multiple output communication system, the method comprising: obtaining a signal y over a channel from a plurality of transmitters in communication with the receiver, wherein the signal y comprises data signals transmitted on a plurality of layers N; obtaining a concatenated matrix R, representing the channel between the plurality of transmitters and the receiver, wherein the concatenated matrix R is obtained based on an estimated channel matrix H; and determining an ordered list ℒ, based at least on the signal y and the obtained concatenated matrix R, wherein the ordered list ℒ is a list of N-dimensional vectors and each vector is a candidate constellation point for the transmitted data signal based on a predefined metric, wherein determining the ordered list ℒ comprises a list search block configured to implement a machine learning algorithm.
15. The method of claim 14, wherein the concatenated matrix R is obtained with a QR decomposition of the estimated channel matrix H.
16. The method of claim 14, wherein determining the ordered list ℒ further comprises: determining an ordered list ℒ.sub.N at a root layer of the plurality of layers N; and determining a plurality of ordered lists ℒ.sub.l for other layers of the plurality of layers N, wherein the ordered list ℒ.sub.l being determined is based on the ordered list ℒ.sub.l+1, where l=N−1, N−2, . . . , 1.
17. The method of claim 14, wherein determining the ordered list ℒ.sub.N, at the root layer, comprises: obtaining one or more input parameters, wherein the one or more input parameters comprise at least an N.sup.th element of the signal y, a constellation size M, and a number of surviving candidates K.sub.N; determining a plurality of partial symbol vectors, wherein the determined partial symbol vectors are the K.sub.N surviving candidates for the root layer, based at least on the one or more input parameters; determining a first smallest possible window using a trained machine learning model, wherein the first smallest possible window comprises a plurality of constellation points K.sub.N for a signal y derived from the N.sup.th element of the signal y, wherein the plurality of constellation points K.sub.N comprise the K.sub.N closest constellation points to the signal y; determining partial Euclidean distances between the signal y and the plurality of constellation points K.sub.N in the first smallest possible window; and sorting the plurality of constellation points K.sub.N based on the determined partial Euclidean distances, and thereby determining the ordered list ℒ.sub.N.
18. The method of claim 14, wherein determining the plurality of ordered lists ℒ.sub.l, for the other layers of the plurality of layers, comprises: obtaining one or more input parameters, wherein the one or more input parameters comprise at least an l.sup.th element of the signal y, a constellation size M, and a number of surviving candidates K.sub.l for layer l, where l=N−1, N−2, . . . , 1; obtaining an ordered list ℒ.sub.l and a list 𝒟.sub.l from the surviving candidate in the ordered list ℒ.sub.l+1, where l=N−1, N−2, . . . , 1; determining a closest constellation point for the surviving candidate in the ordered list ℒ.sub.l+1; determining a partial symbol vector for the determined closest constellation point; determining partial Euclidean distances based on the determined partial symbol vectors; sorting the ordered list ℒ.sub.l+1, wherein the ordered list ℒ.sub.l+1 is sorted in ascending order of the determined partial Euclidean distances; determining a smallest possible window, using the trained machine learning model, for the element in the sorted ordered list ℒ.sub.l+1, wherein the smallest possible window comprises a plurality of constellation points K.sub.l,i, where i=1, 2, . . . , K.sub.l+1; determining partial Euclidean distances between a function of the l.sup.th element of the signal y and the plurality of constellation points K.sub.l,i; and sorting the plurality of constellation points K.sub.l,i in the smallest possible window based on the determined partial Euclidean distances, and thereby determining the ordered list ℒ.sub.l.
19. The method of claim 14, wherein the receiver is configured to train the machine learning model at least on a training data set with input features which are functions of a one-dimensional complex-valued signal y, the constellation size M, and the number of surviving candidates K.sub.N.
20. A non-transitory program storage device readable with an apparatus, tangibly embodying a program of instructions executable with the apparatus for performing operations for a receiver of a multiple input multiple output communication system, the operations comprising: obtaining a signal y over a channel from a plurality of transmitters in communication with the receiver, wherein the signal y comprises data signals transmitted on a plurality of layers N; obtaining a concatenated matrix R, representing the channel between the plurality of transmitters and the receiver, wherein the concatenated matrix R is obtained based on an estimated channel matrix H; and determining an ordered list ℒ, based at least on the signal y and the obtained concatenated matrix R, wherein the ordered list ℒ is a list of N-dimensional vectors and each vector is a candidate constellation point for the transmitted data signal based on a predefined metric, wherein the determining of the ordered list ℒ comprises a list search block configured to implement a machine learning algorithm.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0058] Further embodiments, details, advantages, and modifications of the present embodiments will become apparent from the following detailed description of the embodiments, which is to be taken in conjunction with the accompanying drawings.
DETAILED DESCRIPTION
[0071] Some embodiments of this disclosure, illustrating its features, will now be discussed in detail. The words "comprising," "having," "containing," and "including," and other forms thereof, are intended to be equivalent in meaning and open ended, in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to the listed item or items.
[0072] It should also be noted that, as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Although any apparatus and method similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present disclosure, exemplary apparatus and methods are now described.
[0073] Embodiments of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings in which like numerals represent like elements throughout the several figures, and in which embodiments are shown. Embodiments of the claims may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. The examples set forth herein are non-limiting examples and are merely examples among other possible examples.
[0074] An embodiment of the present disclosure and its potential advantages are understood by referring to the accompanying drawings.
[0076] Each of the UE transmitters 202 may comprise at least one transmitting antenna, and the base station MIMO receiver 204 may comprise at least one receiving antenna. In one example, each of the UE transmitters 202 may comprise multiple transmitting antennas and the base station MIMO receiver 204 may comprise multiple receiving antennas. It should be noted that the UE transmitter 202 may be referred to as, and/or may include some or all of the functionality of, a user equipment (UE), a mobile station (MS), a terminal, an access terminal, a subscriber unit, a station, etc. Examples of the UE transmitter 202 may include, but are not limited to, cellular phones, smartphones, personal digital assistants (PDAs), wireless devices, electronic automobile consoles, sensors, or laptop computers. It should be noted that the base station MIMO receiver 204 may be hereinafter referred to as a base station. In one embodiment, the base station may serve the UEs.
[0077] Further, each one of the UE transmitter 202 may communicate with the base station MIMO receiver 204, via a channel 206. In one embodiment, the channel 206 may be a wireless MIMO channel. Additionally, the channel 206 between the UE transmitter 202 and the base station MIMO receiver 204 may have a status or a state. Further, the status of the channel 206 may vary over time and may be described by one or more properties of the channel 206. It should be noted that properties of the channel 206 may, for example, comprise a channel gain, a channel phase, a signal-to-noise ratio (SNR), a received signal strength indicator (RSSI), or a transfer matrix. In one embodiment, the channel 206 may corrupt the signal being transmitted over the channel 206.
[0078] It will be apparent to one skilled in the art that the above-mentioned components of the MU-MIMO communication system 200 have been provided only for illustration purposes. The MU-MIMO communication system 200 may include a plurality of receivers as well, without departing from the scope of the disclosure.
[0080] Further, the data stream 208 from each of the plurality of UEs may undergo channel coding 210. Additionally, the plurality of user equipment (UE) UE.sub.1, UE.sub.2, . . . , UE.sub.N.sub.u may modulate the coded data streams before transmission over the channel 206.
[0081] In one embodiment, UE.sub.k transmits N.sub.t.sup.(k) modulated data streams. It should be noted that the data streams 208 may be referred to as layers. In one embodiment, at least two data streams may be received at the base station MIMO receiver 204, which may be from the same UE or different UEs. Further, the data stream of UE.sub.k may be represented by s.sub.k ∈ Q.sup.N.sup.t.sup.(k), where Q denotes the constellation alphabet.
[0082] Based on the received data streams 208, the base station MIMO receiver 204 may send a signal y.sub.r to a joint receiver and equalizer 214. Further, the joint receiver and equalizer 214 may receive a channel estimation signal from a channel estimator 216. Successively, the joint receiver and equalizer 214 may process the signal y.sub.r and the channel estimation signal. In one embodiment, y.sub.r may be represented as

y.sub.r = Hs + n.sub.int + n.sub.AWGN

[0083] where y.sub.r ∈ ℂ.sup.N.sup.r is the received signal vector, H ∈ ℂ.sup.N.sup.r.sup.×N is the channel matrix, and n.sub.AWGN is the additive white Gaussian noise at the base station MIMO receiver 204.

[0084] Further, n.sub.int is the co-channel interference from other cells, and s=[s.sub.1.sup.T, s.sub.2.sup.T, . . . , s.sub.N.sub.u.sup.T].sup.T is the stacked transmit symbol vector of the N.sub.u UEs.
[0085] Further, the processed information may be sent to a channel decoder 218. It should be noted that the processed information may be referred to as soft information. In one embodiment, the soft information may be based on the received signal y.sub.r. The channel decoder 218 may classify the data related to each user equipment UE.sub.1, UE.sub.2, . . . , UE.sub.N.sub.u.
[0086] It will be apparent to one skilled in the art that the above-mentioned uplink transmission scenario in a massive MIMO communication system 200 has been provided only for illustration purposes. In one embodiment, additional impairments may be added on top of this scenario due to hardware, without departing from the scope of the disclosure.
[0088] At first, the joint receiver and equalizer 214 may receive the signal y.sub.r and a channel estimation signal H.sub.est. In one embodiment, y.sub.r and H.sub.est may be received by the pre-processing block 302. In one embodiment, the plurality of user equipment (UE) UE.sub.1, UE.sub.2, . . . , UE.sub.N.sub.u may be in communication with the base station MIMO receiver 204.
[0089] At the pre-processing block 302, N.sub.r is the number of received inputs at the base station MIMO receiver 204 and N.sub.t.sup.(k) is the number of transmit antennas at UE.sub.k, k=1, 2, . . . , N.sub.u. In one embodiment, the pre-processing stage may comprise noise-whitening and QR decomposition. At first, the noise-whitening of the signal y.sub.r may suppress the effects of unwanted interference in the signal y.sub.r. In one embodiment, noise-whitening may whiten the interference-plus-noise associated with the signal y.sub.r, which may require an estimation of the interference covariance matrix, denoted as:

C = 𝔼{(y.sub.r − Hs)(y.sub.r − Hs).sup.H},

where C is obtained by averaging over pilot/reference symbol locations within a code block transmitted by the UEs. Further, noise whitening may comprise performing a Cholesky decomposition, denoted by C = LL.sup.H. Thereafter, the received signal is multiplied by L.sup.−1 to provide an effective received signal with whitened noise.
[0090] Thereafter, a QR decomposition of the whitened channel matrix H̃ ∈ ℂ.sup.N.sup.r.sup.×N is performed, giving H̃=QR, where Q has orthonormal columns and R ∈ ℂ.sup.N×N is an upper-triangular matrix. Further, the received signal y may be computed as y = Q.sup.H ỹ, where ỹ is the whitened received signal, so that

y = Rs + n, y ∈ ℂ.sup.N×1, R ∈ ℂ.sup.N×N, s ∈ Q.sup.N×1, n ∈ ℂ.sup.N×1,

where n = Q.sup.H ñ is the effective noise and Q is the M-QAM constellation

Q = {a+jb | a, b = −√M+1, −√M+3, . . . , √M−3, √M−1}, j = √(−1).
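The pre-processing chain described above (noise whitening via a Cholesky factor of the interference covariance, followed by QR decomposition) can be sketched in a few lines of NumPy. This is a minimal illustration under assumed array shapes; the function name and interfaces are illustrative, not from the patent:

```python
import numpy as np

def preprocess(y_r, H, C):
    """Whiten the noise, then QR-decompose the whitened channel.

    y_r : (Nr,) received vector, H : (Nr, N) channel estimate,
    C   : (Nr, Nr) interference-plus-noise covariance estimate.
    Returns the rotated signal y and upper-triangular R with y = R s + n.
    """
    L = np.linalg.cholesky(C)      # C = L L^H
    Linv = np.linalg.inv(L)
    y_w = Linv @ y_r               # whitened received signal
    H_w = Linv @ H                 # whitened channel
    Q, R = np.linalg.qr(H_w)       # H_w = Q R, R upper-triangular
    y = Q.conj().T @ y_w           # rotate: y = R s + n
    return y, R
```

With C equal to the identity, the whitening step is a no-op and the sketch reduces to an ordinary QR-based MIMO front end.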
[0091] As shown in the figure, the joint receiver and equalizer 214 may comprise the pre-processing block 302, a list search block (LSB) 304, and an LLR computation block 306.
[0092] Further, the output of the LSB 304 may include an ordered list ℒ. The ordered list ℒ may be defined as:

ℒ = {s.sub.i ∈ Q.sup.N×1, i=1, 2, . . . , K | d(s.sub.1) ≤ d(s.sub.2) ≤ . . . ≤ d(s.sub.K)}

where, for all s ∈ ℒ and all s̃ ∉ ℒ, d(s) ≤ d(s̃), with d(s) ≜ ∥y − Rs∥; that is, ℒ contains the K vectors belonging to Q.sup.N×1 with the least distance metrics given by d(s), and if d(s.sub.j)=d(s.sub.k), s.sub.j is placed before s.sub.k if j<k. In one embodiment, for a received signal y ≜ [y.sub.1, y.sub.2, . . . , y.sub.N].sup.T, the associated vector is s ≜ [s.sub.1, s.sub.2, . . . , s.sub.N].sup.T, and r.sub.i,j denotes the (i, j).sup.th element of R. It should be noted that R is based on a channel estimate. Further, a partial received vector may be represented by y.sub.i:N ≜ [y.sub.i, . . . , y.sub.N].sup.T and a partial symbol vector may be represented by s.sub.i:N ≜ [s.sub.i, . . . , s.sub.N].sup.T, where 1≤i≤N. Furthermore, R.sub.i:N denotes the submatrix of R comprising rows i to N and columns i to N, and r.sub.i:N ≜ [r.sub.i,i, r.sub.i,i+1, . . . , r.sub.i,N] denotes the sub-vector of row i of R consisting of the elements from columns i to N. Thus, a partial Euclidean distance (PED) at layer i is denoted by:

λ(s.sub.i:N) = ∥y.sub.i:N − R.sub.i:N s.sub.i:N∥,

which satisfies d(s) = λ(s.sub.1:N) and admits the layer-by-layer recursion

λ.sup.2(s.sub.i:N) = λ.sup.2(s.sub.i+1:N) + ∥y.sub.i − r.sub.i,i s.sub.i − r.sub.i,i+1:N s.sub.i+1:N∥.sup.2
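The PED recursion above can be checked numerically; the following sketch (illustrative, with 1-indexed layers as in the text) accumulates the per-layer increment and, at i=1, reproduces the full metric d²(s) = ∥y − Rs∥² for an upper-triangular R:

```python
import numpy as np

def ped_sq(y, R, s, i):
    """Squared partial Euclidean distance at layer i (1-indexed, as in the text):
    lambda^2(s_{i:N}) = lambda^2(s_{i+1:N}) + |y_i - r_{i,i} s_i - r_{i,i+1:N} s_{i+1:N}|^2,
    with lambda^2(s_{N:N}) = |y_N - r_{N,N} s_N|^2 at the root layer."""
    N = len(s)
    k = i - 1                         # 0-based index of layer i
    if i == N:                        # root layer
        return abs(y[k] - R[k, k] * s[k]) ** 2
    inc = abs(y[k] - R[k, k] * s[k] - R[k, k + 1:] @ s[k + 1:]) ** 2
    return ped_sq(y, R, s, i + 1) + inc
```

Because R is upper triangular, the increment at layer i involves only the already-fixed symbols s_{i:N}, which is what makes the breadth-first list search possible.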
[0093] It should be noted that the ordered list ℒ may comprise the constellation symbols with the smallest PEDs for a target layer. Further, the layers may be classified into a root layer and remaining layers. In one embodiment, for 10 users with one transmitting antenna each, the number of layers may be 10. Thus, among the 10 layers, layer 10 may be considered the root layer, and layers 9 to 1 may be considered the remaining layers.
[0094] Further, for each layer l, 1≤l<N, an ordered list of partial symbol vectors with the K.sub.l smallest PEDs is determined based on the surviving partial symbol vectors from layer l+1. It should be noted that the algorithm ends upon reaching layer 1, where a final ordered list of K.sub.1=K symbol vectors is obtained as output. In one embodiment, an upper threshold for K.sub.l may be pre-set and may be determined based on |r.sub.l,l|. The value of |r.sub.l,l| is inversely proportional to the value of K.sub.l, for near-optimal detection of the signal y.
[0095] Such usage of the LSB 304 in a receiver improves the receiver performance in terms of block error rate (BLER) with low decoding latency and hardware power consumption. Consequently, it should be noted that system throughput may be improved.
[0096] Further, the ordered list ℒ may be transmitted to the LLR computation block 306. It should be noted that the ordered list ℒ is a list of N-dimensional vectors and each vector is a candidate constellation point for the transmitted signal based on a predefined metric. It should be noted that the predefined metric is the Euclidean norm of y−Rs, which is the difference between the signal vector y and Rs, where s is the candidate point.
[0097] The output of the LLR computation block 306 may be referred to as soft information. Further, the soft information may be fed to the channel decoder 218. In one embodiment, the channel decoder 218 may be, but is not limited to, a belief propagation decoder, a polar list decoder, a Turbo decoder, or a convolutional decoder. In one embodiment, the channel decoder 218 may decode signals corresponding to each UE. Furthermore, it should be noted that both the quality of the computed LLRs and the complexity increase with an increasing number of candidate points. In one embodiment, since each symbol in the vector s may take a value from an M-QAM constellation, there may be a total of N log.sub.2 M bits transmitted at a time. Further, let 𝒮.sub.j.sup.+ denote the set of all symbol vectors s ∈ Q.sup.N×1 that are mapped such that the j.sup.th bit is 1, and 𝒮.sub.j.sup.− the set of all symbol vectors s ∈ Q.sup.N×1 mapped such that the j.sup.th bit is 0. Then, the max-log LLR for the j.sup.th bit is computed as:

L(x.sub.j|y) = min.sub.s∈𝒮.sub.j.sup.−{∥y−Rs∥.sup.2} − min.sub.s∈𝒮.sub.j.sup.+{∥y−Rs∥.sup.2}, j=1, 2, . . . , N log.sub.2 M.
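In practice the receiver evaluates these minima only over the candidate list ℒ produced by the LSB 304, rather than over all of Q.sup.N×1. A sketch of that list-based max-log computation follows; the bit labeling of each candidate is assumed to be supplied by a demapper, and the interface is illustrative, not the patent's:

```python
import numpy as np

def llr_from_list(bit_labels, dists, bits_per_vec):
    """Max-log LLRs from a candidate list.

    bit_labels   : per-candidate bit arrays of length bits_per_vec
                   (hypothetical labeling supplied by a demapper).
    dists        : squared distances ||y - R s||^2 per candidate.
    Returns L_j = min_{bit_j=0} d  -  min_{bit_j=1} d for each bit j.
    """
    llrs = np.empty(bits_per_vec)
    for j in range(bits_per_vec):
        d0 = min((d for b, d in zip(bit_labels, dists) if b[j] == 0), default=np.inf)
        d1 = min((d for b, d in zip(bit_labels, dists) if b[j] == 1), default=np.inf)
        llrs[j] = d0 - d1
    return llrs
```

When the list misses every candidate with a given bit value, the corresponding minimum is infinite; practical receivers clamp such LLRs to a saturation value.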
[0099] It should be noted that the disclosed operation may be performed in each layer of the proposed algorithm. The list-search operation is facilitated by use of a pre-trained machine learning (ML) block 402. Further, for N layers, the algorithm begins at layer N and sequentially progresses to layers N−1, N−2, . . . , 1. In one embodiment, the layers l, l−1, and l−2 may be considered, where l is a general layer index and may take any value from N to 3. It should be noted that the root layer may be referred to as layer N and the remaining layers may be referred to as layers l, where l=N−1, N−2, . . . , 1, hereinafter.
[0100] At first, at layer l, K.sub.l surviving candidates, at 404, may progress to the subsequent layer, i.e., layer l−1. Further, the algorithm may select the K.sub.l surviving candidates in each layer by calculating fewer than K.sub.lM distances, due to the presence of the pre-trained ML block 402. In one embodiment, the K.sub.l surviving candidates may refer to the partial symbol vectors. In addition to the K.sub.l surviving candidates in each layer l, the pre-trained ML block 402 may provide a smallest possible window of constellation points at each layer l−1.
[0101] The pre-trained ML block 402 may determine a smallest window at each layer of the proposed algorithm. It should be noted that the use of the pre-trained ML block 402 may provide a window of constellation points whose size is much smaller than M. In one embodiment, the pre-trained ML block 402 outputs the smallest possible window of points from an M-QAM constellation that contains the closest points to a candidate point corresponding to the signal y. In one embodiment, the pre-trained ML block 402 may output K.sub.l−1,1 points, at layer l−1, to compute, at 406-1, along with an ordered 1.sup.st survivor from the previous layer l. Further, the pre-trained ML block 402 may output K.sub.l−1,2 points, at layer l−1, to compute, at 406-2, along with an ordered 2.sup.nd survivor from the previous layer l. It should be noted that the process may continue till the pre-trained ML block 402 outputs K.sub.l−1,K.sub.l points, at layer l−1, to compute, at 406-K.sub.l, along with an ordered K.sub.l.sup.th survivor from the previous layer l.
[0102] In one embodiment, a candidate point may be a complex point, comprising a real and an imaginary part. Further, the pre-trained ML block 402 may be trained offline to obtain the smallest window as a function of K and y. For a particular candidate point, the K closest points from an M-QAM constellation Q may be identified. In accordance with the candidate point, the closest constellation point may be determined from the M-QAM constellation Q. In one embodiment, the closest constellation point may be represented by a partial symbol vector of the closest constellation point, s.sup.CCP. In one embodiment, the pre-trained ML block 402 may be a classifier. It should be noted that, for a particular layer, the pre-trained ML block 402 may determine the smallest possible window using parameters related to the received signal y and parameters related to the partial symbol vectors associated with the signal y. In one embodiment, for layer N, the pre-trained ML block 402 may determine the smallest possible window using parameters related to the N.sup.th element of the signal y and parameters related to the partial symbol vector s.sub.N.
[0103] In one embodiment, the pre-trained ML block 402 may determine the smallest possible window based on n input parameters. Further, the n input parameters may be functions of the signal y and functions of the partial symbol vector of the closest constellation point to y from Q, denoted by s.sup.CCP. In one embodiment, at a layer l, for a candidate point y.sub.l with closest constellation point s.sup.CCP(y.sub.l), K=16, and 256-QAM defined as

QAM.sub.256 = {a+jb | a, b ∈ {−15, −13, −11, . . . , 11, 13, 15}},

the n input parameters may be at least among a group of parameters consisting of, but not limited to, real and imaginary functions of the signal y and s.sup.CCP, such as |ℜ(y.sub.l)|, |ℑ(y.sub.l)|, |ℜ(s.sup.CCP(y.sub.l))|, |ℑ(s.sup.CCP(y.sub.l))|, sign(|ℜ(y.sub.l)|−|ℜ(s.sup.CCP(y.sub.l))|), sign(|ℑ(y.sub.l)|−|ℑ(s.sup.CCP(y.sub.l))|), sign(|ℜ(y.sub.l)|−15), sign(|ℜ(y.sub.l)|−13), sign(|ℑ(y.sub.l)|−15), sign(|ℑ(y.sub.l)|−13), sign(ℜ(y.sub.l − s.sup.CCP(y.sub.l))), and sign(ℑ(y.sub.l − s.sup.CCP(y.sub.l))).
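For 256-QAM, the closest constellation point and a feature vector of the kind listed above can be computed as follows. This is an illustrative subset of the features with assumed sign conventions, not the patent's exact feature set or trained classifier:

```python
import numpy as np

def ccp_256qam(y):
    """Closest 256-QAM point to complex y (odd levels -15..15, step 2)."""
    level = lambda v: np.clip(2 * np.round((v - 1) / 2) + 1, -15, 15)
    return level(y.real) + 1j * level(y.imag)

def ml_features(y_l):
    """Illustrative subset of window-classifier input features for one layer."""
    c = ccp_256qam(y_l)
    return np.array([
        abs(y_l.real), abs(y_l.imag),              # |Re(y)|, |Im(y)|
        abs(c.real), abs(c.imag),                  # |Re(sCCP)|, |Im(sCCP)|
        np.sign(abs(y_l.real) - abs(c.real)),      # sign(|Re(y)| - |Re(sCCP)|)
        np.sign(abs(y_l.imag) - abs(c.imag)),      # sign(|Im(y)| - |Im(sCCP)|)
        np.sign(abs(y_l.real) - 15),               # boundary indicators
        np.sign(abs(y_l.imag) - 13),
        np.sign((y_l - c).real),                   # sign(Re(y - sCCP))
        np.sign((y_l - c).imag),                   # sign(Im(y - sCCP))
    ])
```

Features of this form are cheap to evaluate per candidate, which is what makes a per-layer classifier call affordable inside the list search.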
[0104] It should be noted that the usage of the pre-trained ML block 402 eliminates the need to calculate the distance to all M constellation points. Such use of the pre-trained ML block 402 thus reduces the computational complexity.
[0105] Furthermore, the fewer than K.sub.lM calculated distances may be sorted and chosen, at 408, to obtain an ordered list corresponding to the layer l−1. The sorted K.sub.l−1 surviving candidates may then progress to the subsequent layer l−2. Further, the process may be repeated till the surviving candidates reach layer 1. In one embodiment, layer 1 may be referred to as the final layer or the last layer. Finally, the K surviving candidates of the final layer may provide the required ordered list ℒ.
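The layer-by-layer flow above is that of a breadth-first K-best list search. The sketch below keeps K survivors per layer but expands every constellation point at each layer, which is exactly the K.sub.lM-distance cost that the ML window is introduced to avoid; it is an illustrative baseline, not the patent's implementation:

```python
import numpy as np

def kbest_search(y, R, constellation, K):
    """Breadth-first K-best search over y = R s + n with upper-triangular R,
    processing layers N down to 1. Returns the ordered list of
    (squared distance, symbol vector) pairs, best first."""
    N = len(y)
    survivors = [(0.0, [])]                  # (ped^2, partial vector s_{l:N})
    for l in range(N - 1, -1, -1):           # 0-based layer index, N..1
        expanded = []
        for ped, tail in survivors:
            for q in constellation:          # ML window would shrink this set
                s_tail = [q] + tail
                resid = y[l] - R[l, l:] @ np.array(s_tail)
                expanded.append((ped + abs(resid) ** 2, s_tail))
        expanded.sort(key=lambda t: t[0])    # ascending PED
        survivors = expanded[:K]             # keep K best per layer
    return survivors
```

Replacing the inner loop over `constellation` with the ML-predicted window of points is the change the disclosure describes; the surrounding sort-and-prune structure stays the same.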
[0108] In one embodiment, the output of the pre-trained ML block 402 may be referred to as a tuple. Further, the tuple may be represented by (L.sub.y, R.sub.y, T.sub.y, B.sub.y). The tuple represents the smallest possible window 602: L.sub.y represents the number of points in the smallest possible window 602 to the left of s.sup.CCP (on the real axis), R.sub.y the number of points to the right of s.sup.CCP, B.sub.y the number of points below s.sup.CCP (on the imaginary axis), and T.sub.y the number of points above s.sup.CCP. The smallest possible window 602 may then be written as

𝒲 = {ℜ(s.sup.CCP) + 2m + j(ℑ(s.sup.CCP) + 2n)}

where m ∈ {−L.sub.y, −L.sub.y+1, . . . , 0, 1, . . . , R.sub.y} and n ∈ {−B.sub.y, −B.sub.y+1, . . . , 0, 1, . . . , T.sub.y}. It should be noted that the real and imaginary parts of each point in 𝒲 may be subject to a maximum value of √M−1 and a minimum value of −√M+1.
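Enumerating the window 𝒲 from the tuple (L_y, R_y, T_y, B_y) is then a double loop over grid offsets, with points outside the M-QAM boundary dropped (one possible reading of the boundary rule above; illustrative code, not the patent's implementation):

```python
import numpy as np

def window_points(s_ccp, Ly, Ry, By, Ty, M=256):
    """Constellation points of the window (Ly, Ry, By, Ty) around s_ccp,
    restricted to the M-QAM grid [-sqrt(M)+1, sqrt(M)-1] on each axis."""
    lim = int(np.sqrt(M)) - 1
    pts = []
    for m in range(-Ly, Ry + 1):          # offsets left..right (real axis)
        for n in range(-By, Ty + 1):      # offsets below..above (imag axis)
            re = s_ccp.real + 2 * m
            im = s_ccp.imag + 2 * n
            if -lim <= re <= lim and -lim <= im <= lim:
                pts.append(re + 1j * im)
    return pts
```

Near the constellation edge, part of the nominal (L_y+R_y+1)(B_y+T_y+1) rectangle falls outside the grid and is discarded, so the window shrinks automatically at corners.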
[0110] At first, at the root layer, i.e., layer N, the input parameters associated with the signal y may be obtained, at step 702. In one embodiment, the input parameters may comprise at least the N.sup.th element of the signal y, a constellation size M, and the number of surviving candidates K.sub.N among the constellation size M. It should be noted that the N.sup.th element of the signal y corresponds to the signal y at the root layer, i.e., layer N. Further, a closest constellation point s.sub.N.sup.CCP to the signal y, derived from the N.sup.th element of the signal y based at least on the input parameters, may be determined at step 704. After determining the partial symbol vector s.sub.N.sup.CCP, the pre-trained ML block 402 may determine a first smallest possible window of a plurality of constellation points K.sub.N for the signal y that contains the K.sub.N surviving candidates, using a trained ML model, at step 706. The receiver is configured to train the ML model at least on a training data set with input features which are functions of a one-dimensional complex-valued signal y, the constellation size M, and the number of surviving candidates K.sub.N. As discussed above, the input parameters for the pre-trained ML block 402 are real and imaginary functions of the signal y and of s.sup.CCP. The PED at the root layer may be represented by

λ(s.sub.N) = |y.sub.N − r.sub.N,N s.sub.N| = |r.sub.N,N| |ŷ.sub.N − s.sub.N|, where ŷ.sub.N ≜ y.sub.N/r.sub.N,N.
[0111] In one embodiment, the first smallest possible window may be determined by a design engineer. Based on the determined first smallest possible window, partial Euclidean distances (PEDs) between a candidate point of the signal y and each of the plurality of constellation points in the first smallest possible window may be determined, at step 708. The minimum PED, attained at the closest constellation point s.sub.N.sup.CCP, may be represented as
d.sub.min=|r.sub.NN|·|ỹ.sub.N−s.sub.N.sup.CCP|=|r.sub.NN|·√((ℜ(ỹ.sub.N−s.sub.N.sup.CCP)).sup.2+(ℑ(ỹ.sub.N−s.sub.N.sup.CCP)).sup.2)
[0112] In one embodiment, the first smallest possible window may comprise the plurality of constellation points K.sub.N for the signal y derived from the N.sup.th element of the signal y. Further, the plurality of constellation points K.sub.N in the first smallest possible window may be sorted and chosen based on the determined PEDs, at step 710. Based on the sorted plurality of constellation points, an ordered list ℒ.sub.N may be computed and stored, at step 712. Further, ℒ.sub.N may be represented by
ℒ.sub.N={s.sub.N.sup.(i)∈Q, i=1,2, . . . , K.sub.N|d(s.sub.N.sup.(1))≤d(s.sub.N.sup.(2))≤ . . . ≤d(s.sub.N.sup.(K.sub.N))}.
[0113] In one embodiment, the determined partial Euclidean distances associated with the plurality of constellation points K.sub.N in the ordered list ℒ.sub.N may be stored in a list 𝒟.sub.N. In one embodiment, 𝒟.sub.N may store the squares of the PEDs of all the elements of the ordered list ℒ.sub.N. Further, 𝒟.sub.N may be represented by
𝒟.sub.N={d.sup.2(s.sub.N.sup.(i)), ∀s.sub.N.sup.(i)∈ℒ.sub.N}.
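The root-layer processing of steps 702-712 (equalize the N.sup.th observation, compute squared PEDs over the window, sort, and keep the K.sub.N survivors with their metrics) can be sketched as below. The function name and the explicit `window` argument are assumptions for illustration; in the described system the window would come from the pre-trained ML block 402.

```python
import numpy as np

def root_layer_list(y_N: complex, r_NN: complex, window: list, K_N: int):
    """Root layer (layer N): compute the squared PEDs between the
    equalized observation y_tilde = y_N / r_NN and each constellation
    point in the window, then return the K_N closest points as the
    ordered list L_N together with their squared PEDs D_N."""
    y_tilde = y_N / r_NN
    # squared PED of each window point: |r_NN|^2 * |y_tilde - s|^2
    ped2 = [abs(r_NN) ** 2 * abs(y_tilde - s) ** 2 for s in window]
    order = np.argsort(ped2)[:K_N]       # indices of the K_N smallest PEDs
    L_N = [window[i] for i in order]     # ordered candidate list
    D_N = [ped2[i] for i in order]       # corresponding squared PEDs
    return L_N, D_N
```

The returned pair corresponds to the lists ℒ.sub.N and 𝒟.sub.N stored at step 712.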
[0114]
[0115] At first, at the remaining layers, i.e. layers l, the input parameters associated with the signal y may be obtained. In one embodiment, the input parameters comprise at least the l.sup.th element of the signal y, the constellation size M, and a required number of surviving candidates K.sub.l among the M constellation points, at step 802. Further, the ordered list ℒ.sub.l+1 and the list 𝒟.sub.l+1 of the plurality of constellation points in the smallest possible window may be obtained, at step 804. It should be noted that the ordered list ℒ.sub.l+1 and the list 𝒟.sub.l+1 are associated with the previous layer, i.e. the root layer in the current scenario. Further, a closest constellation point for each of the K.sub.l+1 surviving candidates in the ordered list ℒ.sub.l+1 may be determined, at step 806. In one embodiment, the K.sub.l+1 surviving candidates in the ordered list ℒ.sub.l+1 may be sent to the pre-trained ML block 402. Further, for each of the closest constellation points, a partial symbol vector s.sub.l may be computed, at step 808. In one embodiment, the partial symbol vector s at the layer l may be represented by s.sub.l:N.sup.(i), and may be written as
s.sub.l:N.sup.(i)=[s.sub.l.sup.CCP(s.sub.l+1:N.sup.(i)), (s.sub.l+1:N.sup.(i)).sup.T].sup.T∈Q.sup.(N−l+1)×1, i=1,2, . . . , K.sub.l+1.
[0116] Further, the squares of the PEDs may be computed and may be represented as d.sup.2(s.sub.l:N.sup.(i)), for i=1,2, . . . , K.sub.l+1.
[0117] Thus,
d.sup.2(s.sub.l:N.sup.(i))=d.sup.2(s.sub.l+1:N.sup.(i))+|r.sub.l,l|.sup.2·|ỹ.sub.l(s.sub.l+1:N.sup.(i))−s.sub.l.sup.CCP(s.sub.l+1:N.sup.(i))|.sup.2.
[0118] For each of the closest constellation points, partial Euclidean distances (PEDs) between a candidate point of the l.sup.th element of the signal y and each determined closest constellation point, based on the partial symbol vector, may be computed, at step 810. Further, the list ℒ.sub.l+1 may be sorted based on the determined PEDs, at step 812, and the K.sub.l smallest PEDs, with indices i.sub.1, i.sub.2, . . . , i.sub.K.sub.l, may be retained.
[0119] In one embodiment, a window of size K.sub.l,i<<M may be computed using the pre-trained ML block 402 with ỹ.sub.l(s.sub.l+1:N.sup.(i)) as input for each retained index i∈{i.sub.1, . . . , i.sub.K.sub.l}, and the squared PEDs
d.sup.2([s.sub.k, (s.sub.l+1:N.sup.(i)).sup.T].sup.T)
may be computed for each constellation point s.sub.k in the window.
[0120] In one embodiment, the computed plurality of constellation points in the smallest possible window may be sorted in ascending order. Based on the sorted plurality of constellation points, a window with K.sub.l,i points may be determined for each element in the sorted list ℒ.sub.l+1, using the output of the pre-trained ML block 402, at step 814. Further, the distances between the K.sub.l,i points for each element with index i in the sorted list and the candidate point of the l+1.sup.th element of the signal y for the current layer may be computed, at step 816. Further, the plurality of closest constellation points K.sub.l,i in the smallest possible window may be sorted and chosen based on the determined PEDs, at step 818. Based on the sorted plurality of constellation points, an ordered list ℒ.sub.l may be computed and stored, at step 820. In one embodiment, ℒ.sub.l may be represented as
ℒ.sub.l={s.sub.l:N.sup.(i)∈Q.sup.(N−l+1)×1, i=1,2, . . . , K.sub.l|d(s.sub.l:N.sup.(1))≤ . . . ≤d(s.sub.l:N.sup.(K.sub.l))}.
[0121] It should be noted that a list 𝒟.sub.l may be created to store metrics associated with the ordered list ℒ.sub.l. In one embodiment, 𝒟.sub.l may store the squares of the PEDs of all the elements of the ordered list ℒ.sub.l, and may be represented as
𝒟.sub.l={d.sup.2(s.sub.l:N.sup.(i)), ∀s.sub.l:N.sup.(i)∈ℒ.sub.l}.
[0122] Further, the above algorithm may be applied to all the remaining layers, leading to the final ordered list represented by:
ℒ={s.sub.i∈Q.sup.N×1, i=1,2, . . . , K|d(s.sub.1)≤d(s.sub.2)≤ . . . ≤d(s.sub.K)}.
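The layer recursion of paragraphs [0115]-[0122] can be sketched as a simplified K-best step. All names here are illustrative, and for brevity the ML-predicted per-candidate window is replaced by a scan over the full constellation Q; the disclosed method would restrict each scan to the small predicted window.

```python
import numpy as np

def extend_layer(y_l, R_row, survivors, dists, Q, K_l):
    """One step of the layer-l recursion: extend each surviving partial
    vector s_{l+1:N} by candidate symbols at layer l, accumulate the
    squared PEDs, and keep the K_l best extended vectors.

    y_l       : l-th element of the received signal
    R_row     : l-th row of the concatenated matrix R, [r_ll, ..., r_lN]
    survivors : list of partial vectors [s_{l+1}, ..., s_N]
    dists     : their squared PEDs d^2(s_{l+1:N})
    Q         : constellation points scanned (stand-in for the ML window)
    K_l       : number of survivors kept at this layer
    """
    r_ll, r_rest = R_row[0], np.asarray(R_row[1:])
    cands = []
    for s_part, d2 in zip(survivors, dists):
        # interference-cancelled, equalized observation y_tilde_l(s_{l+1:N})
        y_tilde = (y_l - r_rest @ np.asarray(s_part)) / r_ll
        for s in Q:
            # d^2 recursion: add |r_ll|^2 * |y_tilde - s|^2 to the parent PED
            new_d2 = d2 + abs(r_ll) ** 2 * abs(y_tilde - s) ** 2
            cands.append(([s] + list(s_part), new_d2))
    cands.sort(key=lambda c: c[1])   # order by squared PED
    kept = cands[:K_l]
    return [c[0] for c in kept], [c[1] for c in kept]
```

Applying this step from layer N−1 down to layer 1 yields the final ordered list ℒ of N-dimensional candidate vectors with non-decreasing metrics.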
[0123] In one embodiment, we consider sklearn's decision tree classifier with criterion gini, class_weight=balanced, and the remaining default parameters. It should be noted that the results for the classifier are tabulated in Table 1. Further, to calculate the window size of each predicted class, the predicted class may be mapped back to the tuple (L.sub.y, R.sub.y, T.sub.y, B.sub.y), and the window size may be calculated as (L.sub.y+R.sub.y+1)(T.sub.y+B.sub.y+1). Further, a misclassification occurs when the predicted (L.sub.y, R.sub.y, T.sub.y, B.sub.y) does not match the optimal (L.sub.y, R.sub.y, T.sub.y, B.sub.y), which represents the smallest window containing the 16 closest points to y.sub.i.
TABLE-US-00001 TABLE 1
K: 16
QAM constellation: 256-QAM
Training data size: 1 Million
Test data size: 160000
Test Accuracy: 99.82 (289/160000 misclassifications)
Maximum number of closest constellation points (out of 16) not in the predicted window (across test samples): 1 (count = 128/289; indicates that the predicted window was smaller than the optimal window)
Minimum number of closest constellation points (out of 16) not in the predicted window (across test samples): 0 (count = 161/289; indicates that the predicted window was larger than the optimal window)
Number of classes (window combinations): 81
Mean window size across classes (class average): 20.70
Mean window size across test samples (sample average): 19.64
Maximum window size: 25
Minimum window size: 16
Maximum depth of the tree: 26 (indicates the maximum number of decisions needed for a sample)
Average depth of leaf node across test samples: 5.7 (average number of decisions needed per sample)
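A minimal sketch of training such a classifier with sklearn, matching the stated hyperparameters (criterion gini, class_weight balanced, remaining defaults). The features and the toy labeling rule below are stand-ins, since the actual training set derived from y, M, and K is not reproduced here; only the hyperparameters and the window-size formula come from the text.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical features: offsets of y from a reference constellation point.
X = rng.uniform(-1.0, 1.0, size=(5000, 2))
# Toy labeling rule standing in for the optimal-window class computation.
y = (X[:, 0] > 0).astype(int) + 2 * (X[:, 1] > 0).astype(int)

# Hyperparameters as stated in the disclosure; everything else is default.
clf = DecisionTreeClassifier(criterion="gini", class_weight="balanced")
clf.fit(X, y)

def window_size(L: int, R: int, T: int, B: int) -> int:
    """Window size of a predicted class mapped back to (L, R, T, B)."""
    return (L + R + 1) * (T + B + 1)
```

In the full pipeline, each predicted class index would be mapped back to its (L.sub.y, R.sub.y, T.sub.y, B.sub.y) tuple before computing the window.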
[0124] In one embodiment, we consider MIMO communication in a 5G cellular network where a Base Station (BS) and UEs are equipped with multiple antennas. Further, the communication is over the uplink (UL) channel, where the BS receives the transmitted data from the UEs. Further, the scenario considers a massive MIMO system with transmitter and receiver beamforming. Table 2 lists the simulation parameters used for the evaluation:
TABLE-US-00002 TABLE 2
Carrier Frequency: 3.5 GHz
Bandwidth: 10 MHz
Scenario: Uplink: single-user/multi-user
BS Physical Antenna Configuration (M×N×P): 8×8×2
(d.sub.H, d.sub.V): (0.5λ, 0.7λ)
BS Antenna ports: 2/4/8
UE Physical Antenna Configuration (M×N×P): (1×1×2)/(2×1×2)
UE Antenna ports: 2/4
Beamforming type: Grid of Beams (GoB)
Channel Model: 3GPP CDL-A channel models
MCS Values: 3GPP MCS Table 1/2
Channel Estimation type: Practical CE
Receiver/Equalizer: IRC/Proposed Algorithm
Simulation time: 5 sec
[0125]
[0126] As shown in the graph 900B, a comparison is presented between a Minimum Mean Square Error-Interference Rejection Combining (MMSE-IRC) receiver and the proposed algorithm for Modulation and Coding Scheme (MCS)=28. A line (shown by 906) represents the MMSE-IRC and a line (shown by 908) represents the proposed algorithm for MCS=28.
[0127] As shown in the graphs 900C and 900D, a comparison is presented between a Minimum Mean Square Error-Interference Rejection Combining (MMSE-IRC) receiver and the proposed algorithm for UE 1 and UE 2, respectively, with Modulation and Coding Scheme (MCS)=16. A line (shown by 910) represents the MMSE-IRC and a line (shown by 912) represents the proposed algorithm for UE 1 with MCS=16.
[0128] Further, a line (shown by 914) represents the MMSE-IRC and a line (shown by 916) represents the proposed algorithm for UE 2 with MCS=16.
[0129] It will be apparent to one skilled in the art that the above-mentioned joint decoding of UEs scheduled in MU-MIMO within a single cell has been provided only for illustration purposes. In one embodiment, additional impairments due to hardware may be added on top of this scenario, without departing from the scope of the disclosure. The above-mentioned joint decoding of UEs scheduled in MU-MIMO may be performed within multiple cells, either in a same cell site (intra-site) or across different cell sites (inter-site). In another embodiment, the joint decoding of UEs scheduled in MU-MIMO may be performed using antennas of one or more cells (as in Coordinated Multipoint, CoMP, operation).
[0130]
[0131] The processor 1002 includes suitable logic, circuitry, and/or interfaces that are operable to execute instructions stored in the memory to perform various functions. The processor 1002 may execute an algorithm stored in the memory for a receiver of a multiple input multiple output, MIMO, communication system 100. The processor 1002 may also be configured to decode and execute any instructions received from one or more other electronic devices or server(s). The processor 1002 may include one or more general-purpose processors (e.g., INTEL® or Advanced Micro Devices® (AMD) microprocessors) and/or one or more special-purpose processors (e.g., digital signal processors or a Xilinx® System On Chip (SOC) Field Programmable Gate Array (FPGA) processor). The processor 1002 may be further configured to execute one or more computer-readable program instructions, such as program instructions to carry out any of the functions described in the description.
[0132] Further, the processor 1002 may make decisions or determinations, generate frames, packets or messages for transmission, decode received frames or messages for further processing, and other tasks or functions described herein. The processor 1002, which may be a baseband processor, for example, may generate messages, packets, frames or other signals for transmission via wireless transceivers. It should be noted that the processor 1002 may control transmission of signals or messages over a wireless network, and may control the reception of signals or messages, etc., via a wireless network (e.g., after being down-converted by wireless transceiver, for example). The processor 1002 may be (or may include), for example, hardware, programmable logic, a programmable processor that executes software or firmware, and/or any combination of these. Further, using other terminology, the processor 1002 along with the transceiver may be considered as a wireless transmitter/receiver system, for example.
[0133] The memory 1004 stores a set of instructions and data. Further, the memory 1004 includes one or more instructions that are executable by the processor to perform specific operations. Some of the commonly known memory implementations include, but are not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, Compact Disc Read-Only Memories (CD-ROMs), and magneto-optical disks, semiconductor memories, such as ROMs, Random Access Memories (RAMs), Programmable Read-Only Memories (PROMs), Erasable PROMs (EPROMs), Electrically Erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, cloud computing platforms (e.g. Microsoft Azure and Amazon Web Services, AWS), or other type of media/machine-readable medium suitable for storing electronic instructions.
[0134] It will be apparent to one skilled in the art that the above-mentioned components of the apparatus 1000 have been provided only for illustration purposes. In one embodiment, the apparatus 1000 may include an input device, output device etc. as well, without departing from the scope of the disclosure.
[0135] Embodiments of the present disclosure may be provided as a computer program product, which may include a computer-readable medium tangibly embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. The computer-readable medium may include, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, Compact Disc Read-Only Memories (CD-ROMs), and magneto-optical disks, semiconductor memories, such as ROMs, Random Access Memories (RAMs), Programmable Read-Only Memories (PROMs), Erasable PROMs (EPROMs), Electrically Erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other type of media/machine-readable medium suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware). Moreover, embodiments of the present disclosure may also be downloaded as one or more computer program products, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
[0136] It should be noted that the order of the method steps described herein is not critical or fixed; the steps may be performed in other orders without departing from the scope of the disclosure. Features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
[0137] While the above embodiments have been illustrated and described, as noted above, many changes can be made without departing from the scope of the embodiments. For example, aspects of the subject matter disclosed herein may be adopted on alternative operating systems. Accordingly, the scope of the embodiments is not limited by the disclosure of the embodiment. Instead, the embodiments should be determined entirely by reference to the claims that follow.