METHOD FOR MASSIVE UNSOURCED RANDOM ACCESS
20220052883 · 2022-02-17
Inventors
- FAOUZI BELLILI (WINNIPEG, CA)
- AMINE MEZGHANI (WINNIPEG, CA)
- VOLODYMYR SHYIANOV (WINNIPEG, CA)
- EKRAM HOSSAIN (WINNIPEG, CA)
Abstract
A method for receiving, at a communications station which has a plurality of antennas, messages from a plurality of users in wireless communication with the communications station, comprising the steps of: (i) receiving, from the users, using the plurality of antennas, time-slotted information chunks formed from the messages of the users, wherein the information chunks are free of any encoded user-identifiers representative of the users transmitting the messages; (ii) after receiving all of the information chunks, estimating vectors representative of respective communication channels between the antennas and the users based on the received information chunks; (iii) grouping the information chunks based on the estimated vectors to form clusters of the information chunks respectively associated with the users; and (iv) recovering the messages of the users from the clusters of the information chunks.
Claims
1. A method for receiving, at a communications station which has a plurality of antennas, messages from a plurality of users in wireless communication with the communications station, the method comprising: receiving, from the users, using the plurality of antennas, time-slotted information chunks formed from the messages of the users, wherein the information chunks are free of any encoded user-identifiers representative of the users transmitting the messages; after receiving all of the information chunks, estimating vectors representative of respective communication channels between the antennas and the users based on the received information chunks; grouping the information chunks based on the estimated vectors to form clusters of the information chunks respectively associated with the users; and recovering the messages of the users from the clusters of the information chunks.
2. The method of claim 1 wherein grouping the information chunks based on the estimated vectors is performed using a Gaussian-mixture expectation-maximization algorithm.
3. The method of claim 1 wherein, when the information chunks are encoded by compressed sensing to reduce a size thereof, the method further includes decoding the information chunks using a hybrid generalized approximate message passing algorithm.
4. The method of claim 1 further including normalizing the vectors representative of respective communication channels before grouping the information chunks based on the estimated vectors.
5. The method of claim 4 wherein normalizing is performed using large-scale fading coefficients estimated using a compressed sensing recovery algorithm.
6. The method of claim 5 wherein the compressed sensing recovery algorithm is a hybrid generalized approximate message passing algorithm.
7. The method of claim 1 wherein estimating vectors representative of respective communication channels comprises applying, to the received information chunks, expectation-maximization and the Hungarian algorithm to determine slot-wise assignment matrices which satisfy constraints including (i) one information chunk per user per slot, and (ii) a common number of the information chunks for each user.
8. The method of claim 7 wherein grouping the information chunks based on the estimated vectors is based on the assignment matrices.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The invention will now be described in conjunction with the accompanying drawings.
[0019] In the drawings like characters of reference indicate corresponding parts in the different figures.
DETAILED DESCRIPTION
[0020] With reference to the accompanying figures, there is described a system model of the novel algorithmic solution, the HyGAMP-based inner CS encoder/decoder, and the clustering-based stitching procedure of the decoded sequences. Performance of the proposed URA scheme is assessed using exhaustive computer simulations.
[0021] Common notations include: lower- and upper-case bold fonts, x and X, are used to denote vectors and matrices, respectively. Upper-case calligraphic font is used to denote random variables as well as sets (depending on the context). The (m,n)th entry of X is denoted as X.sub.mn, and the nth element of x is denoted as x.sub.n. The identity matrix is denoted as I. The operator vec(X) stacks the columns of a matrix X one below the other. The shorthand notation x˜CN(x; m, R) means that the random vector x follows a complex circular Gaussian distribution with mean m and auto-covariance matrix R. Likewise, x˜N(x; m, μ) means that the random variable x follows a Gaussian distribution with mean m and variance μ. Moreover, {⋅}.sup.T and {⋅}.sup.H stand for the transpose and Hermitian (conjugate transpose) operators, respectively. In addition, |⋅| and ∥⋅∥ stand for the modulus and Euclidean norm, respectively. Given any complex number, Re{⋅}, Im{⋅}, and {⋅}* return its real part, imaginary part, and complex conjugate, respectively. The Kronecker delta and the Kronecker product are denoted as δ.sub.m,n and ⊗, respectively. We also denote the probability distribution function (pdf) of a random variable x by p.sub.x(x). The statistical expectation is denoted as E{⋅}, j is the imaginary unit (i.e., j.sup.2=−1), and the notation ≜ is used for definitions.
[0022] System Model and Assumptions
[0023] Consider a single-cell network consisting of K single-antenna devices which are served by a base station located at the center of a cell of radius R. Devices are assumed to be uniformly scattered inside the cell, and we denote by r.sub.k (measured in meters) the distance from the kth device to the base station. This disclosure assumes sporadic device activity, thereby resulting in a small number, K.sub.a<<K, of devices being active over each coherence block. The devices communicate with the base station through an uplink uncoordinated scheme, in which every active device wishes to communicate B bits of information over the channel in a single communication round. The codewords transmitted by active devices are drawn uniformly from a common Gaussian codebook, C={{tilde over (c)}.sub.1, {tilde over (c)}.sub.2, . . . , {tilde over (c)}.sub.2.sub.B}⊂C.sup.n. More precisely, {tilde over (c)}.sub.b˜CN(0, P.sub.tI), where n is the blocklength and P.sub.t is the transmit power. We model the device activity and codeword selection by a set of 2.sup.B K Bernoulli random variables, δ.sub.b,k, for k=1, . . . ,K and b=1, . . . ,2.sup.B, with δ.sub.b,k=1 if and only if device k is active and transmits codeword {tilde over (c)}.sub.b.
[0024] We consider a Gaussian multiple access channel (MAC) with a block fading model and a large-scale antenna array consisting of M.sub.r receive antenna elements at the BS. Assuming the channels remain almost unchanged over the entire transmission period, the uplink received signal at the mth antenna element can be expressed as follows:
{tilde over (y)}.sup.(m)=Σ.sub.k=1.sup.KΣ.sub.b=1.sup.2.sup.Bδ.sub.b,k√{square root over (g.sub.k)}{tilde over (h)}.sub.k,m{tilde over (c)}.sub.b+{tilde over (w)}.sup.(m),m=1, . . . ,M.sub.r. (1)
[0025] The random noise vector, {tilde over (w)}.sup.(m), is modeled by a complex circular Gaussian random vector with independent and identically distributed (i.i.d.) components, i.e., {tilde over (w)}.sup.(m)˜CN(0, σ.sub.w.sup.2I). In addition, {tilde over (h)}.sub.k,m stands for the small-scale fading coefficient between the kth user and the mth antenna. We assume Rayleigh block fading, i.e., the small-scale fading channel coefficients, {tilde over (h)}.sub.k,m˜CN(0,1), remain constant over the entire observation window, which is smaller than the coherence time. Besides, g.sub.k is the large-scale fading coefficient of user k, given by (in dB scale):
g.sub.k[dB]=−α−10βlog.sub.10(r.sub.k), (2)
where α is the pathloss measured at a reference distance of d=5 meters and β is the pathloss exponent. For convenience, we also define the effective channel coefficient by lumping the large- and small-scale fading coefficients together into one quantity, h.sub.k,m≜√{square root over (g.sub.k)}{tilde over (h)}.sub.k,m, thereby yielding the following equivalent model:
{tilde over (y)}.sup.(m)=Σ.sub.k=1.sup.KΣ.sub.b=1.sup.2.sup.Bδ.sub.b,kh.sub.k,m{tilde over (c)}.sub.b+{tilde over (w)}.sup.(m),m=1, . . . ,M.sub.r. (3)
[0026] To define the random access code for this channel, let W.sub.k∈[2.sup.B]≜{1,2, . . . ,2.sup.B} denote the message of user k, such that for some encoding function ƒ:[2.sup.B]→C.sup.n we have ƒ(W.sub.k)={tilde over (c)}.sub.W.sub.k. The decoder, g(⋅), outputs a list of K.sub.a decoded messages, with the probability of error being defined as:

P.sub.e=(1/K.sub.a)Σ.sub.k=1.sup.K.sup.a Pr(E.sub.k), (4)

where E.sub.k≜{W.sub.k∉g({tilde over (y)}.sup.(1), {tilde over (y)}.sup.(2), . . . , {tilde over (y)}.sup.(M.sup.r.sup.))} is the error event for user k. Collecting the codewords into a matrix, the received signal at the mth antenna can be written compactly as:
{tilde over (y)}.sup.(m)={tilde over (C)}{tilde over (Δ)}h.sup.(m)+{tilde over (w)}.sup.(m),m=1, . . . ,M.sub.r, (5)
in which {tilde over (C)}=[{tilde over (c)}.sub.1, {tilde over (c)}.sub.2, . . . , {tilde over (c)}.sub.2.sub.B]∈C.sup.n×2.sup.B is the common codebook matrix, and h.sup.(m)∈C.sup.K is the multi-user channel vector at the mth antenna which incorporates the small- and large-scale fading coefficients. The matrix {tilde over (Δ)}∈{0,1}.sup.2.sup.B.sup.×K gathers the activity/codeword-selection variables, i.e., [{tilde over (Δ)}].sub.b,k=δ.sub.b,k. By defining {tilde over (x)}.sup.(m)≜{tilde over (Δ)}h.sup.(m), it follows that:
{tilde over (y)}.sup.(m)={tilde over (C)}{tilde over (x)}.sup.(m)+{tilde over (w)}.sup.(m),m=1, . . . ,M.sub.r. (6)
.sup.1 The notation ([2.sup.B] over K.sub.a) stands for the set of all choices of K.sub.a different elements from the set [2.sup.B].
[0027] Note here that each active user contributes a single non-zero coefficient in {tilde over (x)}.sup.(m), thereby resulting in a K.sub.a-sparse 2.sup.B-dimensional vector. Since K.sub.a is much smaller than the total number of codewords, 2.sup.B, {tilde over (x)}.sup.(m) has a very small sparsity ratio K.sub.a/2.sup.B.
Observe also that the formulation in (6) belongs to the multiple-measurement-vector (MMV) class in compressed sensing terminology, and can be equivalently rewritten in a more succinct matrix form as follows:
{tilde over (Y)}={tilde over (C)}{tilde over (X)}+{tilde over (W)}, (7)
in which {tilde over (Y)}=[{tilde over (y)}.sup.(1), {tilde over (y)}.sup.(2), . . . , {tilde over (y)}.sup.(M.sup.r.sup.)], {tilde over (X)}=[{tilde over (x)}.sup.(1), {tilde over (x)}.sup.(2), . . . , {tilde over (x)}.sup.(M.sup.r.sup.)], and {tilde over (W)}=[{tilde over (w)}.sup.(1), {tilde over (w)}.sup.(2), . . . , {tilde over (w)}.sup.(M.sup.r.sup.)].
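As a concrete illustration, the MMV model in (7) can be sketched numerically as follows. All dimensions below (B, n, K, K.sub.a, M.sub.r) are toy values chosen purely for illustration; in a practical deployment 2.sup.B is far too large for the codebook to be instantiated explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; a real system could never materialize 2**B columns)
B, n, K, K_a, M_r = 8, 32, 50, 4, 16
P_t, sigma_w = 1.0, 0.1

# Common Gaussian codebook C~ with columns c~_b ~ CN(0, P_t I)
C = np.sqrt(P_t / 2) * (rng.standard_normal((n, 2**B))
                        + 1j * rng.standard_normal((n, 2**B)))

# Sporadic activity: K_a of the K devices are active, each picking one codeword
active = rng.choice(K, size=K_a, replace=False)
chosen = rng.integers(0, 2**B, size=K_a)

# Effective channels h_{k,m} = sqrt(g_k) * h~_{k,m} with Rayleigh small-scale fading
g = rng.uniform(0.1, 1.0, size=K_a)  # placeholder large-scale gains
H = (np.sqrt(g)[:, None]
     * (rng.standard_normal((K_a, M_r)) + 1j * rng.standard_normal((K_a, M_r)))
     / np.sqrt(2))

# Row-sparse MMV signal X~: each active user contributes one non-zero row
X = np.zeros((2**B, M_r), dtype=complex)
for i in range(K_a):
    X[chosen[i]] += H[i]

# Received signal Y~ = C~ X~ + W~, per eq. (7)
W = sigma_w * (rng.standard_normal((n, M_r))
               + 1j * rng.standard_normal((n, M_r))) / np.sqrt(2)
Y = C @ X + W
```

The recovery task underlying the disclosure is then to estimate the (at most K.sub.a) non-zero rows of {tilde over (X)} from {tilde over (Y)} and {tilde over (C)}.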
[0028] Proposed Unsourced Random Access Scheme Based on Compressive Sensing
[0029] A. Slotted Transmission Model and Code Selection
[0030] By revisiting (1), we see that the number of codewords grows exponentially with the blocklength n. Indeed, for a fixed rate R=B/n, we have 2.sup.B=2.sup.nR, which becomes extremely large even at moderate values of n, thereby making any attempt to directly use standard sparse recovery algorithms computationally prohibitive. Practical approaches have been introduced to alleviate this computational burden, including the slotted transmission framework also adopted in this disclosure. Indeed, similar to [8], each active user partitions its B-bit message into L equal-size information bit sequences (or chunks). As opposed to [8], however, our approach does not require concatenated coding to couple the sequences across different slots (i.e., no outer binary encoder). Therefore, we simply share the bits uniformly between the L slots and do not optimize the sizes of the L sequences. In this way, there is a total number of J=B/L bits in each sequence (i.e., associated to each slot).
[0031] Let the matrix Ã∈C.sup.ñ.sup.×2.sup.J, with ñ=n/L, denote the common codebook for all the users (over all slots). That is, the columns of Ã=[ã.sub.1, ã.sub.2, . . . , ã.sub.2.sub.J] are the codewords that can possibly be transmitted during each slot. The per-slot sequence size, J, is chosen large enough that the probability of collision (i.e., two active users transmitting the same J-bit sequence over the same slot) does not dominate the probability of incorrect decoding. The simulation results, presented later, suggest that J=17 is enough to keep the contribution of collision-induced errors to the overall error probability negligible.
[0032] B. Encoding
[0033] After partitioning each packet/message into L J-bit information sequences, the latter are encoded separately using the codebook, Ã, which will serve as the sensing matrix for sparse recovery. Conceptually, we operate on a per-slot basis by associating to every possible J-bit information sequence a different column in the codebook matrix Ã. Thus, we can view this matrix as a set of potentially transmitted messages over the duration of a slot. The multiuser CS encoder can be visualized as an abstract multiplication of à by an index vector v. The positions of the non-zero coefficients in v are nothing but the decimal representations of the information bit sequences/chunks being transmitted by the active users over a given slot. Thus, the slotted transmission of the B-bit packets of all the active users gives rise to L small-size compressed sensing instances (one per slot). Now, after encoding its J-bit sequence, user k modulates the corresponding codeword and transmits it over the channel, where it is multiplied by a complex coefficient h.sub.k,m before reaching the mth antenna. Hence, the overall baseband model over each slot reduces to the MAC model previously discussed, and by recalling (7), the received signal over the lth slot is given by:
{tilde over (Y)}.sub.l=Ã{tilde over (X)}.sub.l+{tilde over (W)}.sub.l,l=1, . . . ,L. (8)
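The per-slot encoding step described above can be sketched as follows. This is a minimal sketch with hypothetical small parameters (the disclosure itself uses B=102, L=6, J=17, for which the codebook would have 2.sup.17 columns); the per-slot blocklength `n_slot` is likewise an assumed value.

```python
import numpy as np

rng = np.random.default_rng(1)

B, L = 12, 3
J = B // L                     # information bits per slot
n_slot = 20                    # per-slot blocklength (hypothetical)
A_tilde = rng.standard_normal((n_slot, 2**J))   # common per-slot codebook A~

def encode(message_bits):
    """Split a B-bit message into L J-bit chunks; each chunk's decimal value
    indexes one column of A~ (the abstract multiplication by an index vector v)."""
    codewords = []
    for l in range(L):
        chunk = message_bits[l * J:(l + 1) * J]
        idx = int("".join(map(str, chunk)), 2)   # decimal index of the chunk
        codewords.append(A_tilde[:, idx])        # transmitted codeword for slot l
    return codewords

def decode_index(idx):
    """Inverse mapping: a recovered column index back to the J-bit chunk."""
    return [int(b) for b in format(idx, f"0{J}b")]

msg = rng.integers(0, 2, size=B).tolist()
tx = encode(msg)               # L codewords, one per slot
```

The chunk-to-column mapping is one-to-one, which is what lets the inner decoder recover the transmitted bits directly from the support of the reconstructed sparse vector.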
Vectorizing (8) yields:
vec({tilde over (Y)}.sub.l.sup.T)=(Ã⊗I)vec({tilde over (X)}.sub.l.sup.T)+vec({tilde over (W)}.sub.l.sup.T), (9)
in which ⊗ denotes the Kronecker product of two matrices. Then, by defining Ā≜Ã⊗I, {tilde over (y)}.sub.l≜vec({tilde over (Y)}.sub.l.sup.T), {tilde over (x)}.sub.l≜vec({tilde over (X)}.sub.l.sup.T), and {tilde over (w)}.sub.l≜vec({tilde over (W)}.sub.l.sup.T), we recover the problem of estimating a sparse vector, {tilde over (x)}.sub.l, from its noisy linear observations:

{tilde over (y)}.sub.l=Ā{tilde over (x)}.sub.l+{tilde over (w)}.sub.l,l=1, . . . ,L. (10)
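The vectorization step in (9) rests on the standard Kronecker identity vec(AXB)=(B.sup.T⊗A)vec(X); note that with vec stacking columns, as defined in the notation paragraph, the Kronecker factor applied to vec({tilde over (X)}.sub.l.sup.T) evaluates to Ã⊗I.sub.M.sub.r. A quick numerical check with toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(5)
n_s, N_c, M_r = 6, 10, 4   # toy sizes: per-slot blocklength, codewords, antennas

A = rng.standard_normal((n_s, N_c)) + 1j * rng.standard_normal((n_s, N_c))
X = rng.standard_normal((N_c, M_r)) + 1j * rng.standard_normal((N_c, M_r))
Y = A @ X

# vec(.) stacking columns == flatten in Fortran (column-major) order
vec = lambda M: M.flatten(order="F")

lhs = vec(Y.T)                               # vec(Y^T)
rhs = np.kron(A, np.eye(M_r)) @ vec(X.T)     # (A kron I_{M_r}) vec(X^T)
assert np.allclose(lhs, rhs)
```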
[0034] Finally, by stacking the real and imaginary parts, the goal is to reconstruct the unknown sparse vector, {tilde over (x)}.sub.l∈R.sup.N, given by:

{tilde over (x)}.sub.l=[Re{{tilde over (x)}.sub.1,l}.sup.T, . . . ,Re{{tilde over (x)}.sub.2.sub.J.sub.,l}.sup.T, Im{{tilde over (x)}.sub.1,l}.sup.T, . . . ,Im{{tilde over (x)}.sub.2.sub.J.sub.,l}.sup.T].sup.T, (11)

based on the knowledge of y.sub.l∈R.sup.M and a known real-valued sensing matrix in R.sup.M×N. We emphasize here the fact that if Re{{tilde over (x)}.sub.j,l}=0 then Im{{tilde over (x)}.sub.j,l}=0, i.e., the real and imaginary parts share a common support.
[0035] For ease of exposition, we slightly rewrite (11) to end up with a convenient model in which we are interested in reconstructing the following group-sparse vector:

x.sub.l=[x.sub.1,l.sup.T, x.sub.2,l.sup.T, . . . , x.sub.2.sub.J.sub.,l.sup.T].sup.T with x.sub.j,l≜[Re{{tilde over (x)}.sub.j,l}.sup.T, Im{{tilde over (x)}.sub.j,l}.sup.T].sup.T,

which has independent sparsity among its constituent blocks {x.sub.j,l}.sub.j=1.sup.2.sup.J. The vectors x.sub.l and {tilde over (x)}.sub.l are related through x.sub.l=Π{tilde over (x)}.sub.l for some known permutation matrix, Π, thereby yielding the equivalent model:

y.sub.l=Ax.sub.l+w.sub.l with A≜ĀΠ.sup.T. (15)
[0036] C. CS Recovery and Clustering-Based Stitching
[0037] The ultimate goal at the receiver is to identify the set of B-bit messages that were transmitted by all the active users. Since the messages were partitioned into L different chunks, we obtain an instance of unsourced MAC in each slot. The inner CS-based decoder decodes, in each slot, the J-bit sequences of all the K.sub.a active users. The outer clustering-based decoder then puts together the slot-wise decoded sequences of each user, so as to recover all the original transmitted B-bit messages (cf. the accompanying drawings).
[0038] For each lth slot, the task is then to first reconstruct x.sub.l from y.sub.l=Ax.sub.l+w.sub.l given y.sub.l and A. To solve the joint activity detection and channel estimation problem, we resort to the HyGAMP CS algorithm [22], and we will also rely on the EM concept [23] to learn the unknown hyperparameters of the model. In particular, we embed the EM algorithm inside HyGAMP to learn the variances of the additive noise and the postulated prior, which are both required to execute HyGAMP itself. HyGAMP makes use of large-system Gaussian and quadratic approximations for the messages of loopy belief propagation on the factor graph. As opposed to GAMP [24], HyGAMP is able to accommodate the group sparsity structure in x.sub.l by using a dedicated latent Bernoulli random variable, ε.sub.j, for each jth block, j=1, . . . ,2.sup.J. Running HyGAMP in every slot l returns the reconstructed channels:

ĥ.sub.k,l, k=1, . . . ,K.sub.a, l=1, . . . ,L, (16)

in which each ĥ.sub.k,l is a noisy estimate of [Re{h.sub.k}.sup.T, Im{h.sub.k}.sup.T].sup.T with h.sub.k≜[h.sub.k,1, h.sub.k,2, . . . , h.sub.k,M.sub.r].sup.T.
[0039] To cluster the reconstructed channels into K.sub.a different groups, we resort to the Gaussian-mixture expectation-maximization procedure, which consists of fitting a Gaussian mixture distribution to the data points in (16) under the assumption of Gaussian residual noise. Assuming the reconstruction noise to be Gaussian is common practice in the approximate message passing framework, including HyGAMP. Strictly speaking, however, as seen from (11), the matrix Ā is not i.i.d. Gaussian, as would be required to rigorously prove this Gaussianity claim; the claim is nonetheless a widely believed conjecture based on the concept of universality from statistical physics [25], [26]. Moreover, we will devise an appropriate constrained clustering procedure that enforces the following two constraints that are very specific to our problem: i) each cluster has exactly L data points, and ii) channels reconstructed over the same slot are not assigned to the same cluster.
[0040] D. Hybrid Approximate Message Passing
[0041] In this section, we describe the HyGAMP CS algorithm [22] by which we estimate the channels and decode the data in each slot. As a matter of fact, decoding the transmitted messages in slot l comes as a byproduct of reconstructing the entire group-sparse vector x.sub.l. This is because there is a one-to-one mapping between the positions of the non-zero blocks in x.sub.l and the transmitted codewords that are drawn from the common codebook Ã. As mentioned earlier, HyGAMP finds asymptotic MMSE estimates for the entries of the group-sparse vector, x.sub.l, in each slot l. To capture the underlying group sparsity structure, HyGAMP uses the set of Bernoulli latent random variables, {ε.sub.j}.sub.j=1.sup.2.sup.J, which are i.i.d. with the common prior Pr(ε.sub.j=1)=K.sub.a/2.sup.J. The marginal posterior probabilities, Pr(ε.sub.j=1|y.sub.l), for j=1,2, . . . ,2.sup.J, are computed from the log-likelihood ratios, LLR.sub.q→j, updated in line 19 of Algorithm 1.
[0042] Note here that for the sake of simplicity, we assume the number of active users, K.sub.a, to be known to the receiver, as is the case in all existing works on unsourced random access. Yet, we emphasize the fact that it is straightforward to generalize our approach to also detect the number of active users by learning the sparsity hyperparameter using the EM procedure, as done in [27]. Motivated by our recent results in [28], we also postulate a Bernoulli-Laplacian prior to model the channel coefficients. The main rationale behind choosing this prior is the use of a heavy-tailed distribution to capture the effect of the large-scale fading coefficients, √{square root over (g.sub.k)}, which vary drastically depending on the users' locations relative to the BS. Indeed, unlike most AMP-based works on massive activity detection (e.g., [29]), which assume perfect knowledge of the large-scale fading coefficients, in our disclosure the latter are absorbed into the overall channel coefficients and estimated with them using HyGAMP. Therefore, we opt for a heavy-tailed prior to capture the rare event of an active user lying close to the base station, whose channel will be very large compared to those of most other faraway active users. In this respect, the Bernoulli-Laplacian prior was found to offer a good trade-off between denoising difficulty and model mismatch. The Bernoulli-Laplacian prior is also computationally more attractive than other heavy-tailed priors since it requires updating only one parameter, σ.sub.x=√{square root over (σ.sup.2/2)}, using the nested EM algorithm (inside HyGAMP), as will be explained later on; here σ.sup.2 is the empirical variance of the data extracted from active users' channels only, i.e., of h.sub.k,m=√{square root over (g.sub.k)}{tilde over (h)}.sub.k,m.
[0043] In the sequel, we provide more details about HyGAMP alone, which runs according to the algorithmic steps provided in Algorithm 1. In our description, we assume the hyperparameters σ.sub.x and σ.sub.w.sup.2 to be perfectly known to the receiver. Later on, we will explain how to also learn these two parameters from the data using the EM algorithm. For ease of exposition, the vector x.sub.l to be reconstructed, in slot l, will be generically denoted as x, since HyGAMP is executed in each slot separately (and likewise for y.sub.l and all other quantities that depend on the slot index l). The underlying block-sparse vector, x, consists of 2.sup.J blocks, each of which consists of 2M.sub.r components, i.e.,

x≜[x.sub.1.sup.T,x.sub.2.sup.T, . . . ,x.sub.2.sub.J.sup.T].sup.T with x.sub.j≜[x.sub.1,j,x.sub.2,j, . . . ,x.sub.2M.sub.r.sub.,j].sup.T.

Similarly, the known sensing matrix, A, in (15) is partitioned into the corresponding 2.sup.J blocks as follows:

A=[A.sup.(1),A.sup.(2), . . . ,A.sup.(2.sup.J.sup.)] with A.sup.(j)∈R.sup.M×2M.sup.r.

Recall also that M=2ñM.sub.r and N=2.sup.J(2M.sub.r) denote the number of rows and columns in A, respectively.
[0044] HyGAMP passes messages along the edges of the factor graph pertaining to the model in (15), which is depicted in the accompanying drawings. The postulated Bernoulli-Laplacian prior reads:

p.sub.x(x.sub.q,j|ε.sub.j;σ.sub.x)=(1−ε.sub.j)δ(x.sub.q,j)+ε.sub.j L(x.sub.q,j;σ.sub.x), (23)

where L(x; σ.sub.x) is the Laplacian density given in (19) and δ(x) is the Dirac delta distribution. The updates for {circumflex over (z)}.sub.i.sup.0 and μ.sub.i.sup.z (in lines 13 and 14) hold irrespective of the prior since they depend only on the output distribution, namely,

p.sub.y|z(y|z;σ.sub.w.sup.2)=Π.sub.i=1.sup.Mp.sub.y.sub.i.sub.|z.sub.i(y.sub.i|z.sub.i;σ.sub.w.sup.2), (24)
in which z=Ax. Due to the AWGN channel assumption, our output distribution is Gaussian and we have the following updates readily available from [24]:
The updates in lines 8 and 9, however, depend on the particular choice of the prior and as such are expressed as functions of the other outputs of HyGAMP. In this disclosure, we only provide the final expressions of the required updates under the Bernoulli-Laplacian prior given in (23). We omit the derivation details for the sake of brevity since they are based on algebraic manipulations equivalent to those recently carried out in [28] in the absence of group sparsity. For notational convenience, we also introduce the following intermediate quantities to express the required updates, for q=1, . . . ,2M.sub.r and j=1, . . . ,2.sup.J (some variables are defined in Algorithm 1):
[0045] It is worth mentioning here that those quantities depend on the unknown parameter σ.sub.x of the Laplacian distribution. Therefore, on top of being updated by HyGAMP, these σ.sub.x-dependent quantities are also updated locally by the nested EM algorithm that learns the unknown parameter σ.sub.x itself. We also define the following two intermediate σ.sub.x-independent quantities:
in which Q(⋅) is the standard Q-function, i.e., the tail of the normal distribution:
Using the above notations, we establish the closed-form expressions.sup.3 for {circumflex over (x)}.sub.q,j(t) required in line 8 of Algorithm 1, as given in (36). For ease of notation, we drop the iteration index, t, for all the statistical quantities updated by HyGAMP. The reader is referred to Algorithm 1 to keep track of the correct iteration count. The posterior variance, μ.sub.q,j.sup.x, required in line 9 is given by:

μ.sub.q,j.sup.x(t)=σ.sub.x.sub.q,j.sup.2−{circumflex over (x)}.sub.q,j(t).sup.2,

wherein σ.sub.x.sub.q,j.sup.2 is the posterior second-order moment of x.sub.q,j, whose closed-form expression is given by (37). .sup.3 For full derivation details, see the most recent arXiv version.
The closed-form expression for the LLR update in line 19 of Algorithm 1 was also established. We also resort to the maximum-likelihood (ML) concept in order to estimate the unknown hyperparameters σ.sub.x and σ.sub.w.sup.2. More specifically, the ML estimate of the noise variance is available in closed form as a function of the residuals, y.sub.i−{circumflex over (z)}.sub.i, where {circumflex over (z)}.sub.i≜(A{circumflex over (x)}).sub.i. Unfortunately, the ML estimate (MLE), {circumflex over (σ)}.sub.x, of σ.sub.x cannot be found in closed form, and we use the EM algorithm instead to find the required MLE iteratively. Indeed, starting from some initial guess, {circumflex over (σ)}.sub.x;0, we establish the (d+1)th MLE update, {circumflex over (σ)}.sub.x;d+1, in which the quantities ψ.sub.q,j;d and κ.sub.q,j;d are given by (41)-(42). Note here that the quantities γ.sub.q,j;d.sup.+, γ.sub.q,j;d.sup.−, α.sub.q,j;d.sup.+, and α.sub.q,j;d.sup.− involved in (41)-(42) are also expressed as in (28)-(31), except that σ.sub.x is now replaced by {circumflex over (σ)}.sub.x;d.
[0046] E. Constrained Clustering-Based Stitching Procedure
[0047] In this section, we focus on the problem of clustering the reconstructed channels from all the slots to obtain one cluster per user. By doing so, it becomes easy to stitch the slot-wise decoded sequences of all users so as to recover their transmitted messages/packets. To that end, we first estimate the large-scale fading coefficients, ĝ.sub.k,l, from the outputs of HyGAMP as in (43), where ĥ.sub.k,l is the kth reconstructed channel in slot l. The estimates of the different large-scale fading coefficients are required to re-scale the reconstructed channels before clustering, so as to avoid, for instance, having the channels of the cell-edge users clustered together due to their strong pathloss attenuation. To that end, we divide each ĥ.sub.k,l in (16) by the associated √{square root over (ĝ.sub.k,l)} in (43), but keep using the same symbols, ĥ.sub.k,l, for notational convenience.
[0048] We can then visualize (16), after normalization, as one whole set of K.sub.aL data points in R.sup.2M.sup.r:

H={ĥ.sub.k,l|k=1, . . . ,K.sub.a,l=1, . . . ,L}, (44)

which gathers all the reconstructed small-scale fading coefficients pertaining to all K.sub.a active users and all L slots. Since the small-scale fading coefficients of each user are assumed to be Gaussian distributed, we propose to fit a Gaussian mixture distribution to the entire data set, H, and use the EM algorithm to estimate the parameters of the involved mixture densities along with the mixing coefficients.
[0049] The rationale behind the use of clustering is our prior knowledge about the nature of the data set H. Indeed, we know that there are K.sub.a users whose channels remain constant over all the slots. Therefore, each user contributes exactly L data points in H, which are noisy estimates of its true channel vector. Our goal is hence to cluster the whole data set into K.sub.a different clusters, each of which has exactly L vectors. To do so, we denote the total number of data points in H by N.sub.tot≜K.sub.aL and assume that each data point is an independent realization of a Gaussian-mixture distribution with K.sub.a components:

p(ĥ;π,μ,Σ)=Σ.sub.k=1.sup.K.sup.aπ.sub.kN(ĥ;μ.sub.k,Σ.sub.k). (45)

Here, π≜[π.sub.1, . . . , π.sub.K.sub.a].sup.T, μ≜[μ.sub.1, . . . , μ.sub.K.sub.a], and Σ≜[Σ.sub.1, . . . , Σ.sub.K.sub.a] gather the mixing coefficients, the mean vectors, and the covariance matrices, respectively.
[0050] The assumption we make here is justified by the fact that the residual reconstruction noise of AMP-like algorithms (including HyGAMP) is Gaussian-distributed. Notice that in (45) we considered a mixture of K.sub.a components, which amounts to assigning a Gaussian distribution to each active user. We now turn our attention to finding the likelihood function of all the unknown parameters.sup.4, {π.sub.k, μ.sub.k, Σ.sub.k}.sub.k=1.sup.K.sup.a. Owing to the i.i.d. assumption on the data, the associated likelihood function factorizes as follows:

p(ĥ.sub.1, . . . ,ĥ.sub.N.sub.tot;π,μ,Σ)=Π.sub.n=1.sup.N.sup.totp(ĥ.sub.n;π,μ,Σ). (48)
Taking the logarithm of (48) yields the following log-likelihood function (LLF):
Our task is then to maximize the LLF with respect to the unknown parameters, i.e., to find the ML estimates of π, μ, and Σ:
[0051] Unfortunately, it is not possible to obtain a closed-form solution to the above optimization problem. Yet, the EM algorithm can again be used to iteratively update the ML estimates of the underlying parameters. In the sequel, we will provide the resulting updates, and we refer the reader to Chap. 9 of [30] for more details. .sup.4 Note here that we refer to each μ.sub.k and Σ.sub.k as parameters although strictly speaking they are vectors and matrices of unknown parameters.
[0052] We initialize the means, {μ.sub.k}.sub.k=1.sup.K.sup.a, and then iterate between the expectation (E) and maximization (M) steps described next.
[0053] In the E-STEP, we compute the probability of having a particular data point belong to each of the K.sub.a users. In the M-STEP, we update the means, covariance matrices, and the mixing coefficients for each of the clusters. We evaluate the LLF at each iteration to check the convergence of the EM-based algorithm, hence the Eval-STEP. Recall, however, that we are actually dealing with a constrained clustering problem since it is mandatory to enforce the following two intuitive constraints: [0054] Constraint 1: Channels from the same slot cannot be assigned to the same user, [0055] Constraint 2: Users/clusters should have exactly L channels/data points.
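The E-STEP/M-STEP loop above can be sketched as a plain (unconstrained) Gaussian-mixture EM over a synthetic data set. Spherical covariances and a deterministic initialization (one mean seeded per user) are simplifications made here purely for brevity; the two constraints themselves are enforced afterwards by the Hungarian-based assignment step.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data set H (cf. (44)): K_a users, L noisy copies of each channel
K_a, L, dim = 3, 6, 8
true_means = 3.0 * rng.standard_normal((K_a, dim))
H = np.vstack([m + 0.1 * rng.standard_normal((L, dim)) for m in true_means])
N_tot = K_a * L

pi = np.full(K_a, 1.0 / K_a)   # mixing coefficients
mu = H[::L][:K_a].copy()       # seed one mean per user (simplification)
var = np.ones(K_a)             # spherical covariances

for _ in range(50):
    # E-STEP: posterior membership probabilities P_{nk}
    d2 = ((H[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    logp = np.log(pi) - 0.5 * d2 / var - 0.5 * dim * np.log(var)
    logp -= logp.max(axis=1, keepdims=True)    # numerical stability
    P = np.exp(logp)
    P /= P.sum(axis=1, keepdims=True)
    # M-STEP: update mixing coefficients, means, and (spherical) variances
    Nk = P.sum(axis=0)
    pi = Nk / N_tot
    mu = (P.T @ H) / Nk[:, None]
    var = np.array([(P[:, k] * ((H - mu[k]) ** 2).sum(-1)).sum() / (dim * Nk[k])
                    for k in range(K_a)])
```

After convergence, each fitted mean lies near one true user channel, and P is exactly the posterior membership matrix consumed by the constrained stitching step.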
[0056] At convergence, the EM algorithm returns a matrix, P, of posterior membership probabilities, i.e., whose (n,k)th entry is P.sub.nk=Pr(ĥ.sub.n∈cluster k). Since the EM algorithm solves an unconstrained clustering problem, relying directly on P could result in two channels reconstructed from the same slot being clustered together, thereby violating “constraint 1” and/or “constraint 2”. In what follows, we will still make use of P in order to find the best possible assignment of the N.sub.tot reconstructed channels to the K.sub.a users (i.e., the one that minimizes the probability of error) while satisfying the two constraints mentioned above.
[0057] To enforce “constraint 2”, we begin by partitioning P into L equal-size and consecutive blocks, i.e., the K.sub.a×K.sub.a matrices {P.sup.(l)}.sub.l=1.sup.L, wherein P.sup.(l) gathers the rows of P pertaining to the channels reconstructed in slot l. Then, since each kth row in P.sup.(l) sums to one, it can be regarded as the distribution of a categorical random variable that can take on one of K.sub.a possible mutually exclusive states. For convenience, we represent these categorical random variables by the 1-of-K.sub.a binary coding scheme. That is, each is represented by a K.sub.a-dimensional vector, v.sub.k,l, which takes values in {e.sub.1, e.sub.2, . . . , e.sub.K.sub.a}, wherein e.sub.k denotes the kth column of the K.sub.a×K.sub.a identity matrix.
[0058] We enforce “constraint 1” by using a posterior joint distribution on {v.sub.k,l}.sub.k=1.sup.K.sup.a that assigns zero probability to any configuration in which two channels reconstructed in the same slot are mapped to the same cluster:

p(V.sub.l)∝(V.sub.l∈P.sub.K.sub.a)Π.sub.k=1.sup.K.sup.aΠ.sub.k′=1.sup.K.sup.a{p.sub.ν(v.sub.k,l=e.sub.k′)}, (59)-(60)

where (⋅) is the indicator function, P.sub.K.sub.a is the set of K.sub.a×K.sub.a permutation matrices, and V.sub.l≜[v.sub.1,l, . . . , v.sub.K.sub.a.sub.,l].sup.T is the slot-wise assignment matrix. For convenience, we also define the log-probabilities:

α.sub.k,k′.sup.(l)≜log p.sub.ν(v.sub.k,l=e.sub.k′)=log P.sub.kk′.sup.(l). (61)
Since our optimality criterion is the largest-probability assignment, we maximize the distribution in (59), which, when combined with (60), yields (62). Now, finding the optimal assignment inside slot l, subject to constraint 1, amounts to finding the optimal assignment matrix, {circumflex over (V)}.sub.l, that maximizes the constrained posterior joint distribution in (62). Owing to (62), it can be shown that finding {circumflex over (V)}.sub.l is equivalent to solving the following constrained optimization problem:

{circumflex over (V)}.sub.l=argmax.sub.V.sub.l Σ.sub.k=1.sup.K.sup.aΣ.sub.k′=1.sup.K.sup.aα.sub.k,k′.sup.(l)V.sub.l,kk′ subject to Σ.sub.kV.sub.l,kk′=1, Σ.sub.k′V.sub.l,kk′=1, V.sub.l,kk′∈{0,1}. (65)

Note that the constraints in (65) enforce the solution to be a permutation matrix. This also follows from the factor, (V.sub.l∈P.sub.K.sub.a), in the posterior distribution established in (62), which assigns zero probability to non-permutation matrices.
[0059] This optimization problem can be solved in polynomial time by means of the Hungarian algorithm, which has an overall complexity on the order of O(K.sub.a.sup.3). Stitching is achieved by means of the optimal assignment matrices, {{circumflex over (V)}.sub.l}.sub.l=1.sup.L, which are used to cluster the reconstructed sequences, thereby recovering the original transmitted messages.
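A minimal sketch of this slot-wise assignment step, using SciPy's Hungarian-algorithm implementation (`scipy.optimize.linear_sum_assignment`); the membership block P.sup.(l) below is randomly generated purely for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
K_a = 4

# Hypothetical slot-wise block of posterior membership probabilities:
# row k = kth channel reconstructed in this slot, column k' = cluster k'.
P_l = rng.dirichlet(np.ones(K_a), size=K_a)

# Maximizing the sum of log-probabilities over permutation matrices is a
# linear assignment problem; negate to phrase it as cost minimization.
cost = -np.log(P_l + 1e-12)
rows, cols = linear_sum_assignment(cost)

# Optimal assignment matrix V^_l: a permutation matrix, so the slot
# contributes exactly one channel to each cluster (constraints 1 and 2).
V_l = np.zeros((K_a, K_a), dtype=int)
V_l[rows, cols] = 1
```

The Hungarian solver runs in polynomial time, matching the O(K.sub.a.sup.3) complexity stated above.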
[0060] Simulation Results
[0061] A. Simulation Parameters
[0062] In this section, we assess the performance of the proposed URA scheme using exhaustive Monte-Carlo computer simulations. Our performance metric is the probability of error given in (4). We fix the number of information bits per user/packet to B=102, which are communicated over L=6 slots. This corresponds to J=17 information bits per slot. We also fix the bandwidth to W=10 MHz and the noise power to P.sub.w=10.sup.−19.9×W [Watts]. The path-loss parameters in (2) are set to α=15.3 dB and β=3.76. The users are assumed to be uniformly scattered on an annulus centered at the base station and with inner and outer radii R.sub.in=5 meters and R.sub.out=1000 meters, respectively. The pdf of each kth user's random distance, R.sub.k, from the base station is hence given by:

p.sub.R.sub.k(r)=2r/(R.sub.out.sup.2−R.sub.in.sup.2), R.sub.in≤r≤R.sub.out.
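The simulation geometry above can be reproduced by inverse-CDF sampling; a sketch (for users uniformly scattered on the annulus, the CDF of the distance is F(r)=(r.sup.2−R.sub.in.sup.2)/(R.sub.out.sup.2−R.sub.in.sup.2)):

```python
import numpy as np

rng = np.random.default_rng(4)
R_in, R_out = 5.0, 1000.0       # annulus radii [m]
alpha, beta = 15.3, 3.76        # pathloss parameters of eq. (2)

def sample_distances(num_users):
    """Inverse-CDF sampling of user distances for uniform scattering on
    the annulus: F(r) = (r^2 - R_in^2) / (R_out^2 - R_in^2)."""
    u = rng.uniform(size=num_users)
    return np.sqrt(R_in**2 + u * (R_out**2 - R_in**2))

r = sample_distances(100_000)
g_dB = -alpha - 10.0 * beta * np.log10(r)   # large-scale fading, eq. (2)
```

The wide spread of g_dB across users is precisely the motivation for the heavy-tailed (Bernoulli-Laplacian) channel prior discussed earlier.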
In the following, our baseline is the covariance-based scheme introduced recently in [15], which is simply referred to as CB-CS in this disclosure. For the CB-CS algorithm, we fix the number of information bits per user/packet to B=104 bits, which are communicated over L=17 slots. The parity bit allocation for the outer tree code was set to p=[0,8,8, . . . ,14]. We also use J=14 coded bits per slot, which leads to the total rate of the outer code R.sub.out=0.437.
[0063] B. Results
[0066] It is worth mentioning, however, that by eliminating the outer code one expects a net gain of 1/R.sub.outer in spectral efficiency under perfect CSI. In this respect, we emphasize that at P.sub.t=20 dBm and small spectral efficiencies, HyGAMP was found to provide quasi-perfect CSI. Therefore, the error floor observed at the low-end spectral efficiencies is attributable to the clustering/stitching step rather than to channel estimation errors.
[0067] Now, we reverse the experiment and fix the total spectral efficiency to μ.sub.tot=7.5 bits/channel-use while varying the number of active users from K.sub.a=50 to K.sub.a=300. The performance of both URA schemes under this setting is depicted in the accompanying figure.
[0068] In a further experiment, the user channels are allowed to vary from slot to slot with a temporal correlation parameter, δ, derived from the classical Jakes model,
wherein J.sub.0(⋅) is the zero-order Bessel function of the first kind, ƒ.sub.c=2 GHz is the carrier frequency, ν is the relative velocity between the receiver and the transmitter, and c=3×10.sup.8 m/s is the speed of light. Three mobility scenarios are considered:
[0069] 1. Typical pedestrian scenario (v=5 km/h) for which δ=0.00002,
[0070] 2. Typical urban scenario (v=60 km/h) for which δ=0.00312,
[0071] 3. Typical vehicular scenario (v=120 km/h) for which δ=0.01244.
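The ingredients of this mobility model can be sketched as follows. The maximum Doppler shift is ƒ.sub.D=ƒ.sub.cν/c, and a Jakes-type temporal correlation is J.sub.0(2πƒ.sub.Dτ); since this excerpt does not reproduce the exact expression for δ, the slot spacing τ below is a hypothetical placeholder, and the snippet only illustrates that correlation decays with velocity:

```python
import numpy as np
from scipy.special import j0   # zero-order Bessel function of the first kind

F_C = 2e9          # carrier frequency [Hz]
C = 3e8            # speed of light [m/s]
TAU = 1e-4         # hypothetical slot spacing [s] -- not given in this excerpt

def correlation(v_kmh, tau=TAU):
    """Jakes-type temporal correlation J_0(2*pi*f_D*tau) at relative
    speed v, with maximum Doppler shift f_D = f_c * v / c."""
    v = v_kmh / 3.6                       # km/h -> m/s
    f_d = F_C * v / C                     # maximum Doppler shift [Hz]
    return j0(2.0 * np.pi * f_d * tau)

for v in (5, 60, 120):                    # pedestrian, urban, vehicular
    print(v, correlation(v))
```

As expected, the pedestrian scenario yields channels that are nearly static across slots, while the vehicular scenario decorrelates fastest, consistent with the ordering of the δ values listed above.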
The simulation results for these three mobility scenarios are shown in the accompanying figures.
CONCLUSION
[0072] There is disclosed a new algorithmic solution to the unsourced random access problem that is also based on slotted transmissions. As opposed to all existing works, however, the proposed scheme relies purely on the rich spatial dimensionality offered by large-scale antenna arrays, instead of coding-based coupling, for sequence stitching purposes. The HyGAMP CS recovery algorithm has been used to reconstruct the users' channels and decode the sequences on a per-slot basis. Afterwards, the EM framework together with the Hungarian algorithm has been used to solve the underlying constrained clustering/stitching problem. The performance of the proposed approach has been compared to the only existing URA algorithm in the open literature. The proposed scheme provides performance enhancements in the high spectral efficiency regime. There are many possible avenues for future work. The two-step procedure of channel estimation and data decoding is overall sub-optimal; it is therefore desirable to devise a scheme capable of jointly estimating the random permutation and the support of the unknown block-sparse vector in each slot. In addition, it would be interesting to improve the proposed scheme by exploiting the fact that the same set of channels is estimated across different slots. We also believe that making further use of the large-scale fading coefficients can be a fruitful direction for future research.
[0073] As described hereinbefore, the present invention relates to a new algorithmic solution to the massive unsourced random access (URA) problem that leverages the rich spatial dimensionality offered by large-scale antenna arrays. This disclosure makes the observation that the spatial signature is key to URA in massive connectivity setups. The proposed scheme relies on a slotted transmission framework but eliminates the concatenated coding introduced in the context of the coupled compressive sensing (CCS) paradigm. Indeed, all existing works on CCS-based URA rely on an inner/outer tree-based encoder/decoder to stitch the slot-wise recovered sequences. This disclosure takes a different path by harnessing the nature-provided correlations between the slot-wise reconstructed channels of each user in order to put together its decoded sequences. The required slot-wise channel estimates and decoded sequences are first obtained through the hybrid generalized approximate message passing (HyGAMP) algorithm, which systematically accommodates the multiantenna-induced group sparsity. Then, a channel correlation-aware clustering framework based on the expectation-maximization (EM) concept is used together with the Hungarian algorithm to find the slot-wise optimal assignment matrices by enforcing two clustering constraints that are very specific to the problem at hand. Stitching is then accomplished by associating the decoded sequences to their respective users according to the ensuing assignment matrices. Exhaustive computer simulations reveal that the proposed scheme brings performance improvements, at high spectral efficiencies, as compared to a state-of-the-art technique that investigates the use of large-scale antenna arrays in the context of massive URA.
[0074] The scope of the claims should not be limited by the preferred embodiments set forth in the examples but should be given the broadest interpretation consistent with the specification as a whole.
REFERENCES
[0075] [1] E. Dutkiewicz, X. Costa-Perez, I. Z. Kovacs, and M. Mueck, "Massive machine-type communications," IEEE Network, vol. 31, no. 6, pp. 6-7, 2017.
[0076] [2] L. Liu, E. G. Larsson, W. Yu, P. Popovski, C. Stefanovic, and E. De Carvalho, "Sparse signal processing for grant-free massive connectivity: A future paradigm for random access protocols in the Internet of Things," IEEE Signal Processing Magazine, vol. 35, no. 5, pp. 88-99, 2018.
[0077] [3] D. L. Donoho, A. Maleki, and A. Montanari, "Message-passing algorithms for compressed sensing," Proceedings of the National Academy of Sciences, vol. 106, no. 45, pp. 18914-18919, 2009.
[0078] [4] M. Bayati and A. Montanari, "The dynamics of message passing on dense graphs, with applications to compressed sensing," IEEE Transactions on Information Theory, vol. 57, no. 2, pp. 764-785, 2011.
[0079] [5] L. G. Roberts, "ALOHA packet system with and without slots and capture," ACM SIGCOMM Computer Communication Review, vol. 5, no. 2, pp. 28-42, 1975.
[0080] [6] E. Paolini, C. Stefanovic, G. Liva, and P. Popovski, "Coded random access: Applying codes on graphs to design random access protocols," IEEE Communications Magazine, vol. 53, no. 6, pp. 144-150, 2015.
[0081] [7] Y. Polyanskiy, "A perspective on massive random-access," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2017, pp. 2523-2527.
[0082] [8] V. K. Amalladinne, A. Vem, D. K. Soma, K. R. Narayanan, and J.-F. Chamberland, "A coupled compressive sensing scheme for uncoordinated multiple access," arXiv preprint arXiv:1806.00138v1, 2018.
[0083] [9] A. Fengler, P. Jung, and G. Caire, "SPARCs for unsourced random access," arXiv preprint arXiv:1901.06234, 2019.
[0084] [10] A. Joseph and A. R. Barron, "Least squares superposition codes of moderate dictionary size are reliable at rates up to capacity," IEEE Transactions on Information Theory, vol. 58, no. 5, pp. 2541-2557, 2012.
[0085] [11] R. Calderbank and A. Thompson, "CHIRRUP: A practical algorithm for unsourced multiple access," arXiv preprint arXiv:1811.00879, 2018.
[0086] [12] A. K. Pradhan, V. K. Amalladinne, K. R. Narayanan, and J.-F. Chamberland, "Polar coding and random spreading for unsourced multiple access," arXiv preprint arXiv:1911.01009, 2019.
[0087] [13] V. K. Amalladinne, J.-F. Chamberland, and K. R. Narayanan, "An enhanced decoding algorithm for coded compressed sensing," arXiv preprint arXiv:1910.09704, 2019.
[0088] [14] A. Pradhan, V. Amalladinne, A. Vem, K. R. Narayanan, and J.-F. Chamberland, "A joint graph based coding scheme for the unsourced random access Gaussian channel," arXiv preprint arXiv:1906.05410, 2019.
[0089] [15] A. Fengler, G. Caire, P. Jung, and S. Haghighatshoar, "Massive MIMO unsourced random access," arXiv preprint arXiv:1901.00828, 2019.
[0090] [16] S. Haghighatshoar, P. Jung, and G. Caire, "A new scaling law for activity detection in massive MIMO systems," arXiv preprint arXiv:1803.02288, 2018.
[0091] [17] A. Fengler, S. Haghighatshoar, P. Jung, and G. Caire, "Non-Bayesian activity detection, large-scale fading coefficient estimation, and unsourced random access with a massive MIMO receiver," arXiv preprint arXiv:1910.11266, 2019.
[0092] [18] J. Ziniel and P. Schniter, "Efficient high-dimensional inference in the multiple measurement vector problem," IEEE Transactions on Signal Processing, vol. 61, no. 2, pp. 340-354, 2012.
[0093] [19] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 301-321, 2009.
[0094] [20] J. Friedman, T. Hastie, and R. Tibshirani, "A note on the group lasso and a sparse group lasso," arXiv preprint arXiv:1001.0736, 2010.
[0095] [21] M. J. Wainwright, High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge University Press, 2019, vol. 48.
[0096] [22] S. Rangan, A. K. Fletcher, V. K. Goyal, and P. Schniter, "Hybrid generalized approximate message passing with applications to structured sparsity," in 2012 IEEE International Symposium on Information Theory Proceedings. IEEE, 2012, pp. 1236-1240.
[0097] [23] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society: Series B (Methodological), vol. 39, no. 1, pp. 1-22, 1977.
[0098] [24] S. Rangan, "Generalized approximate message passing for estimation with random linear mixing," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), St. Petersburg, Russia, July 2011, pp. 2168-2172.
[0099] [25] A. M. Tulino, G. Caire, S. Verdu, and S. Shamai, "Support recovery with sparsely sampled free random matrices," IEEE Transactions on Information Theory, vol. 59, no. 7, pp. 4243-4271, 2013.
[0100] [26] A. Abbara, A. Baker, F. Krzakala, and L. Zdeborová, "On the universality of noiseless linear estimation with respect to the measurement matrix," Journal of Physics A: Mathematical and Theoretical, vol. 53, no. 16, p. 164001, 2020.
[0101] [27] J. P. Vila and P. Schniter, "Expectation-maximization Gaussian-mixture approximate message passing," IEEE Transactions on Signal Processing, vol. 61, no. 19, pp. 4658-4672, October 2013.
[0102] [28] F. Bellili, F. Sohrabi, and W. Yu, "Generalized approximate message passing for massive MIMO mmWave channel estimation with Laplacian prior," IEEE Transactions on Communications, vol. 67, no. 5, pp. 3205-3219, 2019.
[0103] [29] L. Liu and W. Yu, "Massive connectivity with massive MIMO, part I: Device activity detection and channel estimation," IEEE Transactions on Signal Processing, vol. 66, no. 11, pp. 2933-2946, 2018.
[0104] [30] C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.
[0105] [31] T. Lesieur, C. De Bacco, J. Banks, F. Krzakala, C. Moore, and L. Zdeborová, "Phase transitions and optimal algorithms in high-dimensional Gaussian mixture clustering," in 2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2016, pp. 601-608.