METHOD, SYSTEM, AND COMPUTER PROGRAM FOR OPTIMIZING A CAPACITY OF A COMMUNICATION CHANNEL USING A DIRICHLET PROCESS

20230216708 · 2023-07-06

Abstract

The invention relates to a method for optimizing a capacity of a communication channel in a communication system comprising at least a transmitter, a receiver, said communication channel between the transmitter and the receiver, and a channel conditional probability distribution estimator. The transmitter transmits messages conveyed by a signal associated with a transmission probability according to an input signal probability distribution estimated by said estimator. The transmitter takes at least the messages as inputs and outputs the signal to be transmitted on the channel. The channel takes the transmitted signal as an input and outputs a received signal which is processed at the receiver in order to decode the transmitted message. The probability distribution for each possible transmitted signal is estimated from at least output signals received at the receiver, and by using a collapsed Gibbs sampling relying on a Dirichlet process.

Claims

1. A method for optimizing a capacity of a communication channel in a communication system comprising at least a transmitter, a receiver, said communication channel between the transmitter and the receiver, and a channel conditional probability distribution estimator; the transmitter transmitting messages conveyed by a signal associated with a transmission probability according to an input signal probability distribution estimated by said estimator, the transmitter taking at least the messages as inputs and outputting the signal to be transmitted on the channel, said channel taking the transmitted signal as an input and outputting a received signal which is processed at the receiver in order to decode the transmitted message, said probability distribution being denoted p.sub.Y|X(y|x), relating to an output Y, corresponding to the received signal, given an input X, corresponding to the transmitted signal, and thus representing a channel conditional probability distribution of a probability of outputting a received signal Y when the transmitted signal X is given, wherein the probability distribution is estimated as an approximation of the channel conditional probability distribution p.sub.Y|X(y|x) by using a functional basis of probability distributions, the channel conditional probability distribution being thus approximated, for each possible signal sent by the transmitter, by a mixture model with mixing probability distributions from said functional basis, the probability distribution for each possible transmitted signal being estimated from at least output signals received at the receiver, and using a collapsed Gibbs sampling relying on a Dirichlet process.

2. The method according to claim 1, wherein said functional basis is a family of exponential functions.

3. The method according to claim 2, wherein the probability distribution estimation is based on an approximation of the channel conditional probability distribution relying on a decomposition into a basis of probability distribution functions g(y|x; θ), where θ is a set of parameters θ.sub.j, said distribution functions being from the exponential family and the parameters being a mean and a variance, such that the approximation of p.sub.Y|X(y|x) in said basis of probability distribution functions g(y|x; θ) is given by p.sub.Y|X(y|x)≅Σ.sub.j=1.sup.Nw.sub.jg(y|x; θ.sub.j), where N and the sets {θ.sub.j}, {w.sub.j} are parameters to be estimated.

4. The method according to claim 3, wherein, said probability distribution functions g(y|x; θ) being from the exponential distribution family, the prior on θ is conjugate to a corresponding exponential distribution with a parameter λ, with λ={λ.sub.1, λ.sub.2} denoting a hyperparameter for the conjugate prior on the parameter θ, and wherein: N clusters are built for the parameter sets {θ.sub.j}, {w.sub.j}, each cluster being used to group symbols together, the clusters being implemented by associating an integer index from 1 to N with each group of symbols; a cluster index assigned to the i-th sample is denoted c.sub.i, so that c.sub.−i is a cluster assignment index for all clusters but the i-th; a posterior distribution p(c.sub.i=k|x.sub.1:n, y.sub.1:n, c.sub.−i, λ) is computed, where x.sub.1:n represents the symbols of the transmitted signal and y.sub.1:n represents the input symbols of the received signal; and a sampling from the posterior distribution is performed in order to update the parameters {θ.sub.j}, {w.sub.j}, the parameters N, {θ.sub.j}, {w.sub.j} being thus estimated to provide the approximation of the channel conditional probability distribution p.sub.Y|X(y|x)≅Σ.sub.j=1.sup.Nw.sub.jg(y|x; θ.sub.j).

5. The method according to claim 4, wherein the updated parameters {θ.sub.j}, {w.sub.j} are sent to the transmitter and/or to the receiver so as to compute said conditional probability distribution estimation.

6. The method according to claim 4, wherein the posterior distribution:
p(c.sub.i=k|x.sub.1:n,y.sub.1:n,c.sub.−i,λ) is given by:
p(c.sub.i=k|x.sub.1:n,y.sub.1:n,c.sub.−i,λ)=p(c.sub.i=k|c.sub.−i)p(y.sub.i|x.sub.1:i−1,x.sub.i+1:n,y.sub.1:i−1,y.sub.i+1:n,c.sub.−i,c.sub.i=k,λ), where: p(c.sub.i=k|c.sub.−i) is computed according to a Polya urn scheme,
and p(y.sub.i|x.sub.1:i−1,x.sub.i+1:n,y.sub.1:i−1,y.sub.i+1:n,c.sub.−i,c.sub.i=k,λ)=∫p(y.sub.i|x.sub.i,θ)p(θ|x.sub.1:i−1,x.sub.i+1:n,y.sub.1:i−1,y.sub.i+1:n,c.sub.−i,c.sub.i=k,λ)dθ

7. The method of claim 6, wherein it is furthermore checked whether a new cluster is to be created so as to determine a best value of parameter N, providing an optimum approximation of the channel conditional probability distribution.

8. The method of claim 7, wherein it is randomly chosen to create a new cluster from a probability verifying:
p(y.sub.i|x.sub.i;P.sub.0)=∫p(y.sub.i|x.sub.i;θ)dP.sub.0(θ), where P.sub.0 is the conjugate prior of p(y|x;θ)

9. The method according to claim 1, applied for only one predefined value of the input X, and wherein the input X is known and the associated output Y, in a training mode, is considered only when the input X equals said predefined value.

10. The method according to claim 1, wherein, in a tracking mode, an input signal probability distribution p.sub.X(x) is taken into account for the channel conditional probability distribution estimation, a value of a transmitted symbol x being inferred with a Bayesian approach so that each observation y contributes to an update of p.sub.Y|X(y|x) with a weight corresponding to the probability of y being associated to x, from a current estimation on the condition probability distribution.

11. The method of claim 10, wherein a model for non-parametric joint density is defined as:
p(y,x)=Σ.sub.j=1.sup.Nw.sub.jg(y|x,θ.sub.j)p(x|ψ.sub.j), where the parameters {θ.sub.j}, {w.sub.j}, and {ψ.sub.j} denote conditional density parameters, j being a cluster index, the parameters (θ, ψ) being jointly obtained from a base measure of a Dirichlet Process “DP” such that (θ, ψ)˜DP(αP.sub.0.sup.θ×P.sub.0.sup.ψ), where α is a scaling parameter.

12. The method according to claim 1, wherein, said set of signals being represented by a set of respective symbols, parameters of said mixture model are optimized for defining optimized symbols positions and probabilities to be used at the transmitter, in order to optimize the capacity of the approximated channel, the optimized symbols positions and probabilities being then provided to the transmitter and/or to the receiver.

13. A system comprising at least a transmitter, a receiver, a communication channel between the transmitter and the receiver, and a channel conditional probability distribution estimator for implementing the method as claimed in claim 1, wherein the estimator: Obtains information relative to the transmitted signal (x; p.sub.X(x)) used at the transmitter; Receives a signal y transmitted through the channel and characterized by a channel conditional probability distribution p.sub.Y|X(y|x); Estimates a channel conditional probability distribution from the received signal, the channel conditional probability distribution being approximated by a model of distributions relying on a basis of functions; Sends the estimated channel conditional probability distribution to the receiver and/or to the transmitter.

14. The system of claim 13, further comprising an input signal probability distribution optimizer for performing the method: wherein said set of signals being represented by a set of respective symbols, parameters of said mixture model are optimized for defining optimized symbols positions and probabilities to be used at the transmitter, in order to optimize the capacity of the approximated channel, the optimized symbols positions and probabilities being then provided to the transmitter and/or to the receiver, and wherein the input signal distribution optimizer: Obtains the estimated channel conditional probability distribution p.sub.Y|X(y|x) Optimizes the input distribution p.sub.X(x) Sends the optimized input signal distribution to the transmitter and/or to the receiver, and to the channel conditional probability distribution estimator.

15. A computer program comprising instructions for implementing the method as claimed in claim 1, when such instructions are executed by a processor.

Description

BRIEF DESCRIPTION OF DRAWINGS

[0055] More details and advantages of the invention will be understood when reading the following description of embodiments given below as examples, and will appear from the related drawings where:

[0056] FIG. 1 shows an overview of a general system according to an example of implementation.

[0057] FIG. 2 shows steps implemented by the conditional probability distribution estimator 13 illustrated in FIG. 1, as an example of embodiment.

[0058] FIG. 3 shows steps implemented by the input signal distribution optimizer 14 illustrated in FIG. 1, as an example of optional embodiment.

[0059] FIG. 4 shows steps implemented by the transmitter 10 illustrated in FIG. 1, as an example of embodiment.

[0061] FIG. 5 shows steps implemented by the receiver 11 illustrated in FIG. 1, as an example of embodiment.

[0062] FIG. 6 illustrates estimation of the channel probability distribution at the receiver from y and x (without the optimizer 14).

[0063] FIG. 7 shows an example of implementation of a Dirichlet process and Gibbs sampling for x belonging to {+1, −1, . . . , −3}.

[0064] FIG. 8 shows a system according to an optional embodiment of the invention involving an input optimizer 14.

[0065] FIG. 9 shows an implementation to estimate the channel probability distribution at the receiver from y and the input signal distribution p.sub.X(x) optimization.

[0066] FIG. 10 shows an example of implementation of a modified Dirichlet process and Gibbs sampling for x belonging to {+1, −1, . . . , −3}, according to the embodiment of FIG. 9.

[0067] FIG. 11 shows details of the steps which can be implemented to estimate the channel probability distribution, and can represent thus an example of an algorithm of a possible computer program according to the present disclosure.

DESCRIPTION OF EMBODIMENTS

[0068] A context of the present disclosure is described hereafter where, in addition to the estimation of the channel conditional probability distribution, an optimization of the channel conditional probability distribution is furthermore performed according to an optional embodiment. These two steps (estimation and optimization) can then rely either on the knowledge of the input signal, or of its probability distribution, or on no knowledge of the input signal at all (then performing a so-called blind “tracking” phase).

[0069] Thus, the input signal probability distribution is to be shared with the transmitter, so as to optimize the channel capacity. The input signal probability distribution is to be shared with the receiver, so as to optimize the receiver's performance. The input signal probability distribution is to be shared with the channel conditional probability distribution estimator, so as to improve the channel conditional probability distribution estimation performance.

[0070] By improving the input signal probability distribution according to the channel knowledge, the channel capacity is improved, which further allows improving the channel conditional probability distribution estimation, and so on. Thus, an iterative approach of jointly optimizing the channel conditional probability distribution estimation and the input signal probability distribution is preferred. Another advantage of this approach is that it allows tracking changes of the input signal probability distribution, provided their variation speed is not too high.

[0071] The channel conditional probability distribution estimator can implement an approximation of the channel conditional probability distribution p.sub.Y|X(y|x) by using a functional basis of probability distributions, preferably from the exponential family. Indeed, it is shown below that specific calculations are easier to manipulate with such exponential functions.

[0072] As a general approach, a transmitter uses a finite set of symbols on a communication channel characterized by a conditional probability distribution. The conditional probability distribution can be approximated, for each possible symbol sent by the transmitter, by a mixture model with mixing probability distributions from the exponential family. The conditional probability distribution for each possible transmitted symbol of the finite set of symbols is estimated from at least output symbols, using a collapsed Gibbs sampling relying on a Dirichlet process. The result is a mixture model whose number of components is learnt directly from the observations. The exponential family makes it possible to obtain a simpler implementation of the Gibbs sampling.
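For illustration only, the mixture-model approximation described above can be sketched as follows; the Gaussian choice for the exponential-family base g, the helper names, and all numerical values are hypothetical assumptions, not mandated by the present disclosure:

```python
import math

def gaussian_pdf(y, mean, var):
    """Scalar Gaussian density, used here as one possible exponential-family base g(y | theta)."""
    return math.exp(-(y - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_density(y, weights, thetas):
    """Evaluate the approximation p_{Y|X}(y | x) ~= sum_j w_j g(y | theta_j)
    for one fixed transmitted symbol x."""
    return sum(w * gaussian_pdf(y, m, v) for w, (m, v) in zip(weights, thetas))

# Hypothetical two-component approximation for one transmitted symbol:
weights = [0.7, 0.3]
thetas = [(1.0, 0.5), (-0.2, 1.0)]   # (mean, variance) pairs, i.e. the theta_j parameter sets
density_at_zero = mixture_density(0.0, weights, thetas)
```

Any other exponential-family base (for example a Nakagami density) could play the role of g in the same manner, with its own parameter set θ.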

[0073] The conditional probability distribution estimation is preferably computed at the receiver.

[0074] In a first embodiment, called “training mode”, it is considered that the symbols x are known at the receiver.

[0075] In a second embodiment, called “tracking mode”, the symbols x are unknown at the receiver. However, it is considered that the input signal probability distribution p.sub.X(x) is known and taken into account during the conditional probability distribution estimation step.

[0076] The two embodiments can be interleaved in time according to the cases where pilot symbols are sent, where data has been correctly decoded and can be used as pilot signals for the channel estimation and approximation, or where data has not been correctly decoded.

[0077] In addition, the parameters of the mixture model as estimated by the conditional probability distribution estimator can be provided to an input distribution optimizer (according to the aforesaid optional embodiment) that defines optimized symbols positions and probabilities to be used at the transmitter for optimizing the capacity of the approximated channel. The optimized symbols positions and probabilities are then provided to the transmitter and receiver.
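As a hedged illustration of such a capacity optimization (the disclosure does not mandate any particular optimizer, and the symbol positions are kept fixed here), the classical Blahut-Arimoto iteration computes the capacity-achieving input probabilities of a discrete channel once its transition probabilities p(y|x) have been estimated:

```python
import math

def blahut_arimoto(P, iters=200):
    """Capacity-achieving input distribution of a discrete channel with transition
    matrix P[x][y] = p(y|x), via the classical Blahut-Arimoto iteration.
    (Shown only as one possible optimizer, not the one required by the disclosure.)"""
    M = len(P)
    p = [1.0 / M] * M                      # start from the uniform input distribution
    for _ in range(iters):
        # q[y]: output distribution induced by the current input distribution
        q = [sum(p[x] * P[x][y] for x in range(M)) for y in range(len(P[0]))]
        # unnormalized update: p(x) * exp(KL divergence of P(.|x) from q)
        c = [p[x] * math.exp(sum(P[x][y] * math.log(P[x][y] / q[y])
                                 for y in range(len(P[0])) if P[x][y] > 0))
             for x in range(M)]
        s = sum(c)
        p = [v / s for v in c]
    return p

# Binary symmetric channel with crossover 0.1: capacity is met by the uniform input.
bsc = [[0.9, 0.1], [0.1, 0.9]]
p_opt = blahut_arimoto(bsc)
```

Optimizing the symbol positions themselves, as mentioned above, would require an outer search around such an inner iteration over the input probabilities.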

[0078] Referring to FIG. 1, a system according to a possible embodiment includes a transmitter 10, a receiver 11, a transmission channel 12, a channel conditional probability distribution estimator 13 and optionally an input signal probability distribution optimizer 14. The transmitter 10 transmits messages conveyed by a signal belonging to a finite set of signals, each associated with a transmission probability according to the (estimated or optionally optimized) input signal probability distribution. The transmitter 10 takes the messages and optionally the estimated (or optionally optimized) input signal probability distribution as inputs and outputs the signal to be transmitted on the channel 12. The channel 12 takes the transmitted signal as an input and outputs a received signal which is processed at the receiver 11 in order to decode the transmitted message.

[0079] A channel conditional probability distribution of the probability of outputting a given signal, when the input is fixed, is computed. The probability distribution can generally be defined on a discrete or continuous input and/or output alphabet. The continuous output alphabet is preferably considered, in which case the probability distribution is called a “probability density function”.

[0080] The channel conditional probability distribution estimator 13 takes the received signal and the input signal or its estimated (or optionally optimized) probability distribution as inputs and outputs the channel conditional probability distribution. It is preferably located in the receiver 11, in the transmitter 10, or in an external computing device.

[0081] The input signal probability distribution optimizer 14 can then take the conditional probability distribution estimation as an input and output the optimized input signal probability distribution to the transmitter 10 and receiver 11. The conditional probability distribution estimation can then be used in this embodiment for computing the optimized input signal probability distribution at the input signal probability distribution optimizer 14. Furthermore, it is shown below that the optimization can be made more efficient when the conditional probability distribution estimation is approximated by a mixture of exponential distributions.

[0082] The receiver 11 takes the received signal, the optimized input signal probability distribution and the estimated channel conditional probability distribution as inputs and performs an estimation of the message conveyed in the received signal.

[0083] In this example of embodiment, the conditional probability distribution estimator 13 implements preferably the following steps as illustrated in FIG. 2: [0084] S21: Obtaining information relative to the transmitted symbols x, such as the symbols x themselves or the input signal distribution p.sub.X(x) used at the transmitter 10. [0085] The input signal distribution p.sub.X(x) can be provided by the input signal distribution optimizer, or [0086] The input signal distribution p.sub.X(x) can be arbitrarily fixed by the transmitter (as presented in an embodiment below); [0087] S22: Receiving a signal y transmitted through a channel characterized by channel conditional probability distribution p.sub.Y|X(y|x). [0088] The channel conditional probability distribution p.sub.Y|X(y|x) can be known, or [0089] The channel conditional probability distribution p.sub.Y|X(y|x) is unknown; [0090] S23: Estimating the channel conditional probability distribution, when it is unknown, from the received signal, the channel conditional probability distribution being approximated by a model of exponential distributions presented in detail below; [0091] S24: Sending the estimated channel conditional probability distribution to the input signal distribution optimizer 14 (as a preferred embodiment) and to the receiver 11.

[0092] The input signal distribution optimizer 14 can implement in an exemplary embodiment the steps illustrated in FIG. 3, of: [0093] S31: Obtaining the estimated channel conditional probability distribution p.sub.Y|X(y|x) [0094] S32: Optimizing the input distribution p.sub.X(x) [0095] S33: Sending the optimized input signal distribution to the transmitter 10, receiver 11 and channel conditional probability distribution estimator 13.

[0096] The transmitter 10 preferably implements the steps illustrated in FIG. 4, of: [0097] S41: Obtaining the message to be sent [0098] S42: Obtaining the optimized input signal distribution p.sub.X(x) [0099] S43: Creating the signal to be transmitted, which conveys the message, according to the optimized input signal distribution p.sub.X(x). This can be done by using prior-art techniques, usually involving a distribution matcher and a coded modulation such as: [0100] LDPC (for “Low Density Parity Check”) as disclosed for example in: G. Bocherer, F. Steiner, and P. Schulte, “Bandwidth efficient and rate-matched low-density parity-check coded modulation” IEEE Trans. Commun., vol. 63, no. 12, pp. 4651-4665, December 2015, [0101] or Polar Codes as disclosed for example in: [0102] T. Prinz, P. Yuan, G. Bocherer, F. Steiner, O. iscan, R. Bohnke, W. Xu, “Polar coded probabilistic amplitude shaping for short packets” Int. Workshop on Signal Processing Advances in Wireless Communications (SPAWC), pp. 1-5, July 2017, [0103] S44: And transmitting the signal on the channel 12.

[0104] The receiver 11 can implement the steps illustrated in FIG. 5, of: [0105] S51: Obtaining the received signal y [0106] S52: Obtaining the optimized input signal distribution p.sub.X(x) [0107] S53: Obtaining the estimated channel conditional probability distribution p.sub.Y|X(y|x) [0108] S54: Estimating the transmitted message.

[0109] The purpose of the conditional probability distribution estimator 13 is to estimate the distribution p.sub.Y|X(y|x) based on an approximation of the channel probability distribution relying on a decomposition into a basis of probability distribution functions g(y|x; θ), where θ is a parameter set. The distribution functions are from the exponential family and the parameters are essentially the mean and variance in the scalar case, and more generally the mean vector and covariance matrix in the multivariate case. For example, the parameter set θ may contain the mean vector and covariance matrix of a so-called “multivariate normal distribution”. In another example, it may contain the shape parameter and spread parameter of a Nakagami distribution.

[0110] In general the considered functions g(y|x; θ) are written in the form


g(y|x;θ)=h(y,θ)exp(x.sup.Ty−a(x,θ)),

[0111] where h(y, θ) is a function of y and θ, and a(x, θ) is the moment generating function (x and y being vectors in this general case).

[0112] The conjugate prior for the above exponential model is of the form of:


g(θ|λ)=g′(θ)exp(λ.sub.1.sup.Tθ−λ.sub.2a(θ)−b(λ.sub.1,λ.sub.2)),

[0113] in which λ={λ.sub.1, λ.sub.2}, g′(θ) is a scalar function of θ, the parameter λ.sub.1 has the same dimension as θ, λ.sub.2 is a scalar, and b(λ.sub.1, λ.sub.2) is the moment generating function of the prior, chosen such that the prior integrates to one.
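As an illustrative sketch of this conjugacy (the scalar sufficient statistic and the starting hyperparameter values below are hypothetical choices), observing samples simply shifts the hyperparameters additively: λ.sub.1 accumulates the sufficient statistics of the observations while λ.sub.2 counts them:

```python
def conjugate_update(lam1, lam2, observations):
    """Posterior hyperparameters of the conjugate prior g(theta | lam) for an
    exponential-family likelihood of the form h(y) exp(theta * y - a(theta)):
    lam1 accumulates the sufficient statistics, lam2 counts the observations."""
    return lam1 + sum(observations), lam2 + len(observations)

# Hypothetical weak prior (lam1 = 0, lam2 = 1) absorbing three received samples:
lam1, lam2 = conjugate_update(0.0, 1.0, [0.9, 1.1, 1.0])
# lam1 == 3.0, lam2 == 4.0
```

It is this purely additive update that later keeps the integration steps of the Gibbs sampling in closed form, as discussed in the description of FIG. 11.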

[0114] Thus, the objective of the conditional probability distribution estimator 13 is to find the best approximation of p.sub.Y|X(y|x) in the aforementioned basis:


p.sub.Y|X(y|x)≅Σ.sub.j=1.sup.Nw.sub.jg(y|x;θ.sub.j)

[0115] where N, and the sets {θ.sub.j}, {w.sub.j} are parameters to be estimated. The weights w.sub.j are scalar parameters.

[0116] The function p.sub.Y|X(y|x) is bivariate, with variables x and y which in general span a continuous domain.

[0117] In a first case, it is assumed that the symbols x belong to a finite alphabet Ω={ω.sub.1, . . . , ω.sub.M} of cardinality M, which is especially fixed and known. In a first embodiment according to the “training mode”, the symbols x are known at the receiver. In a second embodiment according to the “tracking mode”, the symbols x are unknown at the receiver but the input signal probability distribution p.sub.X(x) is known.

[0118] In a second case, the symbols x might belong to a finite set of signals, but this set is not especially fixed, nor known. In a third embodiment according to the “training mode”, the symbols x are known at the receiver. In a fourth embodiment according to the “tracking mode”, the symbols x are unknown.

[0119] In the first case (common to the first and second embodiments above), the conditional probability distribution is fully characterized by

[00001] p.sub.Y,X(y,x)=Σ.sub.m=1.sup.Mp.sub.Y|X(y|x=ω.sub.m)p.sub.X(x=ω.sub.m)

[0120] where p.sub.X(x=ω.sub.m) is null everywhere except in ω.sub.m, where the value is equal to the probability that the symbol ω.sub.m is transmitted. Thus, it is sought to approximate the M functions p.sub.Y|X(y|x=ω.sub.m) of only y for the M possible values of x.

[0121] This can be done in parallel by using M estimators, as will be described below.
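The routing of observations to the M parallel estimators can be sketched as follows (the two-symbol alphabet and the sample values are purely hypothetical); each bucket then feeds one of the M per-symbol estimators:

```python
def split_by_symbol(xs, ys, alphabet):
    """Route each received sample y_i to the estimator of its transmitted symbol x_i,
    as done in the training mode where the symbols x are known at the receiver."""
    buckets = {symbol: [] for symbol in alphabet}
    for x, y in zip(xs, ys):
        buckets[x].append(y)
    return buckets

alphabet = [+1, -1]                    # illustrative 2-symbol alphabet (M = 2)
xs = [+1, -1, +1, +1]                  # known pilot symbols
ys = [0.9, -1.2, 1.1, 0.8]             # corresponding channel outputs
buckets = split_by_symbol(xs, ys, alphabet)
# buckets[+1] == [0.9, 1.1, 0.8]; buckets[-1] == [-1.2]
```

Each bucket is then processed independently, yielding the parameters N.sub.m, {θ.sub.m,j}, {w.sub.m,j} of one approximated conditional p.sub.Y|X(y|x=ω.sub.m).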

[0122] Thus, the following approximation is now considered:


p.sub.Y|X(y|x=ω.sub.m)≅Σ.sub.j=1.sup.N.sup.mw.sub.m,jg(y|θ.sub.m,j)  (1)

[0123] Here, each transition depends on the transmitted symbol (M possibilities, with M the cardinality of the known set of symbols in the first case shared by the first and second embodiments).

[0124] Solving the estimation problem of equation (1) is in general difficult. Indeed, the number N.sub.m providing the best approximation is unknown, which makes the use of deterministic methods, such as the expectation-maximization algorithm, difficult.

[0125] Several embodiments are possible for the conditional probability distribution estimation and differ according to assumptions on the knowledge of x or its statistics.

[0126] Embodiments are described below especially for the conditional probability distribution estimation performed by the estimator 13. As will be apparent below, these embodiments do not necessarily need any optimizer 14 (which is optional as indicated above).

[0127] In the first embodiment related to the “training mode”, it is assumed that the symbols x sent by the transmitter are known at the conditional probability distribution estimator 13. These symbols are used as pilot symbols in order to learn the best approximation as shown in FIG. 6.

[0128] In this first embodiment related to the training mode, as well as in a second embodiment related to a tracking mode detailed below, the optimizer 14 is not necessary, and the estimation of p.sub.Y|X(y|x) is ultimately needed only at the receiver 11.

[0129] The training mode is activated at the transmitter 10 on given time/frequency resources known at the receiver 11, which also knows the sent message or equivalently x. Therefore, referring to FIG. 6, the symbols x are provided especially to the estimator 13. Typically, the knowledge of x can also be obtained at the receiver after a correct decoding (data-aided training). This is made possible by checking the correctness of the decoded message through, for example, a Cyclic Redundancy Check.

[0130] When x is perfectly known at the receiver, solving equation (1) can be achieved by using the Gibbs-sampling-based method disclosed for example in

[0131] NEAL00: Neal, Radford M. “Markov chain sampling methods for Dirichlet process mixture models.” Journal of computational and graphical statistics 9.2 (2000): 249-265.

[0132] It provides a computationally efficient, high-performance solution, especially when the probability distribution functions g(y|x=ω.sub.m; θ) belong to the exponential family. Gibbs sampling is a randomized algorithm, i.e., it uses generated random values to perform an approximation of a statistical inference instead of relying on a deterministic approach. In general, this involves an iterative approach. Using sampling reduces the complexity by manipulating a finite number of elements instead of continuous functions. Using Gibbs sampling in particular allows efficient manipulation of the complicated multivariate functions representing the multivariate probability distributions used in statistical inference methods. Thus, the main principle for performing the estimation in (1) is to sample (i.e., obtain representative samples) from the posterior distribution on the parameters {θ.sub.m,j}, {w.sub.m,j} knowing observation samples y. In particular, a Dirichlet process can be used as the prior probability distribution in infinite mixture models. It is a distribution over the possible parameters of a multinomial distribution and has the advantageous property of being the conjugate prior of the multinomial distribution. This property helps in simplifying some steps of the Gibbs sampling. Furthermore, it is known to lead to few dominant components. Thus, by finding these N.sub.m dominant components, the approximation is compact and accurate.
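A minimal sketch of such a collapsed Gibbs sampler is given below for one of the M estimators, assuming scalar Gaussian components with known variance and a conjugate Normal prior on the component means; these modeling choices, the hyperparameter values, and the helper names are illustrative assumptions, not the only cases covered by the present disclosure:

```python
import math
import random

def normal_pdf(y, mean, var):
    return math.exp(-(y - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def predictive(y, n, s, sigma2, mu0, tau0sq):
    """Closed-form posterior predictive p(y | n cluster members with sum s) for a
    Gaussian likelihood N(theta, sigma2) with a conjugate prior theta ~ N(mu0, tau0sq).
    With n = 0 this is the prior predictive used for a candidate new cluster."""
    post_var = 1.0 / (1.0 / tau0sq + n / sigma2)
    post_mean = post_var * (mu0 / tau0sq + s / sigma2)
    return normal_pdf(y, post_mean, post_var + sigma2)

def collapsed_gibbs(ys, alpha=1.0, sigma2=0.25, mu0=0.0, tau0sq=4.0, sweeps=50, seed=0):
    """Resample each cluster index c_i from its Polya-urn posterior; the number of
    clusters (mixture components) is learnt from the observations, not fixed."""
    rng = random.Random(seed)
    c = [0] * len(ys)                          # all samples start in one cluster
    count, total = {0: len(ys)}, {0: sum(ys)}
    for _ in range(sweeps):
        for i, y in enumerate(ys):
            k = c[i]                           # remove sample i from its cluster
            count[k] -= 1
            total[k] -= y
            if count[k] == 0:
                del count[k], total[k]
            labels = list(count)
            # existing cluster: weight n_k * predictive; new cluster: alpha * prior predictive
            weights = [count[lab] * predictive(y, count[lab], total[lab], sigma2, mu0, tau0sq)
                       for lab in labels]
            labels.append(max(count, default=-1) + 1)
            weights.append(alpha * predictive(y, 0, 0.0, sigma2, mu0, tau0sq))
            r = rng.random() * sum(weights)
            acc = 0.0
            for lab, w in zip(labels, weights):
                acc += w
                if r <= acc:
                    c[i] = lab
                    break
            count[c[i]] = count.get(c[i], 0) + 1
            total[c[i]] = total.get(c[i], 0.0) + y
    return c, count
```

Running it on well-separated observations for one transmitted symbol, e.g. `collapsed_gibbs([1.0, 1.1, 0.9, -1.0, -0.9, -1.1])`, typically concentrates the assignments on a few dominant clusters, in line with the compact approximations mentioned above.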

[0133] Some adaptation is needed to apply the teaching of NEAL00 in the present context. x is assumed to belong to a finite alphabet which is known at the receiver. Each observation y.sub.i is associated with a given symbol x.sub.i out of M possible values of x, and feeds one out of M parallel estimators according to the value of x.sub.i (each estimator being one of the three estimators described in [NEAL00]). Finally, M functions p.sub.Y|X(y|x=ω.sub.m) are obtained out of the M estimators. In particular, the parameters N.sub.m, {θ.sub.m,j}, {w.sub.m,j} are the output of each estimator and characterize the M functions p.sub.Y|X(y|x=ω.sub.m).

[0134] FIG. 7 is a visual example of implementation where x belongs here to {+1, −1, . . . , −3}.

[0135] Details of an example of embodiment are given below relative to the training mode, as shown in FIG. 11. In the first embodiment, it is possible to select only the received signals y when they are associated with a transmitted symbol x such that x=ω.sub.m.

[0136] The conditional probability distribution estimation is based on an approximation of the channel probability distribution relying on a decomposition into a basis of probability distribution functions g(y|θ.sub.m,j), where θ.sub.m,j is a parameter set, in step S10. The distribution functions are from the exponential family and the parameters are essentially the mean and variance in the scalar case, and more generally the mean vector and covariance matrix in the multivariate case.

[0137] In step S11, the conditional probability distribution estimator can find the best approximation of p.sub.Y|X(y|x=ω.sub.m) in the aforementioned basis


p.sub.Y|X(y|x=ω.sub.m)≅Σ.sub.j=1.sup.N.sup.mw.sub.m,jg(y|θ.sub.m,j)

[0138] where N.sub.m, and the sets {θ.sub.m,j}, {w.sub.m,j} are parameters to be estimated in step S12.

[0139] Solving this estimation problem is in general difficult. Indeed, the number N.sub.m providing the best approximation is unknown, which makes the use of deterministic methods, such as the expectation-maximization algorithm, difficult.

[0140] However, the use of clusters described below with reference to steps S121 to S124 of FIG. 11, and more generally the method described in NEAL00 based on Gibbs sampling, provides a computationally efficient, high-performance solution, especially when the probability distribution functions g(y|x=ω.sub.m; θ.sub.m,j) belong to the exponential family. Gibbs sampling is a randomized algorithm: it uses generated random values to perform an approximation of a statistical inference instead of relying on a deterministic approach. In general, this involves an iterative approach.

[0141] Using sampling reduces the complexity by manipulating a finite number of elements instead of continuous functions. Using Gibbs sampling in particular allows managing efficiently the complicated multivariate functions representing the multivariate probability distribution used in statistical inference methods. Thus, the main principle for performing the estimation in (1) is to sample (i.e., obtain representative samples) from the posterior distribution on the parameters {θ.sub.m,j}, {w.sub.m,j} knowing the observation samples y at the input of the m-th estimator, i.e., after selection of the observed symbols y associated with the transmitted symbol x=ω.sub.m as known by the receiver: for time slots where the transmitted symbol is known to be x=ω.sub.m, the observed symbol feeds the m-th estimator.

[0142] In particular, a Dirichlet process can be used as the prior probability distribution in infinite mixture models. It is a distribution over the possible parameters of a multinomial distribution and has the property of being the conjugate prior of the multinomial distribution (this helps in simplifying some steps of the Gibbs sampling, as explained in [NEAL00]). Furthermore, it is known to lead to few dominant components. Thus, by finding these N.sub.m dominant components, the approximation is compact and accurate.

[0143] In order to determine these components in a tractable way, Monte Carlo methods (and more specifically Markov chain Monte Carlo) can be applied, allowing samples to be drawn from probability distributions. More specifically, Gibbs sampling provides a relevant class of algorithms to implement the present disclosure. The Gibbs sampling method in general involves a numerical integration step, which is computationally demanding. Fortunately, when the probability distribution functions g(y|x; θ) admit conjugate prior distributions (which is the case for the exponential family), the integration can be performed in a closed-form expression, which significantly reduces the computational complexity. Since p.sub.Y|X(y|x=ω.sub.m) is a weighted sum of g(y|x; θ) functions, its conjugate prior is also known and computed efficiently. At the output of the Gibbs sampling, the value N.sub.m and the sets {θ.sub.m,j}, {w.sub.m,j} are estimated, as shown as a general step S12 of FIG. 11.
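As an illustration of the closed-form integration available with a conjugate prior, the following minimal Python sketch computes a posterior predictive density for a Gaussian base with known variance and a conjugate Gaussian prior on its mean, without numerical integration (the function name and hyperparameter choices are assumptions of this sketch, not part of the present disclosure):

```python
import math

def gaussian_posterior_predictive(y_new, data, mu0=0.0, tau0=10.0, sigma=1.0):
    """Closed-form posterior predictive p(y_new | data) for a Gaussian
    likelihood with known variance sigma^2 and a conjugate Gaussian
    prior N(mu0, tau0^2) on the mean; (mu0, tau0) play the role of the
    hyperparameter of the conjugate prior."""
    n = len(data)
    prec = 1.0 / tau0**2 + n / sigma**2           # posterior precision of the mean
    mu_n = (mu0 / tau0**2 + sum(data) / sigma**2) / prec
    var = 1.0 / prec + sigma**2                   # predictive variance
    return math.exp(-0.5 * (y_new - mu_n)**2 / var) / math.sqrt(2.0 * math.pi * var)
```

With no data, the function reduces to the prior predictive density; as observations accumulate, the predictive concentrates around the empirical mean.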

[0144] In the present case, the collapsed Gibbs sampling of [NEAL00] is used in a particular embodiment shown in FIG. 11 and is applied in the particular case of the m-th estimator by: [0145] In step S10, considering a base g(y|θ) to be from the exponential distribution family. [0146] This allows considering the prior on θ to be conjugate to the exponential distribution with the parameter λ. This is a mathematical trick that allows having closed-form expressions in the following steps. [0147] λ={λ.sub.1, λ.sub.2} denotes the hyperparameter for the conjugate prior on the parameter. [0148] In step S11, a selection is performed of the observed symbols {y.sub.n} associated to the transmitted symbol x=ω.sub.m, where y.sub.n is the n-th out of the selected observed symbols. The expression x.sub.1:n=ω.sub.m denotes the fact that all transmitted symbols considered here are the same and equal to ω.sub.m.

[0149] Then, the general step S12 can be decomposed into sub-steps S121 to S124, as follows: [0150] In step S121, considering N.sub.m clusters for the parameter sets {θ.sub.m,j}, {w.sub.m,j}: [0151] A cluster is used to group received symbols together. The clusters are implemented by associating an integer index from 1 to N.sub.m with each group of symbols, as described below. This cluster assignment approach can be justified by the fact that each received symbol is most likely associated to one out of the N.sub.m modes composing p.sub.Y|X(y|x=ω.sub.m) in (1). [0152] The cluster assignment index for the i-th observed symbol is c.sub.i. [0153] Let c.sub.−i be the cluster assignment indices for all observed symbols but the i-th. [0154] In step S122, computing the posterior distribution p(c.sub.i=k|y.sub.1:n, c.sub.−i, λ, x.sub.1:n=ω.sub.m), where y.sub.1:n represents the input symbols of the received signal Y, under the assumption that the transmitted symbol is ω.sub.m. It can be shown that


p(c.sub.i=k|y.sub.1:n,c.sub.−i,λ,x.sub.1:n=ω.sub.m)=p(c.sub.i=k|c.sub.−i)p(y.sub.i|y.sub.1:i−1,y.sub.i+1:n,c.sub.−i,c.sub.i=k,λ,x.sub.1:n=ω.sub.m)  p(c.sub.i=k|c.sub.−i) is for example computed thanks to a Polya urn scheme  p(y.sub.i|y.sub.1:i−1, y.sub.i+1:n, c.sub.−i, c.sub.i=k, λ, x.sub.1:n=ω.sub.m)=∫p(y.sub.i|x.sub.i=ω.sub.m, θ)p(θ|y.sub.1:i−1, y.sub.i+1:n, c.sub.−i, c.sub.i=k, λ, x.sub.1:n=ω.sub.m)dθ, which is computable because p(y.sub.i|x.sub.i=ω.sub.m, θ) follows the approximation Σ.sub.j=1.sup.N.sup.mw.sub.m,jg(y|θ.sub.m,j), in which g(y|θ) is from the exponential distribution family, which implies that the conjugate prior is known. [0155] In step S123, sampling from the posterior distribution is performed in order to update the parameters {θ.sub.m,j}, {w.sub.m,j}. [0156] It is also preferable to check whether a new cluster must be created. This helps determine the value of N.sub.m that provides the best approximation of the channel conditional probability distribution. This is done in step S124, by first starting a new cluster, which is defined as


p(y.sub.i|x.sub.i=ω.sub.m,P.sub.0)=∫p(y.sub.i|x.sub.i=ω.sub.m,θ)dP.sub.0, [0157] where P.sub.0 is the conjugate prior to p(y|x=ω.sub.m; θ.sub.m,j) (with parameter λ). It is then randomly decided, according to this probability, whether to create a new cluster.

[0158] At the output of this procedure, the parameters N.sub.m, {θ.sub.m,j}, {w.sub.m,j} are estimated and provide the approximation of the channel conditional probability distribution p.sub.Y|X(y|x=ω.sub.m) from (1). This procedure is repeated or performed in parallel for the M estimators.
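Steps S10 to S124 above can be sketched as follows for a one-dimensional Gaussian base with known variance and a conjugate Gaussian prior on its mean, standing in for the exponential-family base g with hyperparameter λ (a hypothetical, simplified illustration of collapsed Gibbs sampling, not the implementation of [NEAL00]; a small α favors few dominant clusters):

```python
import math
import random

def predictive(y, members, mu0=0.0, tau0=5.0, sigma=0.5):
    # Closed-form p(y | members): Gaussian likelihood with known variance
    # sigma^2, conjugate Gaussian prior N(mu0, tau0^2) on the mean.
    n = len(members)
    prec = 1.0 / tau0**2 + n / sigma**2
    mu_n = (mu0 / tau0**2 + sum(members) / sigma**2) / prec
    var = 1.0 / prec + sigma**2
    return math.exp(-0.5 * (y - mu_n)**2 / var) / math.sqrt(2.0 * math.pi * var)

def collapsed_gibbs(ys, alpha=1.0, iters=50, seed=0):
    rng = random.Random(seed)
    c = [0] * len(ys)                      # cluster assignment index c_i
    for _ in range(iters):
        for i, y in enumerate(ys):
            # Condition on c_-i: all assignments except the i-th (collapsed over theta).
            others = [(yj, cj) for j, (yj, cj) in enumerate(zip(ys, c)) if j != i]
            labels = sorted({cj for _, cj in others})
            choices, weights = [], []
            for k in labels:               # existing clusters: n_k * predictive (Polya urn)
                members = [yj for yj, cj in others if cj == k]
                weights.append(len(members) * predictive(y, members))
                choices.append(k)
            weights.append(alpha * predictive(y, []))   # new cluster (step S124)
            choices.append(max(labels) + 1)
            r = rng.random() * sum(weights)             # sample c_i (step S123)
            acc = 0.0
            for k, w in zip(choices, weights):
                acc += w
                if r <= acc:
                    c[i] = k
                    break
    return c
```

On well-separated data the returned assignment typically recovers one cluster per mode, from which N.sub.m, {θ.sub.m,j} and {w.sub.m,j} can be read off as the cluster count, the per-cluster statistics, and the relative cluster sizes.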

[0159] The second embodiment is given below, implementing a “tracking mode” with statistical knowledge. This mode is efficiently used after the first embodiment. The transmitted symbols x cannot be known in this embodiment but, as presented in FIG. 9 below, the knowledge of p.sub.X(x) can still be used in order to evaluate (1) and is then provided to the estimator 13 (as well as to the receiver 11 and the transmitter 10). In fact, there is no selection step performed from the observed symbols. However, the M estimators are still computed and all consider any observed symbol as an input. The purpose of the Bayesian inference is to weight the observed symbols according to the belief that they correspond to a transmission of the symbol x=ω.sub.m associated with the m-th estimator.

[0160] The main difference with the training mode is that the sent symbol x is unknown. However, the value of x can be inferred with a Bayesian approach, i.e., each observation y contributes to the update of p.sub.Y|X(y|x=ω.sub.m) with a weight corresponding to the probability of y being associated to ω.sub.m, from the current estimate of the conditional probability distribution.
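This Bayesian weighting can be sketched as follows, where `likelihood` stands for the current estimate of p.sub.Y|X (the function and argument names are illustrative assumptions of this sketch):

```python
def responsibilities(y, constellation, p_x, likelihood):
    """Posterior probability that observation y was produced by each
    constellation symbol omega_m, by Bayes' rule, given the prior p_X
    and the current conditional-density estimate `likelihood(y, x)`."""
    joint = [p_x[m] * likelihood(y, x) for m, x in enumerate(constellation)]
    total = sum(joint)
    return [j / total for j in joint]
```

Each estimator m would then update its parameters with the observed symbol weighted by the m-th returned value.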

[0161] Details are given below regarding the second embodiment implementing the aforesaid tracking mode.

[0162] A clustering technique will be used. Let the cluster allocation labels be s.sub.i=j if θ.sub.i belongs to the j-th cluster. Let ρ.sub.n=[s.sub.1, . . . , s.sub.n].sup.T denote the vector of cluster allocation labels. Using the prediction in the Dirichlet Process (DP) by a covariate-dependent urn scheme, one obtains:

[00002] s.sub.n+1|ρ.sub.n, x.sub.1:n+1˜[α/(α+n)]·[p(x.sub.n+1=ω.sub.m)/p(x.sub.n+1=ω.sub.m|ρ.sub.n, x.sub.1:n)]δ.sub.k+1+Σ.sub.j=1.sup.k[n.sub.j/(α+n)]·[p(x.sub.n+1=ω.sub.m)/p(x.sub.n+1=ω.sub.m|ρ.sub.n, x.sub.1:n)]δ.sub.j,

[0163] In order to update, for each estimator, the conditional probability distribution p.sub.Y|X(y|x=ω.sub.m) from the observed symbol, one can compute

[00003] p(y.sub.n+1|y.sub.1:n, x.sub.1:n, x.sub.n+1=ω.sub.m)=Σ.sub.ρ.sub.np(ρ.sub.n|y.sub.1:n, x.sub.1:n, x.sub.n+1=ω.sub.m)·([α/(α+n)]·[p(x.sub.n+1=ω.sub.m)/p(x.sub.n+1=ω.sub.m|ρ.sub.n, x.sub.1:n)]·ζ.sub.0,y(y|x.sub.n+1=ω.sub.m)+Σ.sub.j=1.sup.k[n.sub.j/(α+n)]·[p(x.sub.n+1=ω.sub.m)/p(x.sub.n+1=ω.sub.m|ρ.sub.n, x.sub.1:n)]·ζ.sub.j,y(y|x.sub.n+1=ω.sub.m))  (32)

where ζ.sub.0,y(y|x.sub.n+1)=∫p(y|x.sub.n+1, θ)dP.sub.0, ζ.sub.j,y(y|x.sub.n+1)=p(y|x.sub.n+1, x.sub.j*, y.sub.j*).

[0164] The clusters can then be updated similarly to the first embodiment. Details of the principle of the prediction in the Dirichlet Process (DP) by a covariate-dependent urn scheme are given, for example, in: “Improving prediction from Dirichlet process mixtures via enrichment”, Sara K. Wade, David B. Dunson, Sonia Petrone, Lorenzo Trippa, Journal of Machine Learning Research (Nov. 15, 2013).

[0165] In the third embodiment, the symbols x are not assumed to belong to a finite alphabet. Thus, the bi-variate conditional probability p.sub.Y|X(y|x) must be learnt for all possible x values.

[0166] The conditional probability distribution estimation is based on an approximation of the channel probability distribution relying on a decomposition into a basis of probability distribution functions g(y|x, θ.sub.j), where θ.sub.j is a parameter set, in step S10. The distribution functions belong to the exponential family and the parameters are essentially the mean and variance in the scalar case and, more generally, the mean vector and covariance matrix in the multi-variate case. The conditional probability distribution estimator can find the best approximation of p.sub.Y|X(y|x) in the aforementioned basis


p.sub.Y|X(y|x)≅Σ.sub.j=1.sup.Nw.sub.jg(y|x,θ.sub.j)

where N, and the sets {θ.sub.j}, {w.sub.j} are parameters to be estimated.

[0167] The solution relies on the same algorithm as in the first embodiment, with the following main differences: there is only one estimator instead of M, and the transmitter transmits symbols x that are known at the receiver. The performance is improved by drawing these transmitted symbols from a pseudo-random generator that generates x values on a domain of interest, for example following a Gaussian distribution. By knowing the initial parameters of the pseudo-random generator, both the transmitter and the receiver can know the x symbols.
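The shared pseudo-random generation can be sketched as follows; because both ends seed the generator identically, the receiver reproduces the transmitted x values exactly (the function name and the Gaussian choice are illustrative assumptions of this sketch):

```python
import random

def known_training_symbols(seed, n, mean=0.0, std=1.0):
    """Gaussian-distributed training inputs x on a domain of interest.
    The transmitter and the receiver call this with the same seed (the
    'initial parameters' of the generator), so the receiver knows the
    transmitted symbols without any side channel."""
    rng = random.Random(seed)
    return [rng.gauss(mean, std) for _ in range(n)]
```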

[0168] The estimator uses a collapsed Gibbs sampling approach as in the first embodiment. In the present case, the collapsed Gibbs sampling of [NEAL00] is used in a particular embodiment, shown again in the same FIG. 11, and is applied in the particular case of the estimator by: [0169] In step S10, considering a base g(y|x, θ) to be from the exponential distribution family. [0170] This allows considering the prior on θ to be conjugate to the exponential distribution with the parameter λ. This is a mathematical trick that allows having closed-form expressions in the following steps. [0171] λ={λ.sub.1, λ.sub.2} denotes the hyperparameter for the conjugate prior on the parameter. [0172] Here, step S11 now corresponds to a selection of the observed symbols {y.sub.n} associated to the transmitted symbols {x.sub.n}.

[0173] The general step S12 can be decomposed again into: [0174] In step S121, considering N clusters for the parameter sets {θ.sub.j}, {w.sub.j}. [0175] A cluster is used to group received symbols together. The clusters are implemented by associating an integer index from 1 to N with each group of symbols, as described below. This cluster assignment approach can be justified by the fact that each received symbol is most likely associated to one out of the N modes composing p.sub.Y|X(y|x) in (1). [0176] The cluster assignment index for the i-th observed symbol is c.sub.i. [0177] Let c.sub.−i be the cluster assignment indices for all observed symbols but the i-th. [0178] In step S122, computing the posterior distribution p(c.sub.i=k|y.sub.1:n, x.sub.1:n, c.sub.−i, λ), where y.sub.1:n represents the input symbols of the received signal y. It can be shown that


p(c.sub.i=k|y.sub.1:n,x.sub.1:n,c.sub.−i,λ)=p(c.sub.i=k|c.sub.−i)p(y.sub.i|y.sub.1:i−1,y.sub.i+1:n,x.sub.1:n,c.sub.−i,c.sub.i=k,λ)  p(c.sub.i=k|c.sub.−i) is for example computed thanks to the Polya urn scheme  p(y.sub.i|y.sub.1:i−1, y.sub.i+1:n, x.sub.1:n, c.sub.−i, c.sub.i=k, λ)=∫p(y.sub.i|x.sub.i, θ)p(θ|y.sub.1:i−1, y.sub.i+1:n, x.sub.1:n, c.sub.−i, c.sub.i=k, λ)dθ, which is computable because p(y.sub.i|x.sub.i, θ) follows the approximation Σ.sub.j=1.sup.Nw.sub.jg(y|x, θ.sub.j) and g(y|x, θ) is from the exponential distribution family, which implies that the conjugate prior is known. [0179] In step S123, sampling from the posterior distribution is performed in order to update the parameters {θ.sub.j}, {w.sub.j}. [0180] It is also preferable to check whether a new cluster must be created. This helps determine the value of N that provides the best approximation of the channel conditional probability distribution. This is done in step S124, by first starting a new cluster, which is defined as


p(y.sub.i|x,P.sub.0)=∫p(y.sub.i|x,θ)dP.sub.0,

where P.sub.0 is the conjugate prior to p(y|x; θ.sub.j) (with parameter λ), which is computable since p(y|x; θ.sub.j) is a weighted sum of g(y|x, θ.sub.j) functions belonging to the exponential family. It is then randomly decided, according to this probability, whether to create a new cluster.

[0181] Details of the use of the Polya urn scheme can be found in: “Ferguson distributions via Polya urn schemes”, D. Blackwell and J. B. MacQueen, The Annals of Statistics, vol. 1, no. 2, pp. 353-355, 1973.
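The Polya urn term p(c.sub.i=k|c.sub.−i) used in step S122 can be sketched as follows, given the sizes n.sub.k of the existing clusters and the concentration parameter α (a minimal illustration; the function name is a choice of this sketch):

```python
def polya_urn_probs(counts, alpha):
    """Probabilities p(c_i = k | c_-i) from the Polya urn scheme: an
    existing cluster k is chosen with probability proportional to its
    size n_k, and a new cluster with probability proportional to alpha.
    The last entry of the returned list is the new-cluster probability."""
    n = sum(counts)
    probs = [nk / (n + alpha) for nk in counts]
    probs.append(alpha / (n + alpha))
    return probs
```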

[0182] In a fourth embodiment, related to a tracking mode for the aforesaid second case, it is of interest to consider that the input does not belong to a finite alphabet and that the sent symbols are unknown. This mode is efficiently used after the third embodiment. The model for the non-parametric joint density is defined as:


p.sub.Y,X(y,x)=Σ.sub.j=1.sup.Nw.sub.jg(y|x,θ.sub.j)p(x|ψ.sub.j).  (21)

[0183] The parameters {θ.sub.j}, {w.sub.j}, and {ψ.sub.j} respectively denote the conditional density parameters (which can be the mean and covariance in the case of a Gaussian density), the magnitudes, and the corresponding parameters of the input distribution. Generally speaking, the parameters (θ, ψ) are jointly obtained from the base measure of the Dirichlet Process “DP”, i.e., (θ, ψ)˜DP(αP.sub.0θ×P.sub.0ψ), where α is a scaling parameter. The corresponding conditional probability density can be written in the following non-parametric form

[00004] p.sub.Y|X(y|x)=[Σ.sub.j=1.sup.Nw.sub.jg(y|x, θ.sub.j)p(x|ψ.sub.j)]/[Σ.sub.j=1.sup.Nw.sub.jp(x|ψ.sub.j)]  (22)=Σ.sub.j=1.sup.Nw.sub.j(x)g(y|x, θ.sub.j).  (23)

[0184] It is worth noting that, in the prediction phase, the placement of the particles is fixed and optimized from the training: the denominator in (22) and p(x|ψ.sub.j) act as scaling factors.
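The rescaling of (22)-(23) can be sketched as follows for Gaussian components (an illustrative choice of exponential-family base; all names and the scalar setting are assumptions of this sketch): the input-side factors p(x|ψ.sub.j) turn the global weights w.sub.j into x-dependent weights w.sub.j(x).

```python
import math

def gauss_pdf(v, mu, sigma):
    return math.exp(-0.5 * ((v - mu) / sigma)**2) / (sigma * math.sqrt(2.0 * math.pi))

def conditional_density(y, x, w, theta, psi, sigma_y=1.0, sigma_x=1.0):
    """Evaluate p(y|x) from the joint mixture of (21): psi_j parameterizes
    the input-side component p(x|psi_j) and theta_j the output-side
    component g(y|x, theta_j), both Gaussian here for illustration."""
    # x-dependent weights of (23): w_j(x) proportional to w_j * p(x|psi_j)
    wx = [wj * gauss_pdf(x, pj, sigma_x) for wj, pj in zip(w, psi)]
    z = sum(wx)                       # denominator of (22)
    # mixture of (23): sum_j w_j(x) g(y|x, theta_j)
    return sum((wj / z) * gauss_pdf(y, tj, sigma_y) for wj, tj in zip(wx, theta))
```

For an input x close to ψ.sub.j, the j-th component dominates the conditional density, as expected from (23).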

[0185] A clustering technique will be used. Let the cluster allocation labels be s.sub.i=j if (θ.sub.i, ψ.sub.i) belongs to the j-th cluster. Let ρ.sub.n=[s.sub.1, . . . , s.sub.n].sup.T denote the vector of cluster allocation labels. Using the prediction in DP by covariate-dependent urn scheme, one obtains:

[00005] s.sub.n+1|ρ.sub.n, x.sub.1:n+1˜[w.sub.k+1(x.sub.n+1)/p(x.sub.n+1|ρ.sub.n, x.sub.1:n)]δ.sub.k+1+Σ.sub.j=1.sup.k[w.sub.j(x.sub.n+1)/p(x.sub.n+1|ρ.sub.n, x.sub.1:n)]δ.sub.j, where  (24)
w.sub.k+1(x.sub.n+1)=[α∫p(x.sub.n+1|θ)dP.sub.0θ(θ)]/(α+n),  (25)
w.sub.j(x.sub.n+1)=[n.sub.j∫p(x.sub.n+1|ψ)p(ψ|x.sub.j*)dψ]/(α+n),  (26)

in which n.sub.j is the number of subject indices in the j-th cluster.

[0186] It is further assumed that the set of particles is fixed in the tracking phase of the channel probability density estimation. This is a practical assumption, as the locations of the particles in the constellation are fixed in most practical applications. The notation x.sub.n+1∈x.sub.1:n is used here to emphasize that the new signal is transmitted from the set of fixed constellation points that was used during the training. The estimated density p(y|y.sub.1:n, x.sub.1:n, x.sub.n+1∈x.sub.1:n) for the new received signal y is obtained as


p(y|y.sub.1:n,x.sub.1:n,x.sub.n+1∈x.sub.1:n)=Σ.sub.ρ.sub.nΣ.sub.s.sub.n+1p(y|y.sub.1:n,x.sub.1:n,x.sub.n+1∈x.sub.1:n,ρ.sub.n,s.sub.n+1) ×p(s.sub.n+1|y.sub.1:n,x.sub.1:n,x.sub.n+1∈x.sub.1:n,ρ.sub.n) ×p(ρ.sub.n|y.sub.1:n,x.sub.1:n,x.sub.n+1∈x.sub.1:n),  (27)

[0187] where ρ.sub.n=[s.sub.1, . . . , s.sub.n].sup.T denotes the vector of cluster allocation labels with s.sub.i=j if (θ.sub.i, ψ.sub.i) belongs to the j-th cluster. Using (24), (27) can be simplified as

[00006] p(y|y.sub.1:n, x.sub.1:n, x.sub.n+1∈x.sub.1:n)=Σ.sub.ρ.sub.np(ρ.sub.n|y.sub.1:n, x.sub.1:n, x.sub.n+1∈x.sub.1:n)·([w.sub.k+1(x.sub.n+1)/p(x.sub.n+1|ρ.sub.n, x.sub.1:n)]·ζ.sub.0,y(y|x.sub.n+1)+Σ.sub.j=1.sup.k[w.sub.j(x.sub.n+1)/p(x.sub.n+1|ρ.sub.n, x.sub.1:n)]·ζ.sub.j,y(y|x.sub.n+1)),  (28)

where k is the number of groups in the partition ρ.sub.n, and


ζ.sub.0,y(y|x.sub.n+1)≜∫p(y|x.sub.n+1,θ)dP.sub.0θ(θ),  (29)


ζ.sub.j,y(y|x.sub.n+1)≜∫p(y|x.sub.n+1,ψ)p(ψ|x.sub.j*,y.sub.j*)dψ,  (30)

where x.sub.j*, y.sub.j* denote the set of inputs and outputs for the j-th cluster. Consequently, the estimated conditional probability density is obtained according to equation (28).
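For a single fixed partition ρ.sub.n, the mixture of (28) can be sketched as follows with a Gaussian base and a conjugate Gaussian prior (a simplified, hypothetical illustration: the sum over partitions is collapsed to one partition, and the covariate weights in the spirit of (25)-(26) are computed with the same closed-form predictive; all names are choices of this sketch):

```python
import math

def npredictive(v, members, mu0=0.0, tau0=5.0, sigma=1.0):
    # Closed-form posterior predictive for a Gaussian with known variance
    # sigma^2 and a conjugate Gaussian prior N(mu0, tau0^2) on the mean.
    n = len(members)
    prec = 1.0 / tau0**2 + n / sigma**2
    mu_n = (mu0 / tau0**2 + sum(members) / sigma**2) / prec
    var = 1.0 / prec + sigma**2
    return math.exp(-0.5 * (v - mu_n)**2 / var) / math.sqrt(2.0 * math.pi * var)

def density_estimate(y, x_new, clusters, alpha=1.0):
    """clusters: list of (xs, ys) pairs holding the inputs x_j* and
    outputs y_j* of each cluster in the fixed partition rho_n."""
    n = sum(len(ys) for _, ys in clusters)
    wts = [alpha / (alpha + n) * npredictive(x_new, [])]   # new-cluster weight
    zetas = [npredictive(y, [])]                           # zeta_0 branch, cf. (29)
    for xs, ys in clusters:
        wts.append(len(ys) / (alpha + n) * npredictive(x_new, xs))
        zetas.append(npredictive(y, ys))                   # zeta_j branch, cf. (30)
    z = sum(wts)              # plays the role of p(x_new | rho_n, x_1:n)
    return sum((wj / z) * zj for wj, zj in zip(wts, zetas))
```

Clusters whose inputs are close to x.sub.new receive a larger covariate weight, so the estimate follows the output statistics observed near the new input.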

[0188] After any of the above embodiments, an estimate of the conditional probability density function p.sub.Y|X(y|x) is obtained. In the first two embodiments, it is obtained for a fixed constellation of transmitted symbols x, while for the third and fourth embodiments it is known as a bivariate function of y and x, and in particular for any value of x.

[0189] This knowledge of p.sub.Y|X(y|x) is used in the receiver in order to compute the likelihood of each received symbol y assuming a transmitted symbol x. This likelihood is required in the maximum likelihood decoding of transmitted symbols, by selecting the symbol x maximizing the likelihood for any received symbol y. This likelihood is also a component for computing the Log-Likelihood Ratios provided at the input of a soft-input decoder, such as in: [0190] J. Boutros, N. Gresset, L. Brunel and M. Fossorier, “Soft-input soft-output lattice sphere decoder for linear channels”, GLOBECOM '03, IEEE Global Telecommunications Conference (IEEE Cat. No. 03CH37489), San Francisco, Calif., 2003, pp. 1583-1587, vol. 3.

[0191] Thus, it is of interest for the receiver to know the estimation of p.sub.Y|X(y|x).

[0192] This invention can be advantageously used to provide a compact representation of p.sub.Y|X(y|x) through the parameters {w.sub.j} and {θ.sub.j} as provided by the channel conditional probability distribution estimator to the receiver.

[0193] In another option, the knowledge of p.sub.Y|X(y|x) can be advantageously used at the transmitter in order to optimize the input distribution. Indeed, from the knowledge of a given distribution p.sub.X(x), which in most applications is discrete with fixed positions of x and varying probabilities, the capacity of the channel characterized by p.sub.Y|X(y|x) with input p.sub.X(x) can be evaluated. Thus, the input distribution can be optimized.
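The capacity evaluation can be sketched as follows for a discrete input and a Gaussian channel density integrated on a numerical y-grid (the Gaussian is an illustrative stand-in for the estimated mixture p.sub.Y|X; all names are assumptions of this sketch):

```python
import math

def gauss_pdf(y, mu, sigma):
    return math.exp(-0.5 * ((y - mu) / sigma)**2) / (sigma * math.sqrt(2.0 * math.pi))

def capacity_bits(xs, px, sigma=1.0, lo=-10.0, hi=10.0, steps=2000):
    """Mutual information I(X;Y) in bits for a discrete input p_X and a
    Gaussian conditional density, by midpoint integration over y; the
    estimated mixture p_Y|X would replace gauss_pdf in practice."""
    dy = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        y = lo + (i + 0.5) * dy
        cond = [gauss_pdf(y, x, sigma) for x in xs]
        py = sum(p * c for p, c in zip(px, cond))   # marginal p_Y(y)
        if py <= 0.0:
            continue
        total += dy * sum(p * c * math.log2(c / py)
                          for p, c in zip(px, cond) if c > 0.0)
    return total
```

Selecting, among a predefined candidate set of input distributions, the one maximizing this quantity is then a simple argmax loop over the candidates.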

[0194] In a first case of such optimization, the input distribution providing the highest capacity among a predefined set of input distributions is selected. Such a set is for example obtained by sampling Gaussian input distributions with different variance values. The sampling is for example performed for positions of x following a QAM (e.g., 256-QAM) constellation. In a second case of such optimization, the set is provided by several known constellations, such as QPSK, 16-QAM, 64-QAM, 256-QAM, 8-PSK, 32-PSK, and so on. In a third case of such optimization, the positions of the symbols x and their associated probabilities are chosen randomly, and the best found random constellation is selected after each capacity computation.

[0195] In a last case of such optimization, it is advantageous to select the functions g(y|x, θ.sub.j) from an exponential family. Indeed, p.sub.Y|X(y|x) being a linear combination of such functions, it is possible to compute the derivatives of p.sub.Y|X(y|x) for fixed values of x in closed form. Thus, a gradient descent approach can be used in order to optimize the capacity with respect to the input distribution p.sub.X(x).
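The gradient approach can be sketched as follows; the simplex constraint on p.sub.X is handled here by a softmax parameterization, and the derivative is approximated by finite differences rather than the closed-form expression mentioned above (a hypothetical, simplified illustration: gradient ascent on the capacity, equivalent to gradient descent on its negative; all names, grid bounds, and step sizes are choices of this sketch):

```python
import math

def gauss_pdf(y, mu, sigma):
    return math.exp(-0.5 * ((y - mu) / sigma)**2) / (sigma * math.sqrt(2.0 * math.pi))

def mutual_info(xs, px, sigma=0.5, lo=-3.0, hi=8.0, steps=600):
    # I(X;Y) in bits on a y-grid, with a Gaussian stand-in for p_Y|X.
    dy = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        y = lo + (i + 0.5) * dy
        cond = [gauss_pdf(y, x, sigma) for x in xs]
        py = sum(p * c for p, c in zip(px, cond))
        if py > 0.0:
            total += dy * sum(p * c * math.log2(c / py)
                              for p, c in zip(px, cond) if c > 0.0)
    return total

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def optimize_input(xs, sigma=0.5, iters=30, lr=2.0, eps=1e-4):
    z = [0.0] * len(xs)              # unconstrained logits for p_X
    for _ in range(iters):
        grad = []
        for i in range(len(z)):      # central finite difference per logit
            zp, zm = z[:], z[:]
            zp[i] += eps
            zm[i] -= eps
            grad.append((mutual_info(xs, softmax(zp), sigma)
                         - mutual_info(xs, softmax(zm), sigma)) / (2.0 * eps))
        z = [zi + lr * gi for zi, gi in zip(z, grad)]   # ascent step
    return softmax(z)
```

For instance, with two coincident symbols and one distant symbol, the optimization moves probability mass toward an even split between the two distinguishable groups, which maximizes the mutual information.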

[0196] This invention can be advantageously used to provide a compact representation of p.sub.Y|X(y|x) through the parameters {w.sub.j} and {θ.sub.j} as provided by the channel conditional probability distribution estimator to the transmitter.