Systems and Methods for Transmitting Streaming Symbols using Instantaneous Encoding
20230261786 · 2023-08-17
Assignee
Inventors
CPC classification
H04L1/0076
ELECTRICITY
H03M13/6312
ELECTRICITY
International classification
H04L1/00
ELECTRICITY
Abstract
Systems and methods for performing real-time feedback communication in accordance with various embodiments of the invention are disclosed. In many embodiments, instantaneous encoding is utilized for transmitting symbols from a streaming source over a DMC with feedback. In certain embodiments, instantaneous encoding is performed during the arriving period of the symbols. At time t, the encoder and the decoder calculate the priors of possible symbol sequences using the source distribution and the posteriors at time t−1. In a number of embodiments, the encoder and decoder then partition the evolving message alphabet into groups, so that the group priors are close to the capacity-achieving distribution. In contrast to the SED rule for symmetric binary-input channels, partitioning processes in accordance with several embodiments of the invention utilize group priors instead of group posteriors for the partitioning. In many embodiments, once the groups are partitioned, the encoder determines the index of the group that contains the true symbol sequence it received so far and uses the group index to determine the appropriate channel input.
Claims
1. A real-time feedback communication system, comprising: an encoder configured to: receive a plurality of symbols from a streaming source; perform an instantaneous encoding of each symbol in the plurality of symbols to generate channel inputs, where the instantaneous encoding of each symbol in the plurality of symbols occurs before the arrival of the next symbol in the plurality of symbols; transmit the generated channel inputs via a communication channel; receive feedback with respect to each transmission; and determine source posteriors in response to the feedback received with respect to each transmission; wherein performing the instantaneous encoding of each symbol in the plurality of symbols comprises: calculating source priors based upon feedback received with respect to a last transmission, where the source priors calculated by the encoder are calculated for all possible symbol sequences using a source distribution and the posteriors determined by the encoder in response to feedback received by the encoder with respect to the last transmission; partitioning a message alphabet into groups using a partitioning rule based upon the source priors calculated by the encoder; determining an index of one of the groups that contains a sequence corresponding to symbols from the plurality of symbols that have been received by the encoder up to that point in time; forming a channel input based upon the determined index; and a receiver configured to: receive channel outputs via the channel; transmit feedback in response to the received channel outputs; decode message symbols based upon the received channel outputs; wherein decoding each received message symbol comprises: before receiving a next channel output, calculating source priors based upon at least one previously received channel output, where the source priors calculated by the decoder are calculated for all possible symbol sequences using the source distribution and source posteriors determined by the decoder; 
partitioning the message alphabet into groups using the partitioning rule based upon the source priors calculated by the decoder; upon receipt of the next channel output, calculating updated source posteriors for all possible sequences of source symbols using the source priors calculated by the decoder and the next channel output; decoding a next received message symbol based upon the next channel output and the groups obtained by the decoder using the partitioning rule; and forming feedback for transmission to the encoder.
2. The system of claim 1, wherein forming a channel input based upon the determined index of the group that contains the sequence corresponding to the symbols from the plurality of symbols received by the encoder up to that point in time comprises applying randomization to match a distribution formed based upon transmitted indexes to a capacity-achieving distribution.
3. The system of claim 1, wherein each generated channel input is independent of past channel outputs.
4. The system of claim 1, wherein the channel is a discrete memoryless channel.
5. The system of claim 1, wherein the channel is a degenerate discrete memoryless channel.
6. The system of claim 1, wherein the partitioning rule partitions the message alphabet into groups so that the source priors of the groups satisfy a predetermined criterion based upon a known capacity-achieving distribution.
7. The system of claim 6, wherein the predetermined criterion minimizes a difference between the source priors of the groups and the known capacity-achieving distribution.
8. The system of claim 6, wherein the predetermined criterion causes the source priors of the groups to be within a predetermined threshold of the known capacity-achieving distribution.
9. The system of claim 1, wherein partitioning, by the encoder, of the message alphabet into groups using the partitioning rule based upon the calculated priors comprises partitioning the message alphabet using a greedy heuristic algorithm.
10. The system of claim 1, wherein the partitioning rule is a type-based group partitioning rule that partitions the message alphabet based on types.
11. The system of claim 1, wherein decoding the message symbols from the channel outputs received via the channel further comprises using the partitioned groups to construct two sets by comparing the source priors of the groups with a known capacity-achieving distribution.
12. The system of claim 11, wherein decoding the message symbols from the channel outputs received via the channel further comprises determining probabilities for randomizing the channel output based upon the two sets.
13. The system of claim 1, wherein each of the plurality of symbols is a data packet.
14. The system of claim 1, wherein the decoder is further configured to learn a symbol arriving distribution online using past symbol arrival times.
15. The system of claim 1, wherein the source is a linear system and the decoder is part of a control system that is configured to provide control signals to the linear system.
16. The system of claim 1, wherein the encoder and the decoder utilize a common source of randomness that is used by the encoder to generate the channel inputs and by the decoder to decode message symbols.
17. The system of claim 1, wherein the encoder is further configured to transmit the channel input formed based upon the determined index prior to the receipt of the next message symbol from the plurality of symbols by the encoder from the streaming source.
18. The system of claim 1, wherein the message alphabet is an evolving message alphabet.
19. An encoder capable of use in a real-time feedback communication system, wherein the encoder is configured to: receive a plurality of symbols from a streaming source; perform an instantaneous encoding of each symbol in the plurality of symbols to generate channel inputs, where the instantaneous encoding of each symbol in the plurality of symbols occurs before the arrival of the next symbol in the plurality of symbols; transmit the generated channel inputs via a communication channel; receive feedback with respect to each transmission; and determine source posteriors in response to the feedback received with respect to each transmission; wherein performing the instantaneous encoding of each symbol in the plurality of symbols comprises: calculating source priors based upon feedback received with respect to a last transmission, where the source priors are calculated for all possible symbol sequences using a source distribution and the posteriors determined by the encoder in response to feedback received by the encoder with respect to the last transmission; partitioning a message alphabet into groups using a partitioning rule based upon the source priors; determining an index of one of the groups that contains a sequence corresponding to symbols from the plurality of symbols that have been received by the encoder up to that point in time; forming a channel input based upon the determined index.
20. The encoder of claim 19, wherein the partitioning rule partitions the message alphabet into groups so that the priors of the groups satisfy a predetermined criterion based upon a known capacity-achieving distribution.
21. A decoder capable of use in a real-time feedback communication system, wherein the decoder is configured to: receive channel outputs via a channel; transmit feedback in response to the received channel outputs; decode message symbols based upon the received channel outputs; wherein decoding each received message symbol comprises: before receiving a next channel output, calculating source priors based upon at least one previously received channel output, where the source priors are calculated for all possible symbol sequences using the source distribution and source posteriors determined by the decoder; partitioning the message alphabet into groups using a partitioning rule based upon the source priors; upon receipt of the next channel output, calculating updated source posteriors for all possible sequences of source symbols using the source priors and the next channel output; decoding a next received message symbol based upon the next channel output and the groups obtained by the decoder using the partitioning rule; and forming feedback for transmission.
22. The decoder of claim 21, wherein the partitioning rule partitions the message alphabet into groups so that the priors of the groups satisfy a predetermined criterion based upon a known capacity-achieving distribution.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0040] The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
[0041] The description and claims will be more fully understood with reference to the following figures and data graphs, which are presented as exemplary embodiments of the invention and should not be construed as a complete recitation of the scope of the invention.
DETAILED DESCRIPTION
[0052] Turning now to the drawings, systems and methods for performing real-time feedback communication using instantaneous encoding of symbols from a streaming source in accordance with various embodiments of the invention are illustrated. In many embodiments, an instantaneous encoding process is utilized that involves calculating priors based upon received feedback. In certain embodiments, the priors are used to partition a message alphabet into groups using a partitioning rule. In several embodiments, the instantaneous encoding process involves determining the index of the group that contains the sequence of symbols received by the encoder up to that point in time and then using the index to determine a channel input. In many embodiments, randomization can be applied to the determined index so that the distribution of the transmitted channel inputs matches a capacity-achieving distribution. As can readily be appreciated, the specific processes utilized to perform instantaneous encoding of symbols from streaming sources in accordance with various embodiments of the invention depend upon the requirements of specific applications.
[0053] In several embodiments, the instantaneous encoding systems and methods utilize a partitioning rule that minimizes the distance between the group priors and a capacity-achieving distribution. In a number of embodiments, a SED partitioning rule is utilized that partitions groups so that the group priors are within a threshold difference of the capacity-achieving distribution. In certain embodiments, a type-based partitioning rule is used. In many embodiments, the practical implementation of the type-based codes described herein can enable instantaneous encoding with log-linear complexity. As can readily be appreciated, the specific partitioning rule that is utilized is largely dependent upon the requirements of specific applications.
[0054] In several embodiments, a JSCC reliability function-achieving code with block encoding (e.g., the MaxEJS code or the SED code) is preceded by an instantaneous encoding phase implemented in accordance with an embodiment of the invention, which enables the system to overcome the detrimental effect due to the streaming nature of the source and can enable the system to achieve the same error exponent as if the encoder knew the entire source sequence before the transmission.
[0055] In several embodiments, the encoder uses a JSCC reliability function-achieving code that enables the encoder to transmit k symbols of a streaming source and stop. In many embodiments, the encoder uses an instantaneous SED code that enables a decoder to choose the decoding time and the number of symbols to decode on the fly. In this configuration, a communication system can empirically attain a positive anytime reliability, and thus it can be used to stabilize an unstable scalar linear system with bounded noise over a noisy channel.
Notation
[0056] Before discussing real-time feedback communication systems in accordance with various embodiments of the invention in further detail, it is helpful to clarify the notation that is used herein.
[0057] log(·) is the natural logarithm. Notation X←Y reads "replace X by Y". For any positive integer q, we denote [q]≜{1, 2, . . . , q}. We denote by [q]^k the set of all q-ary sequences of length k. For a possibly infinite sequence x={x_1, x_2, . . . }, we write x^n={x_1, x_2, . . . , x_n} to denote the vector of its first n elements, and we write {x_n}_{n=n_0}^∞ to denote the subsequence starting at index n_0. We write X_k → α in probability to denote that X_k converges to α in probability, i.e., lim_{k→∞} P[|X_k−α|≥ϵ]=0, ∀ϵ>0. For any set 𝒜, we denote by 1_𝒜(x) an indicator function that is equal to 1 if and only if x∈𝒜. For two positive functions f, g: ℤ_+ → ℝ_+, we write f(k)=o(g(k)) to denote lim_{k→∞} f(k)/g(k)=0; we write f(k)=O(g(k)) to denote lim sup_{k→∞} f(k)/g(k)<∞; and we write f(k)=Ω(g(k)) to denote lim inf_{k→∞} f(k)/g(k)>0.
[0058] Having defined the notation that is utilized below, a discussion of real-time feedback communication systems in accordance with various embodiments of the invention follows.
Real-Time Feedback Communication Systems
[0059] A real-time feedback communication system with a streaming source in accordance with an embodiment of the invention is illustrated in
[0060] A streaming source 102 is a discrete streaming source (DSS) when it emits a sequence of discrete source symbols S_n∈[q], n=1, 2, . . . , at times t_1≤t_2≤ . . . , where symbol S_n that arrives at the encoder at time t_n is distributed according to the source distribution
P_{S_n|S^{n−1}}. (1)
[0061] Throughout the discussion that follows, it is assumed that the entropy rate of the DSS
H ≜ lim_{n→∞} (1/n) H(S^n) (2)
is well-defined and positive; the first symbol S_1 arrives at the encoder at time t_1 ≥ 1; both the encoder and the decoder know the symbol alphabet [q], the arrival times t_1, t_2, . . . , and the source distribution (1). The DSS reduces to the classical discrete source (DS) that is fully accessible to the encoder before the transmission if
t_n = 1, ∀n = 1, 2, . . . (3)
[0063] Operationally, symbol S_n represents a data packet. We denote the number of symbols that the encoder has received by time t by
N(t) ≜ max{n: t_n ≤ t, n=1, 2, . . . }. (4)
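As a minimal illustrative sketch (not part of any claimed embodiment), the counter N(t) of (4) can be computed from a sorted list of arrival times; the function name `num_arrived` and the list layout are assumptions for illustration:

```python
# Illustrative sketch of the symbol-arrival counter N(t) in (4),
# assuming arrival times are given as a sorted list.
import bisect

def num_arrived(t, arrival_times):
    """Number of source symbols that have arrived by time t:
    N(t) = max{n : t_n <= t} (with N(t) = 0 before the first arrival)."""
    return bisect.bisect_right(arrival_times, t)

# Example: one symbol every 2 channel uses, t_n = 2(n-1)+1 (cf. (6) with lambda=2).
arrivals = [2 * (n - 1) + 1 for n in range(1, 6)]  # [1, 3, 5, 7, 9]
print(num_arrived(4, arrivals))  # -> 2 (the symbols at t=1 and t=3 have arrived)
```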
[0064] Given a DSS with symbol arriving times t_1, t_2, . . . , we denote its symbol arriving rate, assuming that the limit exists, by
f ≜ lim_{k→∞} k/t_k. (5)
The symbol arriving rate f=∞ implies that the source symbols arrive at the encoder so frequently that the number of channel uses increases slower than the source length. For example, the DS (3) has f=∞. The symbol arriving rate f<∞ implies that the number of channel uses goes to infinity as the source length goes to infinity. For example, if one source symbol arrives at the encoder every λ≥1 channel uses, λ∈ℤ_+, i.e.,
t_n = λ(n−1)+1, (6)
then f = 1/λ. (7)
[0065] We assume that the channel is a DMC with a single-letter transition probability distribution P_{Y|X}: 𝒳 → 𝒴.
[0066] A DMC is non-degenerate if it satisfies
P_{Y|X}(y|x) > 0, ∀x∈𝒳, y∈𝒴. (8)
[0067] A DMC is degenerate if there exist y∈𝒴, x∈𝒳, x′∈𝒳, such that
P_{Y|X}(y|x) > 0, (9a)
P_{Y|X}(y|x′) = 0. (9b)
[0068] A BSC is a form of non-degenerate DMC. A BEC is a form of degenerate DMC.
[0069] The capacity of a DMC can be denoted by
C ≜ max_{P_X} I(X; Y), (10)
and the maximum Kullback-Leibler (KL) divergence between its transition probabilities can be denoted by
C_1 ≜ max_{x,x′∈𝒳} D(P_{Y|X=x} ∥ P_{Y|X=x′}). (11)
[0070] Assumption (8) ensures that C_1 (11) is finite.
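For concreteness, the quantities C (10) and C_1 (11) can be evaluated in closed form for a BSC with crossover probability p; the following sketch (in nats) uses the known facts that the capacity-achieving input of a BSC is uniform and that C_1 is the KL divergence between the two rows of the transition matrix. The function names are illustrative assumptions:

```python
# Illustrative sketch: capacity C and maximum KL divergence C_1 of a
# binary symmetric channel with crossover probability p, in nats.
import math

def bsc_capacity(p):
    # C = log 2 - H(p), with H the binary entropy in nats
    h = -p * math.log(p) - (1 - p) * math.log(1 - p)
    return math.log(2) - h

def bsc_c1(p):
    # D([1-p, p] || [p, 1-p]): KL divergence between the two rows
    return (1 - p) * math.log((1 - p) / p) + p * math.log(p / (1 - p))

p = 0.1
print(bsc_capacity(p), bsc_c1(p))
```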
[0071] A DMC is symmetric if the columns in its channel transition probability matrix can be partitioned so that within each partition, all rows are permutations of each other, and all columns are permutations of each other.
[0072] The symbol arriving rate (5) can be measured with a unit time equal to a channel use.
[0073] Codes that can be used to transmit a DSS over a DMC with feedback in accordance with various embodiments of the invention are discussed below. In many embodiments, the codes that are utilized are variable-length joint source-channel codes with feedback. In a number of embodiments, a code with instantaneous encoding is utilized. In several embodiments, a code with block encoding is utilized.
Variable-Length Joint Source-Channel Codes with Feedback
[0074] A code with instantaneous encoding designed to recover the first k symbols of a DSS at rate R symbols per channel use and error probability ϵ in accordance with an embodiment of the invention can be defined. For a (q, {t_n}_{n=1}^∞) DSS and a DMC with a single-letter transition probability distribution P_{Y|X}: 𝒳 → 𝒴, a (k, R, ϵ) code with instantaneous encoding can be defined as follows:
1. a sequence of (possibly randomized) encoding functions f_t: [q]^{N(t)} × 𝒴^{t−1} → 𝒳, t=1, 2, . . . that the encoder uses to form the channel input
X_t ≜ f_t(S^{N(t)}, Y^{t−1}); (12)
2. a sequence of decoding functions g_t: 𝒴^t → [q]^k, t=1, 2, . . . that the decoder uses to form the estimate
Ŝ_t^k ≜ g_t(Y^t); (13)
3. a stopping time η_k adapted to the filtration generated by the channel output Y_1, Y_2, . . . that determines when the transmission stops and that satisfies the error constraint P[Ŝ_{η_k}^k ≠ S^k] ≤ ϵ (14) and the rate constraint E[η_k] ≤ k/R. (15)
[0075] For any rate R>0, the minimum error probability achievable by rate-R codes with instantaneous encoding and message length k can be given by
ϵ*(k, R) ≜ inf{ϵ: ∃ a (k, R, ϵ) code with instantaneous encoding}. (16)
[0076] For transmitting a DSS over a non-degenerate DMC with noiseless feedback via a code with instantaneous encoding, the JSCC reliability function for streaming can be defined as
E(R) ≜ lim_{k→∞} −(R/k) log ϵ*(k, R). (17)
[0077] If a DSS satisfies (3), i.e., a DS, a code with instantaneous encoding (i.e., causal code) reduces to a code with block encoding (i.e., non-causal code), and the JSCC reliability function for streaming (17) reduces to the JSCC reliability function for a fully accessible source.
[0078] E(R) (17) can be used to quantify the fundamental delay-reliability trade-off achieved by codes with instantaneous encoding. The reliability function is a classical performance metric that can be used to approximate that trade-off as ϵ*(k, R) ≈ e^{−(k/R) E(R)}. Although this approximation ignores the subexponential terms, it can still shed light on the finite-blocklength performance.
[0079] Similar to classical codes with block encoding, a (k, R, ϵ) code with instantaneous encoding can be designed to recover only the first k symbols of a DSS, and E(R) (17) can be achieved by a sequence of codes with instantaneous encoding indexed by the length of the symbol sequence k as k→∞. A code with instantaneous encoding that decodes the first k symbols at a time t≥t_k with an error probability that decays exponentially with delay t−t_k can be defined, for all k and t. Because the decoding time and the number of symbols to decode can be chosen on the fly, this code can be referred to as an anytime code and can be used to stabilize an unstable linear system with bounded noise over a noisy channel with feedback. Anytime codes can be formally defined as follows.
[0080] For a (q, {t_n}_{n=1}^∞) DSS and a DMC with a single-letter transition probability distribution P_{Y|X}: 𝒳 → 𝒴, a (κ, α) anytime code includes:
1. a sequence of (possibly randomized) encoding functions similar to those defined above;
2. a sequence of decoding functions g_{t,k}: 𝒴^t → [q]^k indexed both by the decoding time t and the length of the decoded symbol sequence k that the decoder uses to form an estimate Ŝ_t^k ≜ g_{t,k}(Y^t) of the first k symbols at time t.
[0081] For all k=1, 2, . . . and t=1, 2, . . . with t≥t_k, the error probability of decoding the first k symbols at time t ideally must satisfy
P[Ŝ_t^k ≠ S^k] ≤ κ e^{−α(t−t_k)} (18)
for some κ, α∈ℝ_+.
[0082] The exponentially decaying rate α of the error probability in (18) can be referred to as the anytime reliability.
Instantaneous Encoding Phase
[0083] In a number of embodiments of the invention, the transmitter aims to transmit the first k source symbols of a DSS using an instantaneous encoding phase with encoding functions {f_t}_{t=1}^{t_k}. Fix a DMC P_{Y|X}: 𝒳 → 𝒴 with capacity-achieving distribution P*_X, and a (q, {t_n}_{n=1}^∞) DSS with distribution (1). The following are functions of the channel outputs: the posterior ρ_i(Y^t) and the prior θ_i(Y^t) of source sequence i∈[q]^{N(t)}, and the prior π_x(Y^{t−1}) of the group 𝒢_x(Y^{t−1}) corresponding to channel input x∈𝒳 that we specify in (24) below, with π_x(Y^{t−1}) equal to the sum of the priors θ_i(Y^{t−1}) over i∈𝒢_x(Y^{t−1}). The probability distributions P_{S_n|S^{n−1}} and P_{Y|X} are known to both the encoder and the decoder.
[0084] Algorithm: The instantaneous encoding phase operates during times t=1, 2, . . . , t_k.
[0085] At each time t, the encoder and the decoder first update the priors θ_i(y^{t−1}) for all i∈[q]^{N(t)}. At symbol arriving times t=t_n, n=1, 2, . . . , k, the prior θ_i(y^{t−1}), i∈[q]^{N(t)}, is updated using the posterior of the prefix and the source distribution:
θ_i(y^{t−1}) = P_{S_n|S^{n−1}}(i_{N(t−1)+1:N(t)} | i^{N(t−1)}) ρ_{i^{N(t−1)}}(y^{t−1}), (22)
where i^{N(t−1)} is the length-N(t−1) prefix of sequence i.
[0086] At times in-between arrivals, i.e., at t∈(t_n, t_{n+1}), n=1, 2, . . . , k−1, the prior θ_i(y^{t−1}) is equal to the posterior ρ_i(y^{t−1}) for all i∈[q]^{N(t)}, i.e.,
θ_i(y^{t−1}) = ρ_i(y^{t−1}). (23)
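The prior updates (22)-(23) can be sketched as follows, assuming for illustration an i.i.d. source with single-symbol distribution `p_s` (a special case of the general conditional source distribution) and posteriors stored in a dict keyed by symbol tuples; the function name is a hypothetical choice:

```python
# Illustrative sketch of the prior update: (22) at arrival times,
# (23) between arrivals. An i.i.d. source is assumed for simplicity.

def update_priors(posteriors, p_s=None, arrival=False):
    """At an arrival time (22): extend each sequence i^{N(t-1)} by one symbol
    and weight the prefix posterior by the source probability of the new symbol.
    Between arrivals (23): the prior equals the posterior."""
    if not arrival:
        return dict(posteriors)
    priors = {}
    for prefix, post in posteriors.items():
        for symbol, prob in enumerate(p_s):
            priors[prefix + (symbol,)] = prob * post
    return priors

# Example: binary uniform source; posteriors on the first symbol are
# {(0,): 0.8, (1,): 0.2} when a second symbol arrives.
priors = update_priors({(0,): 0.8, (1,): 0.2}, p_s=[0.5, 0.5], arrival=True)
print(priors[(0, 0)])  # -> 0.4
```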
[0087] At each time t, once the priors are updated, the encoder and the decoder partition the message alphabet [q]^{N(t)} into |𝒳| disjoint groups {𝒢_x(y^{t−1})}_{x∈𝒳} such that for all x∈𝒳, the group priors π_x(y^{t−1}) are close to the capacity-achieving distribution P*_X(x); in a number of embodiments, closeness is ensured by the partitioning rule (24).
[0088] There always exists a partition of [q].sup.N(t) that satisfies the partitioning rule (24), since a partition obtained by an algorithm such as (but not limited to) the greedy heuristic algorithm satisfies it.
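A minimal sketch of one greedy heuristic consistent with the above (an illustrative heuristic, not necessarily the exact partitioning rule of any embodiment): sequences are assigned, in order of decreasing prior, to the group whose current prior is furthest below its capacity-achieving target:

```python
# Illustrative greedy heuristic for group partitioning in the spirit of (24):
# assign each sequence to the group with the largest remaining deficit
# relative to the capacity-achieving target distribution.

def greedy_partition(priors, target):
    """priors: dict sequence -> prior; target: list P*_X(x) per channel input.
    Returns (dict sequence -> group index, list of resulting group priors)."""
    group_prior = [0.0] * len(target)
    assignment = {}
    for seq in sorted(priors, key=priors.get, reverse=True):
        # pick the group whose prior is furthest below its target
        x = max(range(len(target)), key=lambda g: target[g] - group_prior[g])
        assignment[seq] = x
        group_prior[x] += priors[seq]
    return assignment, group_prior

assign, masses = greedy_partition(
    {(0,): 0.4, (1,): 0.3, (2,): 0.2, (3,): 0.1}, target=[0.5, 0.5])
print(masses)  # group priors close to the uniform target [0.5, 0.5]
```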
[0089] Using the partition {𝒢_x(y^{t−1})}_{x∈𝒳}, the encoder and the decoder can construct two sets by comparing the group priors π_x(y^{t−1}) with the capacity-achieving distribution P*_X:
𝒮(y^{t−1}) ≜ {x∈𝒳: π_x(y^{t−1}) ≤ P*_X(x)}, (25)
ℒ(y^{t−1}) ≜ {x∈𝒳: π_x(y^{t−1}) > P*_X(x)}. (26)
[0090] In a number of embodiments, the encoder and the decoder then randomize the channel input. In other embodiments, this randomization step is omitted. In the embodiments that do use randomization, the encoder and the decoder determine a set of probabilities for randomizing the channel input such that, for all x′∈𝒮(y^{t−1}) and x∈ℒ(y^{t−1}), conditions (27)-(28) hold; in effect, the excess prior of each group in ℒ(y^{t−1}) is transferred to the groups in 𝒮(y^{t−1}).
[0091] An algorithm for determining a set of probabilities that satisfies (27)-(28) pairs groups in ℒ(y^{t−1}) with groups in 𝒮(y^{t−1}). The order of the elements in 𝒮(y^{t−1}) and ℒ(y^{t−1}) is irrelevant.
[0092] For every group 𝒢_x(y^{t−1}) with x∈ℒ(y^{t−1}), the algorithm pairs 𝒢_x(y^{t−1}) with groups 𝒢_{x′}(y^{t−1}), x′∈𝒮(y^{t−1}), to transfer probability. The amount of probability transferred from 𝒢_x(y^{t−1}) to 𝒢_{x′}(y^{t−1}) is the smallest of the remaining excess prior of group x and the remaining deficiency of group x′. In this way, the transfer probabilities are determined. At the end of the algorithm, the adjusted group priors coincide with the capacity-achieving distribution.
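The probability-transfer step described above can be sketched as follows; the function name and data layout are assumptions for illustration:

```python
# Illustrative sketch of the probability-transfer step: groups whose prior
# exceeds the capacity-achieving target pass their excess to groups that fall
# short, each transfer being the smaller of the remaining excess and the
# remaining deficiency.

def transfer_probabilities(group_priors, target):
    """Returns a dict (from_group, to_group) -> transferred probability such
    that, after the transfers, every group's mass equals its target."""
    excess = {x: group_priors[x] - target[x]
              for x in range(len(target)) if group_priors[x] > target[x]}
    deficit = {x: target[x] - group_priors[x]
               for x in range(len(target)) if group_priors[x] < target[x]}
    transfers = {}
    for src in list(excess):
        for dst in list(deficit):
            if excess[src] <= 0:
                break
            amount = min(excess[src], deficit[dst])
            if amount > 0:
                transfers[(src, dst)] = amount
                excess[src] -= amount
                deficit[dst] -= amount
    return transfers

# Example: group 0 holds 0.75 of the prior mass but the target is uniform.
print(transfer_probabilities([0.75, 0.25], [0.5, 0.5]))  # -> {(0, 1): 0.25}
```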
[0093] As noted above, the use of randomization and the associated probability-transfer algorithm can be omitted in a number of embodiments.
[0094] The output of the encoder is formed as follows. The encoder first determines the index of the group that contains the sequence S^{N(t)} it has received so far:
Z_t ≜ x such that S^{N(t)} ∈ 𝒢_x(Y^{t−1}). (29)
In the embodiments that do not use randomization, Z_t is transmitted directly into the channel. In the embodiments that use randomization, extra randomness is added to Z_t to form the channel input X_t: the encoder outputs X_t according to the randomization distribution (30).
[0095] The decoder also knows the randomization distribution (30), the sets 𝒮(y^{t−1}) and ℒ(y^{t−1}) (25)-(26), and the transfer probabilities (27)-(28). Due to (25)-(30), the channel input distribution at time t=1, 2, . . . , t_k is equal to the capacity-achieving channel input distribution; i.e., for all y^{t−1}∈𝒴^{t−1},
P_{X_t|Y^{t−1}}(x|y^{t−1}) = P*_X(x). (31)
[0096] As an illustrative example, let 𝒳=[4]. The horizontal axis represents a partition into 4 groups; the vertical axis represents the prior probabilities of the groups. The source alphabet [q]^{N(t)} is partitioned into {𝒢_x(y^{t−1})}_{x∈[4]} such that the partitioning rule (24) is satisfied. Groups 𝒢_x(y^{t−1}), x∈{1, 2} constitute ℒ(y^{t−1}) (26), and groups 𝒢_x(y^{t−1}), x∈{3, 4} constitute 𝒮(y^{t−1}) (25). The transfer probabilities are determined accordingly.
[0097] Upon receiving the channel output Y_t=y_t at time t, the encoder and the decoder update the posteriors ρ_i(y^t) for all possible sequences of source symbols i∈[q]^{N(t)} using the prior θ_i(y^{t−1}), the channel output y_t, and the randomization probability (30), i.e.,
ρ_i(y^t) = θ_i(y^{t−1}) Σ_{x∈𝒳} P_{Y|X}(y_t|x) P_{X_t|Z_t,Y^{t−1}}(x|z(i), y^{t−1}) / P*_Y(y_t), (32)
where z(i) is the index of the group that contains sequence i, i.e., it is equal to the right side of (29) with S^{N(t)}←i; P*_Y is the channel output distribution induced by the capacity-achieving distribution P*_X; (32) holds due to (31) and the Markov chain Y_t−X_t−(Z_t, Y^{t−1})−S^{N(t)}.
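A simplified sketch of the Bayes posterior update follows, omitting the randomization step (so the group index z(i) is used directly as the channel input, as in the non-randomized embodiments); names and data layout are illustrative assumptions:

```python
# Illustrative sketch of the posterior update: each sequence's prior is
# weighted by the channel likelihood of the observed output given the index
# of the group containing the sequence, then normalized. Randomization is
# omitted for simplicity.

def update_posteriors(priors, assignment, p_y_given_x, y):
    """priors: dict seq -> prior; assignment: dict seq -> group index z(i);
    p_y_given_x[x][y]: channel transition probabilities; y: observed output."""
    unnorm = {i: th * p_y_given_x[assignment[i]][y] for i, th in priors.items()}
    total = sum(unnorm.values())
    return {i: v / total for i, v in unnorm.items()}

# Example: BSC with crossover 0.1; two sequences in opposite groups; output 0.
bsc = [[0.9, 0.1], [0.1, 0.9]]
post = update_posteriors({(0,): 0.5, (1,): 0.5}, {(0,): 0, (1,): 1}, bsc, 0)
print(post[(0,)])  # -> 0.9
```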
[0098] It is important to appreciate that the randomization (25)-(30) of the instantaneous encoding phase is only used for analysis. Systems and methods in accordance with various embodiments of the invention can be utilized without performing the randomization step (25)-(30) in a process that involves transmitting the deterministic group index Z_t (29), but at a cost of imposing stricter assumptions on the DSS.
[0099] From the perspective of encoding, the randomization (30) turns the encoding function f_t into a stochastic kernel P_{X_t|S^{N(t)},Y^{t−1}}.
[0100] The complexity of the instantaneous encoding phase is O(q^{N(t)} log q^{N(t)}) if the classical greedy heuristic algorithm is used for group partitioning (24). A more efficient algorithm that can be utilized in accordance with many embodiments of the invention to reduce the complexity to O(t log t) is discussed below. While that algorithm can be applied to any source distribution, it achieves optimum performance for equiprobably distributed source symbols.
JSCC Reliability Function
[0101] In this section, a JSCC reliability function for streaming E(R) (17) using the instantaneous encoding phase introduced above is presented. For brevity, the maximum and the minimum channel transition probabilities of a DMC P_{Y|X}: 𝒳 → 𝒴 are denoted by p_max (33) and p_min (34), where p_max ≜ max_{x,y} P_{Y|X}(y|x) and p_min ≜ min_{x,y} P_{Y|X}(y|x), and the maximum symbol arriving probability of the DSS (1) is denoted by p_{S,max} (35).
[0102] For a non-degenerate DMC with capacity C (10), maximum KL divergence C_1 (11), and maximum channel transition probability p_max (33), and a (q, {t_n}_{n=1}^∞) DSS with entropy rate H>0 (2) and symbol arriving rate f (5), the JSCC reliability function for streaming (17) is equal to the expression given in (36).
[0103] The converse proof and the achievability proof for the above JSCC reliability function can be found in U.S. Provisional Patent Application Ser. No. 63/306,185, the relevant disclosure from which, including the converse proof and the achievability proof, is incorporated herein by reference in its entirety.
[0104] For any DSS with f=∞, including the DS (3), the buffer-then-transmit code for k source symbols can operate as follows. It waits until the k-th symbol arrives at time t_k, and at times t≥t_k+1, applies a JSCC code with block encoding for the k symbols S^k of a (fully accessible) DS with prior P_{S^k}, achieving an error exponent that reduces to E(R) (36) for f=∞. Indeed, f=∞ means that the arrival time t_k is negligible compared to the blocklength. The buffer-then-transmit code fails to achieve E(R) (36) if f<∞.
[0105] For any DSS with f<∞, the code with instantaneous encoding for k source symbols implements the instantaneous encoding phase at times t=1, 2, . . . , t_k and thereafter operates as a JSCC code with block encoding for the k symbols S^k of a (fully accessible) DS with prior P_{S^k}.
[0106] Remarkably, it can be established that the JSCC reliability function for a streaming source can be equal to that for a fully accessible source. This is surprising as this means that revealing source symbols only causally to the encoder can in many instances have no detrimental effect on the reliability function.
[0107] While the instantaneous encoding phase can achieve E(R) (36), in fact any coding strategy during the symbol arriving period that satisfies condition (38) achieves E(R) (36) when followed by a JSCC reliability function-achieving code with block encoding.
[0108] For equiprobably distributed q-ary source symbols that arrive at the encoder one by one at consecutive times t=1, 2, . . . , k and a symmetric q-input DMC, uncoded transmission during the symbol arriving period t=1, 2, . . . , k satisfies (38) and thus constitutes an appropriate instantaneous encoding phase for that scenario. Furthermore, even if the instantaneous encoding phase drops the randomization (25)-(30) and transmits Z_t (29) as the channel input, it can continue to satisfy the sufficient condition (38).
[0109] For a non-degenerate DMC with the maximum and the minimum channel transition probabilities p.sub.max and p.sub.min, and for a (q,{t.sub.n}.sub.n=1.sup.∞) DSS with maximum symbol arriving probability p.sub.S,max<1 and symbol arriving rate f<∞, if the DSS satisfies
(b′) the symbol arriving rate is large enough:
then the instantaneous encoding phase that transmits the non-randomized Z_t (29) as the channel input at each time t=1, 2, . . . , t_k satisfies (38), which means that it can achieve E(R) (36), the JSCC reliability function for streaming, when followed by a JSCC reliability function-achieving code with block encoding.
[0110] While various approaches to performing instantaneous encoding are described above, a variety of additional approaches that utilize alternative partitioning rules in accordance with various embodiments of the invention are discussed further below.
Instantaneous SED Codes
[0111] Systems and methods in accordance with many embodiments of the invention utilize an SED code for a symmetric binary-input DMC. It can be shown by simulations that the instantaneous SED code empirically achieves a positive anytime reliability, and thus can be used to stabilize an unstable linear system with bounded noise over a noisy channel. Furthermore, it can be shown that if the instantaneous SED code is restricted to transmit the first k symbols of a DSS, a sequence of instantaneous SED codes indexed by the length of the symbol sequence k also achieves E(R) (36) for streaming over a symmetric binary-input DMC.
Algorithm of the Instantaneous SED Code
[0112] In many embodiments, an instantaneous SED code is utilized that is almost the same as the instantaneous encoding phase described above, except that 1) it particularizes the partitioning rule (24) to the instantaneous SED rule in (40)-(41) (below); 2) its encoder does not randomize the channel input and transmits Z_t (29) at each time t; and 3) it continues to operate after the symbol arriving period. Fixing a symmetric binary-input DMC P_{Y|X}: {0,1} → 𝒴 and fixing a (q, {t_n}_{n=1}^∞) DSS, the algorithm of the instantaneous SED code can be implemented as follows in several embodiments of the invention.
Algorithm: The Instantaneous SED Code Operates at Times t=1, 2, . . .
[0113] At each time t, the encoder and the decoder first update the priors θ.sub.i(y.sup.t−1) for all possible sequences i∈[q].sup.N(t) that the source could have emitted by time t. If t=t.sub.n, n=1, 2, . . . , the prior is updated using (22); otherwise, the prior is equal to the posterior (23).
[0114] Once the priors are updated, the encoder and the decoder partition the source alphabet [q].sup.N(t) into two disjoint groups {G.sub.x(y.sup.t−1)}.sub.x∈{0,1} according to the instantaneous SED rule, which says the following: if x, x′∈{0,1} satisfy
π.sub.x(y.sup.t−1)≥π.sub.x′(y.sup.t−1), (40)
then they must also satisfy
π.sub.x(y.sup.t−1)−π.sub.x′(y.sup.t−1)≤min.sub.i∈G.sub.x.sub.(y.sup.t−1)θ.sub.i(y.sup.t−1). (41)
[0115] There always exists a partition {G.sub.x(y.sup.t−1)}.sub.x∈{0,1} that satisfies the instantaneous SED rule (40)-(41) since the partition that attains the smallest difference |π.sub.0(y.sup.t−1)−π.sub.1(y.sup.t−1)| can be shown to satisfy it.
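To illustrate the existence argument above, the smallest-difference partition can be found by exhaustive search over subsets. The following Python sketch is an illustration under stated assumptions rather than a practical implementation: the function name and dict-based representation are hypothetical, and the search is exponential in the alphabet size.

```python
from itertools import combinations

def sed_partition(priors):
    """Find the binary partition of the message alphabet whose group
    priors differ the least, i.e. the partition attaining the minimum
    of |pi_0 - pi_1|; such a partition satisfies the instantaneous
    SED rule (40)-(41).  `priors` maps each candidate source sequence
    to its prior.  Exhaustive search: illustration only."""
    items = list(priors)
    total = sum(priors.values())
    best, best_gap = set(), float("inf")
    for r in range(len(items) + 1):
        for group0 in combinations(items, r):
            pi0 = sum(priors[i] for i in group0)
            gap = abs(2 * pi0 - total)   # equals |pi_0 - pi_1|
            if gap < best_gap:
                best_gap, best = gap, set(group0)
    return best, set(items) - best
```

For example, with priors {½, ¼, ¼} the search returns groups with priors exactly ½ and ½, so the difference in the SED rule is zero.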
[0116] Once the source alphabet is partitioned, the encoder can transmit the index Z.sub.t (29) of the group that contains the true source sequence S.sup.N(t) as the channel input.
[0117] Upon receiving the channel output Y.sub.t=y.sub.t at time t, the encoder and the decoder update the posteriors ρ.sub.i(y.sup.t) for all i∈[q].sup.N(t) using the priors θ.sub.i(y.sup.t−1) and the channel output y.sub.t, i.e.,
ρ.sub.i(y.sup.t)∝θ.sub.i(y.sup.t−1)P.sub.Y|X(y.sub.t|z(i)), normalized over all i∈[q].sup.N(t), (42)
where z(i) is the index of the group that contains sequence i, i.e., it is equal to the right side of (29) with S.sup.N(t)←i.
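The prior and posterior updates just described can be sketched in Python as follows. This is a minimal sketch under stated assumptions: a uniform source is assumed in the prior update (the text does not require this), and the function names and dict-based representation are hypothetical.

```python
def update_priors(posteriors, q, arrives):
    """Prior update at time t: if a new symbol arrives (cf. (22)),
    each sequence i is extended by every a in [q] with
    prior = posterior(i) * P_S(a | i); the source is taken uniform
    here for simplicity.  Otherwise the prior equals the posterior
    (cf. (23)).  Sequences are tuples over range(q)."""
    if not arrives:
        return dict(posteriors)
    return {i + (a,): p / q for i, p in posteriors.items() for a in range(q)}

def update_posteriors(priors, group_of, y, channel):
    """Posterior update in the spirit of (42): Bayes rule with
    likelihood P_{Y|X}(y | z(i)), where z(i) = group_of[i] is the
    index of the group containing sequence i and channel[x][y]
    is P_{Y|X}(y|x)."""
    unnorm = {i: p * channel[group_of[i]][y] for i, p in priors.items()}
    z = sum(unnorm.values())
    return {i: u / z for i, u in unnorm.items()}
```

For a BSC(0.05), one arriving uniform bit, and groups {0}, {1}, receiving y=0 drives the posterior of the sequence (0,) to 0.95.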
[0118] The maximum a posteriori (MAP) decoder estimates the first k symbols at time t as
Ŝ.sub.t.sup.k=argmax.sub.i∈[q].sup.k ρ.sub.i(y.sup.t), (43)
where ρ.sub.i(y.sup.t) denotes the posterior of the length-k prefix i.
[0119] The group partitioning rule in (40)-(41) can be referred to as the instantaneous small-enough difference (SED) rule since it reduces to the SED rule if the source is fully accessible to the encoder before the transmission. The instantaneous SED rule causes the difference between a group prior π.sub.x(y.sup.t−1) and its corresponding capacity-achieving probability P*.sub.X(x)=½ to be bounded by the source prior on the right side of (41).
[0120] Even though the algorithm of the instantaneous SED code is presented for a DSS with deterministic symbol arriving times, it can be used to transmit a DSS with random symbol arriving times. In that case, the number of symbols N(t) that have arrived by time t is a random variable, and the decoder only knows the symbol arriving distribution.
[0121] Systems and methods in accordance with embodiments of the invention can operate even when the decoder knows neither the symbol arriving times nor the symbol arriving distribution. In this case, the decoder is configured to learn the symbol arriving distribution online using the past symbol arriving times.
Instantaneous SED Codes are Anytime Codes
[0122] An instantaneous SED code can be shown to be an anytime code through numerical evidence: it empirically attains an error probability that decreases exponentially, as in (18).
[0123] The error probability P[Ŝ.sub.t.sup.k≠S.sup.k] of decoding the first k symbols of a DSS at time t achieved by the type-based instantaneous SED code can be evaluated empirically. The DSS emits a Bernoulli(½) bit at times t=1, 2, . . . . The channel is a BSC(0.05).
[0124] At each time t, a process generates a Bernoulli(½) source bit and a realization of a BSC(0.05), runs these experiments for 10.sup.5 trials, and obtains the error probability (18) by dividing the total number of errors by the total number of trials. To reduce the implementation complexity, a type-based version of the instantaneous SED code is simulated, which has a log-linear complexity. The type-based version is an approximation of the exact instantaneous SED code since it uses an approximating instantaneous SED rule and an approximating decoding rule to mimic the instantaneous SED rule (40)-(41) and the MAP decoder (43), respectively; however, it can perform remarkably close to the original instantaneous SED code.
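The Monte Carlo procedure above can be sketched as follows. This toy Python version is only a qualitative illustration under stated assumptions: it replaces both the exact instantaneous SED rule and the type-based approximation with a simple greedy even-split of the priors, and it stores posteriors over all prefixes (exponential in the horizon), so it is usable only for short horizons.

```python
import random

def run_trial(T, k, p=0.05, q=2, rng=random):
    """One trial of a simplified instantaneous-SED-style transmission
    of a Bernoulli(1/2) bit stream over a BSC(p) with noiseless
    feedback.  One new bit arrives at every time t; returns True if
    the MAP estimate of the first k bits at time T is correct."""
    source = [rng.randrange(q) for _ in range(T)]
    post = {(): 1.0}
    for t in range(T):
        # prior update: extend by the newly arrived uniform bit
        prior = {i + (a,): w / q for i, w in post.items() for a in range(q)}
        # greedy even-split partition (stand-in for the SED rule):
        # assign sequences, largest prior first, to the lighter group
        group, pi = {}, [0.0, 0.0]
        for i in sorted(prior, key=prior.get, reverse=True):
            x = 0 if pi[0] <= pi[1] else 1
            group[i] = x
            pi[x] += prior[i]
        x_true = group[tuple(source[: t + 1])]
        y = x_true ^ (rng.random() < p)            # BSC(p) output
        # posterior update via Bayes rule
        lik = lambda x: 1 - p if x == y else p
        post = {i: w * lik(group[i]) for i, w in prior.items()}
        z = sum(post.values())
        post = {i: w / z for i, w in post.items()}
    # MAP estimate of the first k bits
    marg = {}
    for i, w in post.items():
        marg[i[:k]] = marg.get(i[:k], 0.0) + w
    return max(marg, key=marg.get) == tuple(source[:k])
```

Averaging `not run_trial(...)` over many trials approximates the error probability (18) for this simplified scheme.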
[0125] The slope of the curves corresponds to the anytime reliability α (18) of the instantaneous SED code. The anytime reliability for the source and the channel in
[0126] In a number of embodiments, an unstable scalar linear system can be stabilized by a system that utilizes instantaneous encoding including (but not limited to) the use of an instantaneous SED code. Consider the scalar linear system controlled over a noisy channel with noiseless feedback that is displayed in
is the bounded noise, and the initial state is Z.sub.10. At time t, the observer uses the observed states Z.sub.t as well as the past channel feedback Y.sup.t−1 to form a channel input X.sub.t; the controller uses the received channel outputs Y.sub.t to form a control signal U.sub.t. For a (q,{t.sub.n}.sub.n=1.sup.∞) DSS that emits source symbols one by one at consecutive times t.sub.n=n, n=1, 2, . . . , the anytime rate of a (κ, α) anytime code can be defined as R.sub.any=log q nats per channel use, e.g., for the DSS in
E[|Z.sub.t|.sup.η] stays finite at all times, provided that C.sub.any(α)>log λ and α>η log λ. Thus, the instantaneous SED code can be used to stabilize the η-th moment of the unstable scalar linear system in
E.g., if η=2, then λ<1.09.
[0127] A control scheme that can stabilize the system is described heretofore. As a result of applying U.sub.t, the actual state Z.sub.t+1 is forced close to the bounded virtual state. The error probability of decoding Ū.sub.t.sup.k achieved by the anytime code, which decays exponentially in t−k, together with the bounded virtual state, yields a bounded η-th moment E[|Z.sub.t+1|.sup.η]. In fact, the full feedback channel in
[0128] As verified by the simulations in
Instantaneous SED Code Achieves E(R)
[0129] The instantaneous SED codes described above can be restricted to transmit only the first k source symbols of a DSS, so that a sequence of instantaneous SED codes can be indexed by the length of the symbol sequence k. It can then be shown that the code sequence achieves the JSCC reliability function (36) for streaming over a symmetric binary-input DMC as k.fwdarw.∞.
[0130] The instantaneous SED code can be restricted to transmit the first k symbols of a (q,{t.sub.n}.sub.n=1.sup.∞) DSS as follows. [0131] 1) The alphabet [q].sup.N(t) that contains all possible sequences that could have arrived by time t is replaced by the alphabet [q].sup.min{N(t),k} that stops evolving and reduces to [q].sup.k after all k symbols arrive at time t.sub.k. As a consequence, for t≥t.sub.k+1 and all i∈[q].sup.k, the priors θ.sub.i(y.sup.t−1) are equal to the corresponding posteriors ρ.sub.i(y.sup.t−1), the encoder and the decoder partition [q].sup.k to obtain {G.sub.x(y.sup.t−1)}.sub.x∈{0,1}, the encoder transmits the index of the group that contains S.sup.k, and only the posteriors ρ.sub.i(y.sup.t) are updated. [0132] 2) The transmission is stopped and the MAP estimate (43) of S.sup.k is produced at the stopping time (46).
[0133] The MAP decoder (43) together with the stopping time (46) can be utilized to enforce the error constraint in (15): the transmission stops once the maximum posterior max.sub.i∈[q].sup.k ρ.sub.i(y.sup.t) first reaches 1−ϵ, so that the MAP estimate satisfies P[Ŝ.sub.η.sub.k.sup.k≠S.sup.k|Y.sup.η.sup.k]≤ϵ.
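The interplay between the MAP decoder and the stopping rule can be made concrete with a small Python sketch; the helper name is hypothetical, and posteriors are given as a dict over length-k messages.

```python
def stop_and_decode(posteriors, eps):
    """Stopping rule in the spirit of (46): stop as soon as some
    message attains posterior at least 1 - eps, which enforces a
    conditional error probability of at most eps (cf. (15)).
    `posteriors` maps length-k sequences to their posteriors."""
    best = max(posteriors, key=posteriors.get)
    if posteriors[best] >= 1 - eps:
        return best    # MAP estimate; error probability <= eps
    return None        # keep transmitting
```

A caller invokes this after every posterior update and transmits another channel input whenever it returns `None`.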
[0134] As discussed above, a JSCC reliability function-achieving code with instantaneous encoding can be obtained by preceding a JSCC reliability function-achieving code with block encoding by an instantaneous encoding phase that satisfies (38).
Low-Complexity Codes with Instantaneous Encoding
[0135] Type-based algorithms for the instantaneous encoding phase are described above, for the instantaneous SED code as an anytime code, and for the instantaneous SED code restricted to transmit k symbols only. The type-based instantaneous encoding phase is the exact phase, whereas the type-based instantaneous SED codes are approximations of the original codes. Type-based codes that are discussed below and can be utilized by communication systems in accordance with various embodiments of the invention have a log-linear complexity O(t log t) in time t.
[0136] In a number of embodiments, it is assumed that the source symbols of the DSS are equiprobably distributed, i.e., the source distribution (1) satisfies
P.sub.S.sub.n.sub.|S.sup.n−1(a|b)=1/q (47)
for all a∈[q], b∈[q].sup.n−1, n=1, 2, . . . . Note that the algorithms will continue to apply even if the source distribution does not satisfy (47); in that case, optimality of the resulting codes cannot be expected, but reasonable performance can still be expected in practice.
[0137] In these type-based codes, the evolving source alphabet is judiciously divided into disjoint sets that can be called types, so that the source sequences in each type share the same prior and the same posterior. Here, the same prior is guaranteed by the equiprobably distributed symbols (47), and the same posterior is guaranteed by moving a whole type to a group during the group partitioning process (see step (iii) below). As a consequence of classifying source sequences into types, the prior update, the group partitioning, and the posterior update can be implemented in terms of types rather than individual source sequences, which can result in an exponential reduction of complexity.
[0138] A sequence of types can be denoted by T.sub.1, T.sub.2, . . . . The notation is slightly abused to denote by θ.sub.T.sub.j(y.sup.t−1) and ρ.sub.T.sub.j(y.sup.t) the prior and the posterior of a single source sequence in type T.sub.j at time t rather than the prior and the posterior of the whole type. In many embodiments, the type-based code is utilized in a system that operates in combination with a (q,{t.sub.n}.sub.n=1.sup.∞) DSS that satisfies (47) and over a DMC with a single-letter transition probability P.sub.Y|X.
Type-Based Instantaneous Encoding Phase
[0139] The type-based instantaneous encoding phase can operate at times t=1, 2, . . . , t.sub.k, where k is the number of source symbols of a DSS that it is desired to transmit. [0140] (i) Type update: At each time t, the algorithm first updates the types. At t=1, the algorithm is initialized with one type [q].sup.N(1). At t=t.sub.n, n=2, . . . , k, the algorithm updates all the existing types by appending every sequence in [q].sup.N(t)−N(t−1) to every sequence in the type. After the update, the length of the source sequences in each type is equal to N(t); the cardinality of each type is multiplied by q.sup.N(t)−N(t−1); the total number of types remains unchanged. At t≠t.sub.n, n=1, 2, . . . , k, the algorithm does not update the types. [0141] (ii) Prior update: Once the types are updated, the algorithm can proceed to update the prior of the source sequences in each existing type. The prior θ.sub.T.sub.j(y.sup.t−1), j=1, 2, . . . of the source sequences in type T.sub.j is fully determined by (22) with θ.sub.i(y.sup.t−1)←θ.sub.T.sub.j(y.sup.t−1) and ρ.sub.i.sub.N(t−1)(y.sup.t−1)←ρ.sub.T.sub.j(y.sup.t−1). If the types are not updated, the priors are equal to the posteriors, i.e., θ.sub.T.sub.j(y.sup.t−1)←ρ.sub.T.sub.j(y.sup.t−1), j=1, 2, . . . [0142] (iii) Group partitioning: Using all the existing types and their priors, the algorithm can determine a partition {G.sub.x(y.sup.t−1)} of the source alphabet, indexed by the channel inputs x, that satisfies the partitioning rule (24) via a type-based greedy heuristic algorithm. It operates as follows. It initializes all the groups G.sub.x(y.sup.t−1) by empty sets and initializes the group priors π.sub.x(y.sup.t−1) by zeros. It forms a queue by sorting all the existing types according to priors θ.sub.T.sub.j(y.sup.t−1), j=1, 2, . . . in a descending order. It moves the types in the queue one by one to one of the groups. Before each move, it first determines a group G.sub.x*(y.sup.t−1) whose current prior π.sub.x*(y.sup.t−1) has the largest gap to the corresponding capacity-achieving probability P*.sub.X(x*), per (48). Suppose the first type in the sorted queue, i.e., the type whose sequences have the largest prior, is T. It then proceeds to determine the number of sequences n that are moved from type T to group G.sub.x*(y.sup.t−1) by calculating (49). [0143] If n≥|T|, then it moves the whole type T to group G.sub.x*(y.sup.t−1); otherwise, it splits T into two types by keeping the smallest or the largest n consecutive (in lexicographic order) sequences in T and transferring the rest into a new type, and it moves type T to group G.sub.x*(y.sup.t−1) and moves the new type to the beginning of the queue. This step can result in all sequences in a type being consecutive. Thus, it is sufficient to store two sequences, one with the smallest and one with the largest lexicographic orders, in a type to fully specify that type. It updates the prior π.sub.x*(y.sup.t−1) after each move. [0144] (iv) Randomization: In some embodiments, the type-based instantaneous encoding algorithm implements the randomization in (25)-(30) with respect to the partition {G.sub.x(y.sup.t−1)}. In other embodiments, the randomization step is dropped. [0145] (v) Posterior update: Upon receiving the channel output Y.sub.t=y.sub.t, the algorithm updates the posterior of the source sequences in each existing type. The posterior ρ.sub.T.sub.j(y.sup.t), j=1, 2, . . . of the source sequences in type T.sub.j is fully determined by (32) with ρ.sub.i(y.sup.t)←ρ.sub.T.sub.j(y.sup.t) and θ.sub.i(y.sup.t−1)←θ.sub.T.sub.j(y.sup.t−1).
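The greedy group partitioning of step (iii) can be sketched in Python as follows. This is a minimal sketch under stated assumptions: each type is reduced to a pair (count, prior-per-sequence), the lexicographic-range bookkeeping from the text is omitted, type priors are assumed to sum to 1, and the function name is hypothetical.

```python
import math

def type_partition(types, px_star):
    """Type-based greedy partitioning sketch (step (iii)).  `types`
    is a list of (count, prior_per_seq) pairs; px_star[x] is the
    capacity-achieving probability of channel input x.  Types are
    sorted by prior and moved, splitting when needed (cf. (49)),
    into the group whose prior currently has the largest gap to
    px_star[x] (cf. (48))."""
    groups = [[] for _ in px_star]
    pi = [0.0] * len(px_star)
    queue = sorted(types, key=lambda t: t[1], reverse=True)
    while queue:
        count, p = queue.pop(0)
        # group with the largest gap to its capacity-achieving prob.
        x = max(range(len(px_star)), key=lambda g: px_star[g] - pi[g])
        # number of sequences to move; >= 1 while prior mass remains
        n = math.ceil((px_star[x] - pi[x]) / p)
        if n >= count:
            groups[x].append((count, p))          # move the whole type
            pi[x] += count * p
        else:
            groups[x].append((n, p))              # split the type
            pi[x] += n * p
            queue.insert(0, (count - n, p))       # remainder to queue head
    return groups, pi
```

With four sequences of prior ⅛ and two of prior ¼ over a binary input with P*.sub.X=(½, ½), the resulting group priors are exactly (½, ½).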
[0146] Using (49), it can be concluded that the type-based greedy heuristic algorithm achieves (24).
[0147] It can also be shown that the complexity of the type-based instantaneous encoding phase is log-linear, O(t log t), at times t=1, 2, . . . , t.sub.k. In order to establish the complexity, it must first be shown that the number of types grows linearly, O(t). Since the type update in step (i) does not add new types, the number of types increases only due to the split of types during group partitioning in step (iii). At most one type per group is split at each time. This is because the ceiling in (49) ensures that the group that receives the n sequences from a split type will have a group prior no smaller than the corresponding capacity-achieving probability; thus the group will no longer be the solution to the maximization problem (48) and will not cause the split of other types. The complexity of each step of the algorithm can be analyzed. Step (i) (type update) has a linear complexity in the number of types, i.e., O(t). This is because the methods of updating and splitting a type in steps (i) and (iii) cause the sequences in any type to be consecutive; thus it is sufficient to store the starting and the ending sequences in each type to fully specify all the sequences in that type. As a result, updating a type is equivalent to updating the starting and the ending sequences of that type. Step (ii) (prior update) and step (v) (posterior update) have a linear complexity in the number of types, i.e., O(t). Step (iii) (group partitioning) has a log-linear complexity in the number of types due to type sorting, i.e., O(t log t). This is because the average complexity of sorting a sequence of numbers is typically log-linear in the size of the sequence. Step (iv) (randomization) has complexity O(1) due to determining the randomization parameters in (27)-(28).
Type-Based Instantaneous SED Codes
[0148] In a number of embodiments, a type-based anytime instantaneous SED code is utilized for a symmetric binary-input DMC that operates at times t=1, 2, . . . : [0149] (i′) Type update: At each time t, the algorithm updates types as in step (i) with k=∞. [0150] (ii′) Prior update: The algorithm updates the prior of the source sequences in each existing type as in step (ii) with k=∞. [0151] (iii′) Group partitioning: Using all the existing types and their priors, the algorithm determines a partition {G.sub.x(y.sup.t−1)}.sub.x∈{0,1} using an approximating instantaneous SED rule that mimics the exact rule in (40)-(41) as follows. It forms a queue by sorting all the existing types according to priors θ.sub.T.sub.j(y.sup.t−1), j=1, 2, . . . in a descending order. It moves the types in the queue one by one to G.sub.0(y.sup.t−1) until π.sub.0(y.sup.t−1)≥P*.sub.X(0)=0.5 for the first time. Suppose the last type moved to G.sub.0(y.sup.t−1) is T. To make the group priors more even, it then calculates the number of sequences n to be moved away from T as in (50a). It splits T into two types by transferring the first or the last n lexicographically ordered sequences in T to a new type. It moves the new type and all the remaining types in the queue to G.sub.1(y.sup.t−1). [0152] (iv′) The randomization step in (iv) is dropped. [0153] (v′) Posterior update: The algorithm updates the posteriors of the source sequences in each existing type. The posterior ρ.sub.T.sub.j(y.sup.t), j=1, 2, . . . is fully determined by (42) with ρ.sub.i(y.sup.t)←ρ.sub.T.sub.j(y.sup.t) and θ.sub.i(y.sup.t−1)←θ.sub.T.sub.j(y.sup.t−1). [0154] (vi′) Decoding at time t: To decode the first k symbols at time t, where k can be any integer that satisfies t.sub.k≤t, the algorithm first finds the type whose source sequences have the largest posterior. Then, it searches for the most probable length-k prefix in that type by relying on the fact that sequences in the same type share the same posterior; thus, the prefix shared by the maximum number of sequences is the most probable one. Namely, the algorithm extracts the length-k prefixes of the starting and the ending sequences, denoted by i.sub.start.sup.k and i.sub.end.sup.k, respectively. If i.sub.start.sup.k=i.sub.end.sup.k, all sequences in the type share that prefix, and it is the most probable length-k prefix in the type.
[0155] Referring to
[0156] The complexity of the type-based anytime instantaneous SED code is O(t log t). Similar to the type-based instantaneous encoding phase discussed above, the number of types grows linearly with time t since the number of types increases only if a type is split in step (iii′) and at most 1 type is split at each time t. The complexities of steps (i′), (ii′), (v′) are all linear in the number of types O(t). The complexity of step (iii′) is log-linear in the number of types O(t log t) due to sorting the types. Since the sequences in a type are lexicographically consecutive due to the updating and the splitting methods in steps (i′) and (iii′), it suffices to use the starting and the ending sequences in a type to determine the most probable prefix in that type. Thus, the complexity of step (vi′) is linear in the number of types due to searching for the type whose sequences have the largest posterior.
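The prefix search of step (vi′) over a type stored as a lexicographic range can be sketched as follows. This is a hypothetical standalone helper: the type is given only by its starting and ending sequences (tuples over [q] of equal length), all of which share one posterior, so the most frequent length-k prefix in the range is the most probable one.

```python
def rank(seq, q):
    """Rank of a tuple over range(q) in lexicographic order."""
    r = 0
    for s in seq:
        r = r * q + s
    return r

def unrank(r, length, q):
    """Inverse of rank: the tuple of given length with rank r."""
    out = []
    for _ in range(length):
        out.append(r % q)
        r //= q
    return tuple(reversed(out))

def decode_prefix(start, end, k, q):
    """Most probable length-k prefix among the lexicographically
    consecutive sequences from `start` to `end` (inclusive), i.e.
    the prefix shared by the most sequences in the range."""
    n = len(start)
    if start[:k] == end[:k]:
        return start[:k]                  # whole range shares one prefix
    m = q ** (n - k)
    # a prefix strictly between the boundary prefixes covers all m
    # suffixes and therefore wins outright
    if rank(end[:k], q) - rank(start[:k], q) > 1:
        return unrank(rank(start[:k], q) + 1, k, q)
    count_start = m - rank(start[k:], q)  # sequences keeping start's prefix
    count_end = rank(end[k:], q) + 1      # sequences keeping end's prefix
    return start[:k] if count_start >= count_end else end[:k]
```

For example, for the binary range from (0,1,1) to (1,0,1), prefix (1,) is shared by two of the three sequences and is selected.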
[0157] Restricting the type-based anytime instantaneous SED code described above to transmit only the first k symbols of a DSS is equivalent to implementing steps (i), (ii), (iii′), (v) one by one, and performing decoding as follows. [0158] (vi″) Decoding and stopping: If there exists a type that satisfies ρ.sub.T(y.sup.t)≥1−ϵ and contains a source sequence of length k, then the decoder stops and outputs a sequence in that type as the estimate Ŝ.sub.η.sub.k.sup.k.
[0159] The complexity of the type-based instantaneous SED code for transmitting k symbols remains log linear, O(t log t), since the complexity of step (vi″) is O(t) due to searching for the type that satisfies the requirements.
[0160] While the type-based instantaneous encoding phase described above is the exact algorithm of the instantaneous encoding phase, the type-based anytime instantaneous SED code and the type-based instantaneous SED code for transmitting k symbols are approximations of the algorithms described above for the two reasons below:
[0161] First, in step (iii′) (group partitioning), the algorithm uses the approximating instantaneous SED rule to mimic the exact rule in (40)-(41). The minimum of the objective function in (50a) is equal to the difference |π.sub.0(y.sup.t−1)−π.sub.1(y.sup.t−1)| between the group priors of the partition {G.sub.x(y.sup.t−1)}.sub.x∈{0,1} obtained by the approximating rule in step (iii′). The difference is upper bounded as
|π.sub.0(y.sup.t−1)−π.sub.1(y.sup.t−1)|≤θ.sub.T(y.sup.t−1), (51)
where T is the last type moved to G.sub.0(y.sup.t−1) so that its group prior exceeds 0.5 for the first time. If π.sub.0(y.sup.t−1)≥π.sub.1(y.sup.t−1), (51) recovers (41) since θ.sub.T(y.sup.t−1) is the smallest prior in G.sub.0(y.sup.t−1); thus the approximating instantaneous SED rule recovers the exact rule. If π.sub.0(y.sup.t−1)<π.sub.1(y.sup.t−1), θ.sub.T(y.sup.t−1) on the right side of (51) is the largest prior in G.sub.1(y.sup.t−1), violating the right side of (41).
[0162] In a number of embodiments, an approximating algorithm of the instantaneous SED rule (40)-(41) is used since it is unclear how to implement the exact instantaneous SED rule with polynomial complexity. In the worst case, the complexity of the latter is as high as double exponential, O(2.sup.q.sup.k).
[0163] Second, in step (vi′) (decoding at time t) of the type-based anytime instantaneous SED code, the algorithm only finds the most likely length-k prefix in the type that attains the largest posterior, yet it is possible that this prefix is not the one that has the globally largest posterior (43). To search for the most probable length-k prefix, one needs to compute the posteriors for all q.sup.k prefixes of length k using O(t) types, resulting in an exponential complexity O(q.sup.k t) in the length of the prefix k, whereas the complexity of step (vi′) is only O(t), independent of k.
[0164] Although the type-based instantaneous SED code is an approximation, as shown in
Simulations
[0165] The empirically achievable rate is displayed as a function of the source length k for the instantaneous encoding phase followed by the SED code and for the instantaneous SED code described above; these achievable rates are compared to that of the SED code for a fully accessible source, as well as to that of a buffer-then-transmit code that implements the SED code during the block encoding phase. The rate R.sub.k obtained from the reliability function approximation (17) is also plotted; this approximation is given by (52).
The instantaneous encoding phase followed either by the MaxEJS code or by the SED code achieves the JSCC reliability function for streaming (36). For the simulations, the expectation E[η.sub.k] in the empirical rate is obtained by averaging the stopping times over all the experiments.
[0166] It can be observed from
[0167] The rate obtained from reliability function approximation (52) is remarkably close to the empirical achievable rates of our codes with instantaneous encoding even for very short source length k≃16. For example, at k=16, the rate obtained from approximation (52) is 0.58 (symbols per channel use) and the empirical rate of the instantaneous SED code is 0.59 (symbols per channel use). This means that the reliability function (17), an inherently asymptotic notion, accurately reflects the delay-reliability tradeoffs attained by the JSCC reliability function-achieving codes in the ultra-short blocklength regime. The achievable rate corresponding to the buffer-then-transmit code is limited by (37).
[0168] The empirically achievable rate of the instantaneous SED code described above and of its corresponding type-based code is plotted as a function of the source length k, together with the rate obtained from the reliability function approximation (52). At each source length k, experiments are run using the same method as in the simulations described above.
Streaming Over a Degenerate DMC With Zero Error
[0169] Systems and methods in accordance with several embodiments of the invention utilize a code with instantaneous encoding over a degenerate DMC (9) that achieves zero decoding error at any rate asymptotically below C/H. In several embodiments, the code allows common randomness U, which is a random variable that is revealed to the encoder and the decoder before the transmission. With common randomness U, the encoder f.sub.t (12) can use U to form X.sub.t, and the decoder g.sub.t (13) can use U to decide the stopping time η.sub.k and the estimate Ŝ.sub.η.sub.k.sup.k. A code is referred to as a (k, R, ϵ) code with instantaneous encoding and common randomness if it achieves rate R (14) and error probability ϵ (15) for transmitting k symbols of a DSS.
[0170] In a number of embodiments, to achieve Shannon's JSCC limit C/H, a Shannon limit-achieving code is used in the first communication phase to compress the source. To transmit streaming sources, an instantaneous encoding phase that satisfies (38) is combined with a Shannon limit-achieving block encoding scheme to form a Shannon limit-achieving instantaneous encoding scheme. To achieve zero error, confirmation phases can be employed. It can be said that a sequence of (k, R, ϵ.sub.k) codes with instantaneous encoding and common randomness achieves Shannon's JSCC limit C/H if, for all rates R asymptotically below C/H, the sequence of such codes indexed by k satisfies ϵ.sub.k.fwdarw.0 as k.fwdarw.∞. In a number of embodiments, the zero-error code includes such Shannon limit-achieving codes as a building block. Note that in contrast to the discussions above focused on the exponential rate of decay of ϵ.sub.k to 0 (17) over non-degenerate DMCs, here merely having ϵ.sub.k decrease to 0 suffices.
[0171] A joint source-channel code can be employed due to the simplicity of the error analysis it affords. One such code is a (k, R, ϵ.sub.k) Shannon limit-achieving code with block encoding and common randomness, because its expected decoding time to attain error probability ϵ is upper bounded with C.sub.1←C, implying that it achieves a positive error exponent that is equal to (36) with C.sub.1←C for all rates asymptotically below C/H. Another suitable block encoding scheme is a stop-feedback code, meaning that the encoder uses channel feedback only to decide whether to stop the transmission but not to form channel inputs. If the DSS has an infinite symbol arriving rate f=∞ (5), a buffer-then-transmit code using the block encoding scheme can achieve the Shannon limit. By the same token, if the DSS has a finite symbol arriving rate f<∞ (5), a code implementing an instantaneous encoding phase that satisfies (38), followed by any of the suitable block encoding schemes described herein for k source symbols, achieves the Shannon limit.
[0172] In certain embodiments, the zero-error code with instantaneous encoding and common randomness for transmitting k symbols over a degenerate DMC operates as follows. In several embodiments, the code is divided into blocks. Each block can contain a communication phase and a confirmation phase. In the first block, the communication phase uses a (k, R, ϵ.sub.k) Shannon limit-achieving code with instantaneous encoding and common randomness. The confirmation phase can select two symbols x (9a) and x′ (9b) as the channel inputs (i.e., x′ never leads to channel output y); the encoder repeatedly transmits x if the decoder's estimate of the source sequence at the end of the communication phase is correct, and transmits x′ otherwise. If the decoder receives a y in the confirmation phase, meaning that the encoder communicated its knowledge that the decoder's estimate is correct with zero error, then it outputs its estimate; otherwise, the next block is transmitted. The l-th block, l≥2, differs from the first block in that it does not compress the source, to avoid errors due to an atypical source realization, and in that it uses random coding, whereas the first block can employ any Shannon limit-achieving code.
[0173] In a number of embodiments, the code achieves zero error by employing confirmation phases that rely on the degenerate nature of the channel: receiving a y in the confirmation phase guarantees a correct estimate.
[0174] In certain embodiments, the code achieves all rates asymptotically below C/H because 1) the first block employs a Shannon limit-achieving code in the communication phase, 2) the length of the confirmation phase is made negligible compared to the length of the communication phase as the source length k.fwdarw.∞, meaning that the length of the first block asymptotically equals the length of its communication phase, and 3) subsequent blocks asymptotically do not incur a rate penalty, as discussed next. Since the length of each block is comparable to the length of the first block, it is enough to show that the expected number of blocks T.sub.k transmitted after the first block converges to zero. Refreshing the random codebook for all uncompressed source sequences in every block after the first block ensures that the channel output vectors in these subsequent blocks are i.i.d. and are independent of the channel outputs in the first block. Conditioned on T.sub.k>0, the i.i.d. vectors give rise to a geometric distribution of T.sub.k with failure probability converging to 0, which implies E[T.sub.k].fwdarw.0 as k.fwdarw.∞.
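The block structure just described can be sketched as control flow in Python. This is a minimal sketch under stated assumptions: the communication phase is abstracted into a caller-supplied `encode_block` returning the decoder's estimate, the n.sub.k-use confirmation phase is collapsed into a single channel use, and all names are hypothetical.

```python
import random

def zero_error_transmit(source, encode_block, p_y_given_ack, rng=random):
    """Block loop of the zero-error scheme: each block runs a
    communication phase (abstracted into encode_block) followed by a
    confirmation phase over a degenerate channel in which the input
    ACK can, and the input NACK never can, produce the output 'y'.
    Observing 'y' therefore certifies a correct estimate."""
    block = 1
    while True:
        estimate = encode_block(source, block)    # communication phase
        x = "ACK" if estimate == source else "NACK"
        # degenerate channel: P(y | NACK) = 0, so seeing 'y' implies
        # x == ACK, i.e. the estimate is correct -- the zero-error step
        saw_y = (x == "ACK") and (rng.random() < p_y_given_ack)
        if saw_y:
            return estimate, block                # zero-error output
        block += 1                                # retransmit next block
```

In the full scheme the first block uses a Shannon limit-achieving code and later blocks use refreshed random codebooks; here both are hidden inside `encode_block`.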
Zero-Error Code With Instantaneous Encoding and Common Randomness
[0175] In several embodiments, a zero-error code with instantaneous encoding and common randomness is utilized for transmitting k symbols of a DSS over a degenerate DMC. For a degenerate DMC (9), its single-letter transition probability can be denoted by P.sub.Y|X and its capacity-achieving distribution can be denoted by P*.sub.X. In the following discussion, x in (9a) is relabeled ACK, and x′ in (9b) is relabeled NACK. Gallager's error exponent can be denoted E.sub.G(P.sub.Y|X, R.sub.c), where R.sub.c is the channel coding rate in nats per channel use. Note that the unit of the rates in many of the examples above is symbols per channel use. The rate of the code used in the communication phase of the l-th block is denoted by R(l), and the estimate formed at the end of the communication phase of the l-th block is denoted Ŝ.sup.k(l).
[0176] In several embodiments, the zero-error code is divided into blocks. Each block can contain a communication phase and a confirmation phase. In many embodiments, the first block is different from the blocks after it, since it uses a Shannon limit-achieving code in the communication phase, whereas the blocks after the first block use random coding for all source sequences in alphabet [q].sup.k. The first block and the l-th block, l≥2, are introduced respectively below.
[0177] The first block can be transmitted according to steps i)-ii) below. [0178] i) Communication phase. The first k symbols S.sup.k of the DSS described above are transmitted via a Shannon limit-achieving code with instantaneous encoding and common randomness at rate R(1) symbols per channel use. At the end of the communication phase, the decoder yields an estimate Ŝ.sup.k(1) of the source S.sup.k using the channel outputs that it has received in this phase. [0179] ii) Confirmation phase. The encoder knows Ŝ.sup.k(1) since it knows the channel outputs through the noiseless feedback. The encoder repeatedly transmits ACK if S.sup.k=Ŝ.sup.k(1), and transmits NACK if S.sup.k≠Ŝ.sup.k(1), for n.sub.k channel uses. We pick n.sub.k as
n.sub.k=δk, (53)
where δ∈(0,1) can be made arbitrarily small. At the end of the confirmation phase, if the decoder receives a y, then it terminates the transmission and outputs Ŝ.sub.η.sub.k.sup.k=Ŝ.sup.k(1); otherwise, the next block is transmitted.
[0180] The l-th block, l≥2, is transmitted according to steps iii)-iv) below. [0181] iii) Communication phase. For every sequence in the alphabet [q].sup.k of S.sup.k, the encoder generates a codeword via random coding according to the capacity-achieving distribution P*.sub.X at rate R(l) symbols per channel use. At the end of the communication phase, the maximum likelihood (ML) decoder yields an estimate Ŝ.sup.k(l) of the source symbols S.sup.k using the channel outputs that it has received in this phase. [0182] iv) Confirmation phase. The encoder, the decoder, and the stopping rule are the same as those in the first block with Ŝ.sup.k(1)←Ŝ.sup.k(l).
[0183] The random codebook is refreshed in every retransmitted block and is known by the decoder. This gives rise to the following observations:
[0184] 1) The codewords transmitted in the communication phases of the l=1, 2, . . . blocks are independent from each other;
2) As a result of 1), the channel outputs of the l=1, 2, . . . blocks are independent from each other;
3) The codewords transmitted in the communication phases of the l=2, 3, . . . blocks are i.i.d. random vectors. (The codeword in the first block is excluded since the first block need not use random coding in the communication phase);
4) As a result of 3), the channel outputs of the l=2, 3, . . . blocks are i.i.d. random vectors.
[0185] While specific zero-error codes are described above for use with instantaneous encoding and common randomness, any of a variety of zero-error codes can be utilized to perform instantaneous encoding as appropriate to the requirements of specific applications in accordance with various embodiments of the invention.
[0186] Although the present invention has been described in certain specific aspects, many additional modifications and variations would be apparent to those skilled in the art. It is therefore to be understood that the present invention can be practiced otherwise than specifically described including using any of a variety of different encoders, decoders and streaming (and non-streaming) sources without departing from the scope and spirit of the present invention. Thus, embodiments of the present invention should be considered in all respects as illustrative and not restrictive. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents.