Transceiver decoding method and system based on protograph differential chaos shift keying
12452113 · 2025-10-21
Assignee
Inventors
- Yi Fang (Guangdong, CN)
- Yuyi He (Guangdong, CN)
- Liang Lv (Guangdong, CN)
- Liuguo Yin (Guangdong, CN)
- Pingping Chen (Guangdong, CN)
CPC classification
Y02D30/70
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
International classification
Abstract
Disclosed are a transceiver decoding method and system based on protograph differential chaos shift keying (DCSK), the method including acquiring an information bit sequence, performing a coded modulation on the information bit sequence via a protograph DCSK transmitter, and outputting a plurality of modulated symbols; inputting each modulated symbol into a wireless channel for channel interference, and outputting a received symbol corresponding to each modulated symbol; using a target extrinsic information-aided network to determine a target a-posteriori probability vector corresponding to each received symbol; inputting each target a-posteriori probability vector into an a-priori calculation decoder for decoding, and outputting an initial decoded bit sequence; and determining a target decoded bit sequence according to the initial decoded bit sequence and a preset check matrix. This solves the technical problem of low decoding accuracy in existing transceiver decoding methods designed for DCSK systems.
Claims
1. A transceiver decoding method based on protograph differential chaos shift keying, a transceiver comprising a protograph differential chaos shift keying transmitter, a wireless channel, a target extrinsic information-aided network and an a-priori calculation decoder, wherein the method comprises: acquiring an information bit sequence, performing a coded modulation on the information bit sequence via the protograph differential chaos shift keying transmitter, and outputting a plurality of modulated symbols; inputting each modulated symbol into the wireless channel for channel interference, and outputting a received symbol corresponding to each modulated symbol; using the target extrinsic information-aided network to determine a target a-posteriori probability vector corresponding to each received symbol according to each received symbol and a preset initial a-posteriori probability vector corresponding to each received symbol; inputting each target a-posteriori probability vector into the a-priori calculation decoder for decoding, and outputting an initial decoded bit sequence; and determining a target decoded bit sequence according to the initial decoded bit sequence and a preset check matrix; and the target extrinsic information-aided network comprises a convolutional neural network-long short-term memory block, a fully connected block, and a decision block; and the step of using the target extrinsic information-aided network to determine a target a-posteriori probability vector corresponding to each received symbol according to each received symbol and a preset initial a-posteriori probability vector corresponding to each received symbol comprises: using the convolutional neural network-long short-term memory block to perform one-dimensional feature extraction on each received symbol, and generating a first feature vector corresponding to each received symbol; using the fully connected block to perform vector feature extraction on the preset initial 
a-posteriori probability vector corresponding to each received symbol, and determining a second feature vector corresponding to each preset initial a-posteriori probability vector; and inputting each first feature vector and each second feature vector into the decision block for feature concatenating separately, and outputting the target a-posteriori probability vector corresponding to each received symbol.
2. The transceiver decoding method based on protograph differential chaos shift keying according to claim 1, wherein the protograph differential chaos shift keying transmitter comprises a protograph encoder and an M-ary differential chaos shift keying modulator; and the step of performing a coded modulation on the information bit sequence via the protograph differential chaos shift keying transmitter, and outputting a plurality of modulated symbols comprises: encoding the information bit sequence using the protograph encoder to generate an encoded bit sequence; dividing the encoded bit sequence according to a preset number of encoded bits, and outputting a plurality of groups of encoded bits; and inputting each encoded bit into the M-ary differential chaos shift keying modulator for modulation to determine the plurality of modulated symbols.
3. The transceiver decoding method based on protograph differential chaos shift keying according to claim 1, wherein the convolutional neural network-long short-term memory block comprises a one-dimensional convolutional layer, a batch normalized layer, a first activation function layer and a long short-term memory layer; and the step of using the convolutional neural network-long short-term memory block to perform one-dimensional feature extraction on each received symbol, and generating a first feature vector corresponding to each received symbol comprises: performing a one-dimensional convolutional operation on each received symbol via the one-dimensional convolutional layer to generate a first convolutional feature corresponding to each received symbol; normalizing each first convolutional feature using the batch normalized layer to determine a first normalized feature corresponding to each first convolutional feature; inputting each first normalized feature into the first activation function layer for nonlinear mapping to generate a nonlinear feature corresponding to each first normalized feature; performing the one-dimensional convolutional operation on each nonlinear feature via the one-dimensional convolutional layer to generate a second convolutional feature corresponding to each nonlinear feature; normalizing each second convolutional feature using the batch normalized layer to determine a second normalized feature corresponding to each second convolutional feature; inputting each second normalized feature into the first activation function layer for nonlinear mapping to generate a two-dimensional feature matrix corresponding to each second normalized feature; sequentially inputting column data of each two-dimensional feature matrix into the long short-term memory layer for feature extraction according to a column sequence number of each two-dimensional feature matrix to generate a plurality of initial feature vectors corresponding to each two-dimensional 
feature matrix; and screening the plurality of initial feature vectors corresponding to each two-dimensional feature matrix to determine a first feature vector corresponding to each two-dimensional feature matrix.
4. The transceiver decoding method based on protograph differential chaos shift keying according to claim 1, wherein the fully connected block comprises a fully connected layer, a batch normalized layer and a first activation function layer; and the step of using the fully connected block to perform vector feature extraction on the preset initial a-posteriori probability vector corresponding to each received symbol, and determining a second feature vector corresponding to each preset initial a-posteriori probability vector comprises: performing feature mapping on the preset initial a-posteriori probability vector corresponding to each received symbol via the fully connected layer to generate a first a-posteriori probability mapping feature corresponding to each preset initial a-posteriori probability vector; performing normalization and nonlinear mapping on each first a-posteriori probability mapping feature using the batch normalized layer and the first activation function layer, respectively, to determine a first probability nonlinear feature corresponding to each first a-posteriori probability mapping feature; performing feature mapping on each first probability nonlinear feature via the fully connected layer to generate a second a-posteriori probability mapping feature corresponding to each first probability nonlinear feature; performing normalization and nonlinear mapping on each second a-posteriori probability mapping feature using the batch normalized layer and the first activation function layer, respectively, to determine a second probability nonlinear feature corresponding to each second a-posteriori probability mapping feature; performing feature mapping on each second probability nonlinear feature via the fully connected layer to generate a third a-posteriori probability mapping feature corresponding to each second probability nonlinear feature; and performing normalization and nonlinear mapping on each third a-posteriori probability mapping feature 
using the batch normalized layer and the first activation function layer, respectively, to determine a second feature vector corresponding to each third a-posteriori probability mapping feature.
5. The transceiver decoding method based on protograph differential chaos shift keying according to claim 1, wherein the decision block comprises a fully connected layer, a batch normalized layer, a first activation function layer and a second activation function layer; and the step of inputting each first feature vector and each second feature vector into the decision block for feature concatenating separately, and outputting the target a-posteriori probability vector corresponding to each received symbol comprises: separately performing one-dimensional concatenation on each first feature vector and each second feature vector to generate a concatenated feature vector corresponding to each received symbol; performing feature mapping on each concatenated feature vector via the fully connected layer to determine a first concatenated mapping feature corresponding to each concatenated feature vector; performing normalization and nonlinear mapping on each first concatenated mapping feature using the batch normalized layer and the first activation function layer, respectively, to generate a first concatenated nonlinear feature corresponding to each first concatenated mapping feature; performing feature mapping on each first concatenated nonlinear feature via the fully connected layer to determine a second concatenated mapping feature corresponding to each first concatenated nonlinear feature; and performing nonlinear mapping on each second concatenated mapping feature using the second activation function layer to generate a target a-posteriori probability vector corresponding to each second concatenated mapping feature.
6. The transceiver decoding method based on protograph differential chaos shift keying according to claim 1, wherein the a-priori calculation decoder comprises an a-priori log-likelihood ratio calculation block and a protograph decoder; and the step of inputting each target a-posteriori probability vector into the a-priori calculation decoder for decoding, and outputting an initial decoded bit sequence comprises: performing a-priori conversion on each target a-posteriori probability vector via the a-priori log-likelihood ratio calculation block to determine an a-priori log-likelihood ratio vector corresponding to each target a-posteriori probability vector; inputting each a-priori log-likelihood ratio vector into the protograph decoder for protograph decoding to generate an a-posteriori log-likelihood ratio vector corresponding to each a-priori log-likelihood ratio vector; and performing hard decision on each a-posteriori log-likelihood ratio vector to generate an initial decoded bit sequence.
7. The transceiver decoding method based on protograph differential chaos shift keying according to claim 6, wherein the step of determining a target decoded bit sequence according to the initial decoded bit sequence and a preset check matrix comprises: performing a multiplication operation on the initial decoded bit sequence and the preset check matrix to determine a sequence-matrix product value, and counting the number of iterations in real-time; taking the initial decoded bit sequence as a target decoded bit sequence in a case that the sequence-matrix product value is zero or the number of iterations reaches a preset number of global iterations; separately performing a difference operation on the a-priori log-likelihood ratio vector corresponding to each target a-posteriori probability vector and the a-posteriori log-likelihood ratio vectors in a case that the sequence-matrix product value is not zero and the number of iterations does not reach the preset number of global iterations, and outputting an extrinsic information vector corresponding to each target a-posteriori probability vector; using elements of each extrinsic information vector to update elements of the preset initial a-posteriori probability vector corresponding to each received symbol, and determining a new preset initial a-posteriori probability vector corresponding to each received symbol; determining the sequence-matrix product value according to each received symbol and each new preset initial a-posteriori probability vector until the sequence-matrix product value is zero or the number of iterations reaches the preset number of global iterations; and taking the initial decoded bit sequence determined in a case that the sequence-matrix product value is zero or the number of iterations reaches the preset number of global iterations as the target decoded bit sequence.
8. The transceiver decoding method based on protograph differential chaos shift keying according to claim 1, wherein a training process of the target extrinsic information-aided network comprises: acquiring an information bit sequence to be trained, performing a coded modulation on the information bit sequence to be trained via the protograph differential chaos shift keying transmitter, and outputting a plurality of modulated symbols to be trained; inputting each modulated symbol to be trained into the wireless channel for channel interference, and outputting a received symbol to be trained corresponding to each modulated symbol to be trained; calculating an initial a-posteriori probability vector to be trained corresponding to each received symbol to be trained according to each received symbol to be trained using a preset log-likelihood ratio calculation method; and on the basis of a preset cross-entropy loss function, performing network training on an initial extrinsic information-aided network using each received symbol to be trained and each initial a-posteriori probability vector to be trained, and determining a well-trained target extrinsic information-aided network.
9. A transceiver decoding system based on protograph differential chaos shift keying, a transceiver comprising a protograph differential chaos shift keying transmitter, a wireless channel, a target extrinsic information-aided network and an a-priori calculation decoder, wherein the system comprises: an acquisition block, configured to acquire an information bit sequence, perform a coded modulation on the information bit sequence via the protograph differential chaos shift keying transmitter, and output a plurality of modulated symbols; an interference block, configured to input each modulated symbol into the wireless channel for channel interference, and output a received symbol corresponding to each modulated symbol; a network output block, configured to use the target extrinsic information-aided network to determine a target a-posteriori probability vector corresponding to each received symbol according to each received symbol and a preset initial a-posteriori probability vector corresponding to each received symbol; a decoding block, configured to input each target a-posteriori probability vector into the a-priori calculation decoder for decoding, and output an initial decoded bit sequence; and a determination block, configured to determine a target decoded bit sequence according to the initial decoded bit sequence and a preset check matrix; and the target extrinsic information-aided network comprises a convolutional neural network-long short-term memory block, a fully connected block and a decision block; and the network output block comprises: a first sub-block, configured to use the convolutional neural network-long short-term memory block to perform one-dimensional feature extraction on each received symbol, and generate a first feature vector corresponding to each received symbol; a second sub-block, configured to use the fully connected block to perform vector feature extraction on the preset initial a-posteriori probability vector corresponding to each 
received symbol, and determine a second feature vector corresponding to each preset initial a-posteriori probability vector; and a third sub-block, configured to input each first feature vector and each second feature vector into the decision block for feature concatenating separately, and output the target a-posteriori probability vector corresponding to each received symbol.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) To describe the technical solutions in the embodiments of the disclosure or in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the disclosure, and those skilled in the art can obtain other drawings from them without creative effort.
DETAILED DESCRIPTION
(10) Embodiments of the disclosure provide a transceiver decoding method and system based on protograph DCSK, to solve the technical problem of low decoding accuracy in existing transceiver decoding methods designed for DCSK systems.
(11) To make the purpose, features, and advantages of the disclosure clearer and more understandable, the technical solutions in the embodiments of the disclosure are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the disclosure. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the disclosure without creative effort fall within the scope of protection of the disclosure.
(12) Referring to
(13) The disclosure provides a transceiver decoding method based on protograph DCSK. A transceiver includes a protograph DCSK transmitter, a wireless channel, a target extrinsic information-aided network and an a-priori calculation decoder. The method includes the following steps.
(14) Step 101, an information bit sequence is acquired, a coded modulation is performed on the information bit sequence via the protograph DCSK transmitter, and a plurality of modulated M-ary DCSK symbols are outputted.
(15) It is to be noted that, referring to
(16) Specifically, the given information bit sequence is acquired, and the information bit sequence is encoded using the protograph encoder to generate an encoded bit sequence; the encoded bit sequence is divided according to the preset number of encoded bits, and a plurality of groups of encoded bits are outputted; and each group of encoded bits is inputted into the M-ary DCSK modulator for modulation to determine a plurality of modulated M-ary DCSK symbols. The preset number of encoded bits refers to the number of encoded bits carried by a single M-ary DCSK symbol; it can be set as needed, and the disclosure is not limited in this respect.
(17) In the embodiment, the information bit sequence is acquired, the coded modulation is performed on the information bit sequence via the protograph DCSK transmitter, and a plurality of modulated M-ary DCSK symbols are outputted.
(18) Step 102, each modulated M-ary DCSK symbol is inputted into the wireless channel for channel interference, and a received M-ary DCSK symbol corresponding to each modulated M-ary DCSK symbol is outputted.
(19) It is to be noted that the wireless channel serves as the path between the protograph DCSK transmitter and the extrinsic information receiver. After the coded modulation by the protograph DCSK transmitter, the modulated M-ary DCSK symbol can be represented as S̃_u = [w_{u,1}x, w_{u,2}x, . . . , w_{u,M}x], which then undergoes channel interference through the wireless channel, and the received M-ary DCSK symbol corresponding to each modulated M-ary DCSK symbol is outputted. S̃_u is the modulated M-ary DCSK symbol, w_{u,1} is the first element in the u-th row of a Walsh code, w_{u,2} is the second element in the u-th row of the Walsh code, w_{u,M} is the M-th element in the u-th row of the Walsh code, and x is a chaotic sequence. The received M-ary DCSK symbol can be expressed as:
r = hS̃_u + n; where r is the received M-ary DCSK symbol; h is a channel response factor; S̃_u is the modulated M-ary DCSK symbol; and n is an AWGN vector having the same size as the modulated M-ary DCSK symbol.
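The channel model above, r = h·S̃_u + n with n an AWGN vector, can be sketched in a few lines of numpy. The channel response factor, SNR value, and noise sizing below are illustrative assumptions, not parameters fixed by the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn_channel(s_u, h=1.0, snr_db=10.0):
    """Pass a modulated symbol vector through r = h * S_u + n.

    s_u    : modulated M-ary DCSK symbol (1-D array)
    h      : channel response factor (1.0 models an AWGN-only channel)
    snr_db : assumed signal-to-noise ratio used to size the AWGN vector
    """
    sig_power = np.mean(s_u ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    # AWGN vector with the same size as the modulated symbol
    n = rng.normal(0.0, np.sqrt(noise_power), size=s_u.shape)
    return h * s_u + n

r = awgn_channel(np.ones(16), h=1.0, snr_db=20.0)
```

A fading channel would be modeled by drawing h from a fading distribution instead of fixing it to 1.0.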
(20) In the embodiment, each modulated M-ary DCSK symbol is inputted into the wireless channel for channel interference, and the received M-ary DCSK symbol corresponding to each modulated M-ary DCSK symbol is outputted.
(21) Step 103, the target extrinsic information-aided network is used to determine a target a-posteriori probability vector corresponding to each received M-ary DCSK symbol according to each received M-ary DCSK symbol and a preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol.
(22) The target extrinsic information-aided network includes a CNN-LSTM block, an FC block and a decision block.
(23) Specifically, the CNN-LSTM block is used to perform one-dimensional feature extraction on each received M-ary DCSK symbol, and a first feature vector corresponding to each received M-ary DCSK symbol is generated; the FC block is used to perform vector feature extraction on the preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol, and a second feature vector corresponding to each preset initial a-posteriori probability vector is determined; and each first feature vector and each second feature vector are inputted into the decision block for feature concatenating separately, and the target a-posteriori probability vector corresponding to each received M-ary DCSK symbol is outputted. The preset initial a-posteriori probability vector is represented as P(S̃|r_i)^(0) = {1/M, 1/M, . . . , 1/M}, where P(S̃|r_i)^(0) is the preset initial a-posteriori probability vector in a case that the number of global iterations g = 0, S̃ is the set of the modulated M-ary DCSK symbols, r_i is the i-th received M-ary DCSK symbol, and M is a preset modulation order.
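As a rough illustration of the decision block's final stage, the sketch below concatenates a hypothetical CNN-LSTM feature and a hypothetical FC-block feature, applies one fully connected mapping, and normalizes with a softmax (playing the role of the second activation function layer) so the output is a valid probability vector over the M candidate symbols. All dimensions and weights are placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def decision_block(f1, f2, w, b):
    """Concatenate the CNN-LSTM feature f1 and the FC-block feature f2,
    apply one fully connected layer, then a softmax so the output is a
    valid a-posteriori probability vector over the M candidate symbols."""
    z = np.concatenate([f1, f2]) @ w + b      # feature concatenation + FC mapping
    e = np.exp(z - z.max())                   # numerically stable softmax
    return e / e.sum()

M = 4                                          # assumed modulation order
f1 = rng.normal(size=32)                       # placeholder CNN-LSTM feature
f2 = rng.normal(size=16)                       # placeholder FC-block feature
w = rng.normal(scale=0.1, size=(48, M))        # placeholder FC weights
b = np.zeros(M)
p = decision_block(f1, f2, w, b)               # probability vector, sums to 1
```

The softmax output can be fed directly to the a-priori LLR calculation block, since it is non-negative and sums to one.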
(24) In the embodiment, the target extrinsic information-aided network is used to determine the target a-posteriori probability vector corresponding to each received M-ary DCSK symbol according to each received M-ary DCSK symbol and the preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol.
(25) Step 104, each target a-posteriori probability vector is inputted into the a-priori calculation decoder for decoding, and an initial decoded bit sequence is outputted.
(26) The a-priori calculation decoder includes an a-priori LLR calculation block and a protograph decoder.
(27) It is to be noted that the a-priori LLR calculation block is used to perform a-priori conversion on each target a-posteriori probability vector, to determine the a-priori LLR vector corresponding to each target a-posteriori probability vector. Specifically, for each received M-ary DCSK symbol, there is a corresponding group of modulated bits {c^1, c^2, . . . , c^m}. The a-priori LLR calculation block converts the elements of the target a-posteriori probability vector to obtain the element of the a-priori LLR vector corresponding to each modulated bit in each received M-ary DCSK symbol; the a-priori LLR vector corresponding to each received M-ary DCSK symbol is then assembled from all of its elements. The elements of the a-priori LLR vector are computed as follows:
(28) Z_a^(g)(c^n) = ln( Σ_{S̃_u ∈ S̃_0^n} P(S̃_u|r_i)^(g) / Σ_{S̃_u ∈ S̃_1^n} P(S̃_u|r_i)^(g) );
(29) where Z_a^(g)(c^n) is an element of an a-priori LLR vector corresponding to the n-th modulated bit in a case that the number of global iterations is g; c^n is the n-th modulated bit;
(30) P(S̃_u|r_i)^(g) is an element in a target a-posteriori probability vector in a case that the number of global iterations is g, and this element represents an a-posteriori probability of the modulated M-ary DCSK symbol S̃_u outputted by the target extrinsic information-aided network; S̃_u is the modulated M-ary DCSK symbol;
(31) S̃_0^n is a subset with the n-th modulated bit being 0 in a modulated M-ary DCSK symbol set; and
(32) S̃_1^n is a subset with the n-th modulated bit being 1 in the modulated M-ary DCSK symbol set.
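Under a natural index-to-bits mapping (an illustrative assumption; the disclosure only names the mapping rule), the per-bit a-priori LLR computation can be sketched as:

```python
import numpy as np

def apriori_llrs(p_symbols):
    """Compute the per-bit a-priori LLR vector from a symbol a-posteriori
    probability vector P(S_u | r_i), u = 0..M-1.

    Assumes the bits of symbol u are the binary digits of its index
    (an illustrative mapping choice).  Each element is
    LLR(c^n) = ln( sum_{u: bit n = 0} P_u / sum_{u: bit n = 1} P_u ).
    """
    M = len(p_symbols)
    m = int(np.log2(M))                       # bits carried per symbol
    llrs = np.empty(m)
    for n in range(m):
        bit_n = (np.arange(M) >> (m - 1 - n)) & 1   # n-th bit of each symbol index
        p0 = p_symbols[bit_n == 0].sum()
        p1 = p_symbols[bit_n == 1].sum()
        llrs[n] = np.log(p0 / p1)
    return llrs

# Uniform symbol posteriors carry no a-priori information:
print(apriori_llrs(np.full(4, 0.25)))         # -> [0. 0.]
```

A posterior concentrated on symbols whose n-th bit is 0 drives the n-th LLR positive, as expected under this convention.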
(33) Furthermore, the decoding operation is performed using the a-priori LLR vectors corresponding to all the received M-ary DCSK symbols to output the initial decoded bit sequence. Specifically, each a-priori LLR vector is inputted into the protograph decoder for protograph decoding, and the a-posteriori LLR vector corresponding to each a-priori LLR vector is generated; and hard decision is applied to each a-posteriori LLR vector to generate the initial decoded bit sequence.
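Assuming the common LLR convention ln(P(bit = 0)/P(bit = 1)), the hard decision on each a-posteriori LLR vector reduces to a sign test:

```python
import numpy as np

def hard_decision(llr_post):
    """Hard decision on a-posteriori LLRs.  With the convention
    LLR = ln(P(bit=0)/P(bit=1)), a positive LLR decides bit 0
    and a negative LLR decides bit 1."""
    return (np.asarray(llr_post) < 0).astype(int)

print(hard_decision([2.3, -0.7, 0.1, -4.2]))  # -> [0 1 0 1]
```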
(34) In the embodiment, each target a-posteriori probability vector is inputted into the a-priori calculation decoder for decoding, and the initial decoded bit sequence is outputted.
(35) Step 105, a target decoded bit sequence is determined according to the initial decoded bit sequence and a preset check matrix.
(36) It is to be noted that a multiplication operation is performed on the initial decoded bit sequence and the preset check matrix to determine the sequence-matrix product value, and the number of iterations is counted in real-time. In a case that the sequence-matrix product value is zero or the number of iterations reaches the preset number of global iterations, the initial decoded bit sequence is taken as the target decoded bit sequence; and in a case that the sequence-matrix product value is not zero and the number of iterations does not reach the preset number of global iterations, a probability calculation block is used to perform a difference operation between the a-posteriori LLR vector and the a-priori LLR vector corresponding to each target a-posteriori probability vector, and the extrinsic information vector corresponding to each target a-posteriori probability vector is outputted. The specific calculation process for the extrinsic information vectors is as follows:
(37) Z_e^(g) = Z_p^(g) - Z_a^(g);
(38) where Z_e^(g) is an extrinsic information vector in a case that the number of global iterations is g;
(39) Z_p^(g) is an a-posteriori LLR vector in a case that the number of global iterations is g; and
(40) Z_a^(g) is an a-priori LLR vector in a case that the number of global iterations is g.
(41) Furthermore, the elements of the extrinsic information vector corresponding to each received M-ary DCSK symbol are utilized to update the elements of the preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol. According to all the updated elements of the preset initial a-posteriori probability vector, new preset initial a-posteriori probability vectors are obtained, that is, the elements of each extrinsic information vector are used to update the elements of the preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol, thereby determining the new preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol. The specific process for updating the new preset initial a-posteriori probability vector is as follows:
(42) P(S̃_u|r_i)^(g) = ∏_{n=1}^{m} P(c^n = b|r_i)^(g);
(43) P(c^n = 0|r_i)^(g) = e^{Z_e^(g-1)(c^n)} / (1 + e^{Z_e^(g-1)(c^n)}), and P(c^n = 1|r_i)^(g) = 1 - P(c^n = 0|r_i)^(g);
(44) where P(S̃_u|r_i)^(g) is an element of a new preset initial a-posteriori probability vector in a case that the number of global iterations is g; m is the preset number of the modulated bits; c^n is the n-th modulated bit; P(c^n = b|r_i)^(g) is an a-posteriori probability that the n-th modulated bit corresponding to the received M-ary DCSK symbol is b in a case that the number of global iterations is g; P(c^n = 0|r_i)^(g) is an a-posteriori probability that the n-th modulated bit corresponding to the received M-ary DCSK symbol is 0 in a case that the number of global iterations is g;
(45) Z_e^(g-1)(c^n) is an element of an extrinsic information vector corresponding to the n-th modulated bit in a case that the number of global iterations is g-1; P(c^n = 1|r_i)^(g) is an a-posteriori probability that the n-th modulated bit corresponding to the received M-ary DCSK symbol is 1 in a case that the number of global iterations is g; and b is a constant, b ∈ {0, 1}, taking the value of the n-th modulated bit of S̃_u.
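This update can be sketched as follows, assuming the ln(P(bit = 0)/P(bit = 1)) LLR convention and a natural index-to-bits mapping (both illustrative assumptions):

```python
import numpy as np

def update_symbol_priors(z_ext):
    """Rebuild the symbol probability vector fed back to the network from
    the per-bit extrinsic LLRs of the previous global iteration.

    With LLR = ln(P0/P1), P(c^n = 0) = exp(z)/(1 + exp(z)); the symbol
    probability is the product of its bit probabilities under an assumed
    natural index-to-bits mapping.
    """
    z_ext = np.asarray(z_ext, dtype=float)
    m = len(z_ext)                                # bits per symbol
    p0 = np.exp(z_ext) / (1.0 + np.exp(z_ext))    # P(c^n = 0 | r_i)
    p_bits = np.stack([p0, 1.0 - p0])             # row b holds P(c^n = b)
    M = 2 ** m
    p_sym = np.empty(M)
    for u in range(M):
        bits = [(u >> (m - 1 - n)) & 1 for n in range(m)]
        p_sym[u] = np.prod([p_bits[b, n] for n, b in enumerate(bits)])
    return p_sym

p = update_symbol_priors([0.0, 0.0])   # no extrinsic information -> uniform 1/M
```

Zero extrinsic LLRs reproduce the uniform initial vector {1/M, ..., 1/M}, so the first global iteration is consistent with the preset initialization.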
(46) Furthermore, according to each received M-ary DCSK symbol and each new preset initial a-posteriori probability vector, the sequence-matrix product value is determined, that is, the target extrinsic information-aided network is used to determine the new target a-posteriori probability vector corresponding to each received M-ary DCSK symbol according to each received M-ary DCSK symbol and each new preset initial a-posteriori probability vector. Each new target a-posteriori probability vector is inputted into the a-priori calculation decoder for decoding, and the new initial decoded bit sequence is outputted; and a multiplication operation is performed on the new initial decoded bit sequence and the preset check matrix to determine the sequence-matrix product value until the sequence-matrix product value is zero or the number of iterations reaches the preset number of global iterations; and the initial decoded bit sequence determined in a case that the sequence-matrix product value is zero or the number of iterations reaches the preset number of global iterations is taken as the target decoded bit sequence. The preset number of global iterations can be set as needed, and the disclosure is not limited to this.
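The stopping logic described above can be skeletonized as follows; `receiver_step` is a hypothetical helper standing in for one pass through the target extrinsic information-aided network plus the a-priori calculation decoder, and `max_iters` (assumed ≥ 1) is the preset number of global iterations:

```python
import numpy as np

def syndrome_is_zero(bits, H):
    """Parity check: the decoded sequence is valid iff H @ bits = 0 (mod 2)."""
    return not np.any((H @ bits) % 2)

def global_decode(receiver_step, H, max_iters):
    """Skeleton of the global iteration of Step 105.

    receiver_step(priors) runs the network and decoder once and returns
    (decoded_bits, new_priors); priors=None means the uniform initial
    a-posteriori probability vectors.
    """
    priors = None
    for _ in range(max_iters):
        bits, priors = receiver_step(priors)
        if syndrome_is_zero(bits, H):          # early exit on zero syndrome
            break
    return bits
```

A toy check matrix shows the early-exit behavior: a valid codeword is returned on the first pass.

```python
H = np.array([[1, 1, 0], [0, 1, 1]])
bits = global_decode(lambda p: (np.array([1, 1, 1]), p), H, max_iters=5)
```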
(47) In the embodiment, the target decoded bit sequence is determined according to the initial decoded bit sequence and the preset check matrix.
(48) In an embodiment of the disclosure, a transceiver decoding method based on protograph DCSK is provided. First, an information bit sequence is acquired, a coded modulation is performed on the information bit sequence via a protograph DCSK transmitter, and a plurality of modulated M-ary DCSK symbols are outputted; then, each modulated M-ary DCSK symbol is inputted into a wireless channel for channel interference, and a received M-ary DCSK symbol corresponding to each modulated M-ary DCSK symbol is outputted; a target extrinsic information-aided network is used to determine a target a-posteriori probability vector corresponding to each received M-ary DCSK symbol according to each received M-ary DCSK symbol and a preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol; each target a-posteriori probability vector is inputted into an a-priori calculation decoder for decoding, and an initial decoded bit sequence is outputted; and finally, the target decoded bit sequence is determined according to the initial decoded bit sequence and the preset check matrix. According to the above-mentioned solutions, in the process of generating the target decoded bit sequence on the basis of the initial decoded bit sequence determined by the target extrinsic information-aided network and the preset check matrix, both the received M-ary DCSK symbol and the a-posteriori probability vector corresponding thereto are considered, thereby improving the performance of a protograph DCSK coded modulation system, and further improving the decoding accuracy.
(49) Referring to
(50) The disclosure provides a transceiver decoding method based on protograph DCSK. A transceiver includes a protograph DCSK transmitter, a wireless channel, a target extrinsic information-aided network and an a-priori calculation decoder. The method includes the following steps.
(51) Step 301, an information bit sequence is acquired, a coded modulation is performed on the information bit sequence via the protograph DCSK transmitter, and a plurality of modulated M-ary DCSK symbols are outputted.
(52) The protograph DCSK transmitter includes a protograph encoder and an M-ary DCSK modulator.
(53) Furthermore, step 301 can include the following sub-steps. S11, the information bit sequence is encoded using the protograph encoder to generate an encoded bit sequence; S12, the encoded bit sequence is divided according to a preset number of encoded bits, and a plurality of groups of encoded bits are outputted; and S13, each encoded bit is inputted into the M-ary DCSK modulator for modulation to determine the plurality of modulated M-ary DCSK symbols.
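Sub-step S12 (dividing the encoded bit sequence into groups of the preset number of encoded bits m = log2 M) and the subsequent selection of a Walsh-code row can be sketched as below; the function names are illustrative, and the "binary value plus one" reading of the natural mapping rule is an assumption.

```python
import numpy as np

def group_encoded_bits(c, M):
    # S12: divide the encoded bit sequence into groups of m = log2(M)
    # encoded bits, one group per M-ary DCSK symbol.
    m = int(np.log2(M))
    c = np.asarray(c)
    assert len(c) % m == 0, "sequence length must be a multiple of m"
    return c.reshape(-1, m)

def group_to_row_index(group):
    # Natural mapping rule (assumed here: the binary value of the group
    # plus one) selects the row index u of the Walsh code W.
    return 1 + int("".join(str(int(b)) for b in group), 2)
```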
(54) It is to be noted that, a given information bit sequence b={b.sub.1, b.sub.2, . . . , b.sub.k} is acquired and input to the protograph encoder to obtain an encoded bit sequence c={c.sub.1, c.sub.2, . . . , c.sub.n}, which is sent to the M-ary DCSK modulator. The modulated order is defined, that is, the preset modulated order is M, and the preset number of encoded bits m=log.sub.2 M is obtained, which represents the number of the encoded bits carried by a single M-ary DCSK symbol. In the M-ary DCSK modulator, a chaotic signal generator generates a chaotic sequence x={x.sub.1, x.sub.2, . . . } of a preset length, and a Walsh code generator produces an M-order Walsh code, i.e., Walsh code W, according to M. According to the natural mapping rule, the code data W.sub.u=[w.sub.u,1, w.sub.u,2, . . . , w.sub.u,M] in the u.sub.th row of W is selected to participate in the modulation. The modulated M-ary DCSK symbol after modulation is denoted as {tilde over (S)}.sub.u=[w.sub.u,1x, w.sub.u,2x, . . . , w.sub.u,Mx]. The set of the modulated M-ary DCSK symbols is represented as {tilde over (S)}={{tilde over (S)}.sub.1, {tilde over (S)}.sub.2, . . . , {tilde over (S)}.sub.M}. b.sub.1 is the first information bit, b.sub.2 is the second information bit, and b.sub.k is the k.sub.th information bit. c.sub.1 is the first encoded bit, c.sub.2 is the second encoded bit, and c.sub.n is the n.sub.th encoded bit. x.sub.1 is the first element in the chaotic sequence, and x.sub.2 is the second element in the chaotic sequence. {tilde over (S)}.sub.u is the u.sub.th modulated M-ary DCSK symbol, w.sub.u,1 is the first element in the u.sub.th row of the Walsh code, w.sub.u,2 is the second element in the u.sub.th row of the Walsh code, and w.sub.u,M is the M.sub.th element in the u.sub.th row of the Walsh code. {tilde over (S)}.sub.1 is the first modulated M-ary DCSK symbol of the symbol set, {tilde over (S)}.sub.2 is the second modulated M-ary DCSK symbol of the symbol set, and {tilde over (S)}.sub.M is the M.sub.th modulated M-ary DCSK symbol of the symbol set.
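A minimal sketch of the modulation in sub-step S13 follows. Two assumptions are made explicit: the Walsh code is built by the Sylvester construction, and the logistic map stands in for the unspecified chaotic signal generator.

```python
import numpy as np

def walsh_code(M):
    # M-order Walsh code W via the Sylvester construction (M a power of 2).
    W = np.array([[1.0]])
    while W.shape[0] < M:
        W = np.block([[W, W], [W, -W]])
    return W

def chaotic_sequence(length, x0=0.3, mu=3.99):
    # The disclosure does not name the chaotic map; the logistic map is a
    # common stand-in used here for illustration only.
    x = np.empty(length)
    x[0] = x0
    for i in range(1, length):
        x[i] = mu * x[i - 1] * (1.0 - x[i - 1])
    return x

def modulate_symbol(u, M, beta):
    # S~_u = [w_{u,1}x, w_{u,2}x, ..., w_{u,M}x]: the u-th Walsh row
    # (1-indexed as in the text) spreads the chaotic sequence x of length beta.
    W = walsh_code(M)
    x = chaotic_sequence(beta)
    return np.concatenate([W[u - 1, k] * x for k in range(M)])
```

The orthogonality of the Walsh rows is what lets the receiver separate the M candidate symbols carried by the same chaotic reference.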
(55) In the embodiment, the information bit sequence is acquired, the coded modulation is performed on the information bit sequence via the protograph DCSK transmitter, and a plurality of modulated M-ary DCSK symbols are outputted.
(56) Step 302, each modulated M-ary DCSK symbol is inputted into the wireless channel for channel interference, and a received M-ary DCSK symbol corresponding to each modulated M-ary DCSK symbol is outputted.
(57) In the embodiment, each modulated M-ary DCSK symbol is inputted into the wireless channel for channel interference, and the received M-ary DCSK symbol corresponding to each modulated M-ary DCSK symbol is outputted.
(58) Step 303, the target extrinsic information-aided network is used to determine a target a-posteriori probability vector corresponding to each received M-ary DCSK symbol according to each received M-ary DCSK symbol and a preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol.
(59) The target extrinsic information-aided network includes a CNN-LSTM block, an FC block and a decision block.
(60) It is to be noted that the target extrinsic information-aided network provided in the disclosure includes a CNN-LSTM block, an FC block, and a decision block. The CNN-LSTM block includes a buffer (Buffer), a Conv1D layer, a BN layer, a first activation function layer and an LSTM layer, and the first activation function layer employs a rectified linear unit (ReLU) activation function. The FC block includes an FCL, a BN layer and a first activation function layer, and the first activation function layer employs a ReLU activation function. The decision block includes an FCL, a BN layer, a first activation function layer and a second activation function layer, the first activation function layer employs a ReLU activation function, and the second activation function layer employs a Softmax activation function.
(61) Specifically, referring to
(62) TABLE 1 Parameters of each layer of the target extrinsic information-aided network

  Layer name      Output dimension   Activation function   Other parameters
  Input layer 1   M                  None                  None
  Input layer 2   M                  None                  None
  1st Conv1D      16                 ReLU                  Convolution kernel size: 3; Padding size: 1; Step size: 1
  2nd Conv1D      32                 ReLU                  Ditto
  LSTM            32                 None                  None
  1st FCL         32                 ReLU                  None
  2nd FCL         64                 ReLU                  None
  3rd FCL         200                ReLU                  None
  4th FCL         32                 ReLU                  None
  5th FCL         M                  Softmax               None
(63) Furthermore, without changing the transmitter, the energy detection-based demodulator in the traditional receiver is replaced by the target extrinsic information-aided network proposed in the disclosure. The target extrinsic information-aided network proposed in the disclosure includes two inputs. The first input is the i.sub.th received M-ary DCSK symbol r.sub.i, and the second input is the preset initial a-posteriori probability vector
(64) P({tilde over (S)}|r.sub.i).sup.(g)={P({tilde over (S)}.sub.1|r.sub.i).sup.(g), P({tilde over (S)}.sub.2|r.sub.i).sup.(g), . . . , P({tilde over (S)}.sub.M|r.sub.i).sup.(g)}
corresponding to the received M-ary DCSK symbol in a case that the number of global iterations is g. During the g.sub.th global iteration, r.sub.i and P({tilde over (S)}|r.sub.i).sup.(g) are sent to the target extrinsic information-aided network. In a case that g=0, the preset initial a-posteriori probability vector is denoted as P({tilde over (S)}|r.sub.i).sup.(0)={1/M, 1/M, . . . , 1/M}, where
(65) P({tilde over (S)}.sub.1|r.sub.i).sup.(g) is the first element in the preset initial a-posteriori probability vector in a case that the number of global iterations is g, and this element represents an a-posteriori probability of the first modulated M-ary DCSK symbol {tilde over (S)}.sub.1;
(66) P({tilde over (S)}.sub.2|r.sub.i).sup.(g) is the second element in the preset initial a-posteriori probability vector in a case that the number of global iterations is g, and this element represents an a-posteriori probability of the second modulated M-ary DCSK symbol {tilde over (S)}.sub.2; and
(67) P({tilde over (S)}.sub.M|r.sub.i).sup.(g) is the M.sub.th element in the preset initial a-posteriori probability vector in a case that the number of global iterations is g, and this element represents an a-posteriori probability of the M.sub.th modulated M-ary DCSK symbol {tilde over (S)}.sub.M. The number of global iterations g=0, 1, . . . , I, and I is the maximum number of the global iterations, i.e., the preset number of global iterations.
(68) Alternatively, a training process of the target extrinsic information-aided network can include the following steps.
(69) An information bit sequence to be trained is acquired, a coded modulation is performed on the information bit sequence to be trained via the protograph DCSK transmitter, and a plurality of modulated M-ary DCSK symbols to be trained are outputted.
(70) Each modulated M-ary DCSK symbol to be trained is inputted into the wireless channel for channel interference, and a received M-ary DCSK symbol to be trained corresponding to each modulated M-ary DCSK symbol to be trained is outputted.
(71) An initial a-posteriori probability vector to be trained corresponding to each received M-ary DCSK symbol to be trained is calculated according to each received M-ary DCSK symbol to be trained using a preset LLR calculation method.
(72) On the basis of a preset cross-entropy loss function, network training is performed on an initial extrinsic information-aided network using each received M-ary DCSK symbol to be trained and each initial a-posteriori probability vector to be trained, and a well-trained target extrinsic information-aided network is determined.
(73) It is to be noted that, the transmitter of the protograph DCSK coded modulation system is used to transmit the symbol sequence; after the symbol sequence passes through the channel, the received M-ary DCSK symbol sequence is segmented into individual symbols to be trained. The initial a-priori LLR to be trained is calculated from each received M-ary DCSK symbol to be trained using the LLR calculation method in the traditional receiver, and the initial a-priori LLR to be trained is then converted into the initial a-posteriori probability vector to be trained. The corresponding labels for both inputs are the index values of the sent symbols (i.e., 1, 2, . . . , M), and these labels are encoded into a label vector l.sub.i={l.sub.i,1, l.sub.i,2, . . . , l.sub.i,M} using one-hot encoding. In the vector, only the value corresponding to the label is 1, while all other values are 0. l.sub.i,1 is the first element in the label vector; l.sub.i,2 is the second element in the label vector; and l.sub.i,M is the M.sub.th element in the label vector.
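The one-hot label encoding described above can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def one_hot_labels(symbol_indices, M):
    # Encodes sent-symbol index values (1, 2, ..., M) into label vectors
    # l_i: only the position matching the label is 1, all others are 0.
    idx = np.asarray(symbol_indices)
    L = np.zeros((len(idx), M))
    L[np.arange(len(idx)), idx - 1] = 1.0
    return L
```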
(74) Furthermore, in the disclosure, training sets and validation sets are separately generated under different wireless channel conditions and signal-to-noise ratios according to the above-mentioned steps, to perform network training on the initial extrinsic information-aided network. The detailed training hyperparameters are presented in Table 2.
(75) TABLE 2 Training hyperparameters of initial extrinsic information-aided network

  Parameters                              Values
  Number of training sets                 500000
  Number of validation sets               100000
  Initial learning rate                   0.001
  Learning rate strategy                  Cosine annealing strategy
  Optimizer                               Adam
  Number of training epochs               120
  Number of training batches per time     800
  Loss function                           Cross-entropy loss function
  Generation of signal-to-noise           7-25 dB, spacing 1 dB
  ratio of data
  Channel conditions                      AWGN, multipath Rayleigh fading channel
(76) Furthermore, to achieve better performance under different wireless channels, in the disclosure, individual training of data on different wireless channels is performed. A preset cross-entropy loss function is employed to measure the difference between the outputs of the initial extrinsic information-aided network and the labels. Specifically, an initial extrinsic information-aided network is used to determine a target a-posteriori probability vector to be trained corresponding to each received M-ary DCSK symbol to be trained according to each received M-ary DCSK symbol to be trained and each initial a-posteriori probability vector to be trained; the preset cross-entropy loss function is utilized to calculate the target loss value according to the target a-posteriori probability vector to be trained; the target loss value is employed to update the trainable parameters of the initial extrinsic information-aided network, to determine an intermediate extrinsic information-aided network, and the number of network update epochs is counted in real-time. In a case that the number of network update epochs reaches the preset number of training epochs, the intermediate extrinsic information-aided network is used as the well-trained target extrinsic information-aided network. The preset number of training epochs can be set according to specific requirements, and the disclosure is not limited to this. The preset cross-entropy loss function is specifically defined as:
(77) Loss=−Σ.sub.k=1.sup.M l.sub.i,k log P.sub.i,k,
(78) where l.sub.i,k is the k.sub.th value in the label vector, and P.sub.i,k is the k.sub.th value in the target a-posteriori probability vector to be trained outputted by the initial extrinsic information-aided network. The network parameters that yield the lowest loss value on the validation set will be utilized for online deployment.
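With one-hot labels, the preset cross-entropy loss function can be sketched as below; the mean reduction over the batch is an assumption, as is the small epsilon guarding log(0).

```python
import numpy as np

def cross_entropy_loss(outputs, labels, eps=1e-12):
    # outputs: batch of target a-posteriori probability vectors (softmax
    # outputs); labels: one-hot label vectors l_i.
    return -np.mean(np.sum(labels * np.log(outputs + eps), axis=1))
```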
(79) Furthermore, step 303 can include the following sub-steps.
(80) S31, the CNN-LSTM block is used to perform one-dimensional feature extraction on each received M-ary DCSK symbol, and a first feature vector corresponding to each received M-ary DCSK symbol is generated.
(81) The CNN-LSTM block includes a Conv1D layer, a BN layer, a first activation function layer and an LSTM layer.
(82) Furthermore, S31 can include the following sub-steps.
(83) S311, a one-dimensional convolutional operation is performed on each received M-ary DCSK symbol via the Conv1D layer to generate a first convolutional feature corresponding to each received M-ary DCSK symbol.
(84) S312, each first convolutional feature is normalized using the BN layer to determine a first normalized feature corresponding to each first convolutional feature.
(85) S313, each first normalized feature is inputted into the first activation function layer for nonlinear mapping to generate a nonlinear feature corresponding to each first normalized feature.
(86) S314, the one-dimensional convolutional operation is performed on each nonlinear feature via the Conv1D layer to generate a second convolutional feature corresponding to each nonlinear feature.
(87) S315, each second convolutional feature is normalized using the BN layer to determine a second normalized feature corresponding to each second convolutional feature.
(88) S316, each second normalized feature is inputted into the first activation function layer for nonlinear mapping to generate a two-dimensional feature matrix corresponding to each second normalized feature.
(89) S317, column data of each two-dimensional feature matrix are sequentially inputted into the LSTM layer for feature extraction according to a column sequence number of each two-dimensional feature matrix to generate a plurality of initial feature vectors corresponding to each two-dimensional feature matrix.
(90) S318, the plurality of initial feature vectors corresponding to each two-dimensional feature matrix are screened to determine a first feature vector corresponding to each two-dimensional feature matrix.
(91) It is to be noted that, the received M-ary DCSK symbol r.sub.i received by the CNN-LSTM block is first transformed through size transformation from a one-dimensional sequence into a two-dimensional array with M channels, where M is taken as the channel axis and the chip index within the chaotic sequence is taken as the time axis for the one-dimensional convolutional operation.
(92) The j.sub.th Conv1D layer contains a set of convolutional kernels with a size of 3 (16 kernels in the first Conv1D layer and 32 in the second, as listed in Table 1), and
(93) two-dimensional feature matrices are outputted after undergoing the two Conv1D layers. Each Conv1D layer uses ReLU as the activation function. Additionally, a BN layer is added to accelerate training convergence and prevent gradient vanishing. The time axis is represented by the column sequence number of the matrix, and the channel axis is represented by the row sequence number of the matrix.
(94) The obtained two-dimensional feature matrices are inputted into the LSTM layer on a moment-by-moment basis along the time axis, that is, the column data of the two-dimensional feature matrices are sequentially inputted into the LSTM layer for feature extraction according to the column sequence number.
(95) For instance, the input at the k.sub.th moment, which corresponds to the column data with column sequence number k, is composed of
(96) the value at the k.sub.th moment in the first channel,
(97) the value at the k.sub.th moment in the second channel, . . . , and
(98) the value at the k.sub.th moment in the last channel of the two-dimensional feature matrix.
(99)
At each moment, the LSTM layer outputs a vector, which is stored in a buffer. In the buffer, it is necessary to filter a plurality of initial feature vectors corresponding to each two-dimensional feature matrix to determine the first feature vectors corresponding to each two-dimensional feature matrix, and then the first feature vectors are sent to the decision block. Specifically, during the filtering process, only the vector Ve corresponding to the last time slot is selected as the first feature vector. For example, if one two-dimensional feature matrix has ten columns of column data, after obtaining the vectors corresponding to the ten columns of column data, only the vector corresponding to the tenth column is taken as the first feature vector for output.
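Sub-steps S317-S318 (feeding the columns to the LSTM in column-sequence order and keeping only the last time slot's vector Ve) can be sketched with a bare numpy LSTM cell; the weights here are random placeholders, not trained parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_last_output(F, Wx, Wh, b):
    # F: (channels, time) two-dimensional feature matrix; its columns are
    # processed in column-sequence order. Wx stacks the input weights of
    # the i, f, o, g gates; Wh the recurrent weights; b the biases.
    hidden = Wh.shape[1]
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for k in range(F.shape[1]):              # moment-by-moment (S317)
        z = Wx @ F[:, k] + Wh @ h + b        # stacked gate pre-activations
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h                                 # Ve: last time slot only (S318)
```

Only the final hidden state is returned, matching the filtering step in which the vector of the last time slot is selected as the first feature vector.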
(100) S32, the FC block is used to perform vector feature extraction on the preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol, and a second feature vector corresponding to each preset initial a-posteriori probability vector is determined.
(101) The FC block includes an FCL, a BN layer and a first activation function layer.
(102) Furthermore, S32 can include the following sub-steps.
(103) S321, feature mapping is performed on the preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol via the FCL to generate a first a-posteriori probability mapping feature corresponding to each preset initial a-posteriori probability vector.
(104) S322, normalization and nonlinear mapping are performed on each first a-posteriori probability mapping feature using the BN layer and the first activation function layer, respectively, to determine a first probability nonlinear feature corresponding to each first a-posteriori probability mapping feature.
(105) S323, feature mapping is performed on each first probability nonlinear feature via the FCL to generate a second a-posteriori probability mapping feature corresponding to each first probability nonlinear feature.
(106) S324, normalization and nonlinear mapping are performed on each second a-posteriori probability mapping feature using the BN layer and the first activation function layer, respectively, to determine a second probability nonlinear feature corresponding to each second a-posteriori probability mapping feature.
(107) S325, feature mapping is performed on each second probability nonlinear feature via the FCL to generate a third a-posteriori probability mapping feature corresponding to each second probability nonlinear feature.
(108) S326, normalization and nonlinear mapping are performed on each third a-posteriori probability mapping feature using the BN layer and the first activation function layer, respectively, to determine a second feature vector corresponding to each third a-posteriori probability mapping feature.
(109) It is to be noted that, in the FC block, three layers of FCLs are utilized to extract the features of the preset initial a-posteriori probability vector in the disclosure. ReLU is employed as the activation function among FCLs, and a BN layer is used to accelerate training convergence and prevent gradient vanishing. The output from the third layer of FCL is utilized as the second feature vector f.sub.3 and is sent to the decision block.
(110) S33, each first feature vector and each second feature vector are inputted into the decision block for feature concatenating separately, and the target a-posteriori probability vector corresponding to each received M-ary DCSK symbol is outputted.
(111) The decision block includes an FCL, a BN layer, a first activation function layer and a second activation function layer.
(112) Furthermore, S33 can include the following sub-steps.
(113) S331, one-dimensional concatenation is separately performed on each first feature vector and each second feature vector to generate a concatenated feature vector corresponding to each received M-ary DCSK symbol.
(114) S332, feature mapping is performed on each concatenated feature vector via the FCL to determine a first concatenated mapping feature corresponding to each concatenated feature vector.
(115) S333, normalization and nonlinear mapping are performed on each first concatenated mapping feature using the BN layer and the first activation function layer, respectively, to generate a first concatenated nonlinear feature corresponding to each first concatenated mapping feature.
(116) S334, feature mapping is performed on each first concatenated nonlinear feature via the FCL to determine a second concatenated mapping feature corresponding to each first concatenated nonlinear feature.
(117) S335, nonlinear mapping is performed on each second concatenated mapping feature using the second activation function layer to generate a target a-posteriori probability vector corresponding to each second concatenated mapping feature.
(118) It is to be noted that, in the decision block, the disclosure performs one-dimensional concatenation of the two feature vectors, namely the first feature vector and the second feature vector, concatenating them into a concatenated feature vector f=[v.sub.e, f.sub.3]. The concatenated feature vector is then further processed through two layers of FCLs. After the first layer of FCL, ReLU is used as the activation function, and a BN layer is used to accelerate training convergence and prevent gradient vanishing. After the second layer of FCL, a softmax activation function is employed to ensure that the elements of the output vector from the network sum to 1.
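The decision block's concatenation and softmax output (S331-S335) can be sketched as follows; BN is omitted and the weights are illustrative placeholders. The dimensions in the test below follow Table 1 (Ve of size 32, f3 of size 200, 4th FCL of size 32, 5th FCL of size M).

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax so the output sums to 1.
    e = np.exp(z - z.max())
    return e / e.sum()

def decision_block(v_e, f3, W1, b1, W2, b2):
    # f = [v_e, f3]: one-dimensional concatenation of the two feature
    # vectors, followed by two FCLs with ReLU then softmax, producing a
    # target a-posteriori probability vector.
    f = np.concatenate([v_e, f3])
    h = np.maximum(W1 @ f + b1, 0.0)   # first FCL + ReLU
    return softmax(W2 @ h + b2)        # second FCL + softmax
```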
(119) In the embodiment, the target extrinsic information-aided network is used to determine the target a-posteriori probability vector corresponding to each received M-ary DCSK symbol according to each received M-ary DCSK symbol and the preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol.
(120) Step 304, each target a-posteriori probability vector is inputted into the a-priori calculation decoder for decoding, and an initial decoded bit sequence is outputted.
(121) The a-priori calculation decoder includes an a-priori LLR calculation block and a protograph decoder.
(122) Furthermore, step 304 can include the following sub-steps.
(123) S41, a-priori conversion is performed on each target a-posteriori probability vector via the a-priori LLR calculation block to determine an a-priori LLR vector corresponding to each target a-posteriori probability vector.
(124) S42, each a-priori LLR vector is inputted into the protograph decoder for protograph decoding to generate an a-posteriori LLR vector corresponding to each a-priori LLR vector.
(125) S43, hard decision is performed on each a-posteriori LLR vector to generate an initial decoded bit sequence.
(126) It is to be noted that, the above-mentioned target extrinsic information-aided network is used to output a more accurate target a-posteriori probability vector
(127) P({tilde over (S)}|r.sub.i).sup.(g)={P({tilde over (S)}.sub.1|r.sub.i).sup.(g), P({tilde over (S)}.sub.2|r.sub.i).sup.(g), . . . , P({tilde over (S)}.sub.M|r.sub.i).sup.(g)}
according to the received M-ary DCSK symbol and the corresponding preset initial a-posteriori probability vector, where
(128) P({tilde over (S)}.sub.1|r.sub.i).sup.(g) represents the first element in the target a-posteriori probability vector in a case that the number of global iterations is g, and this element represents the a-posteriori probability of the first modulated M-ary DCSK symbol {tilde over (S)}.sub.1 outputted by the target extrinsic information-aided network;
(129) P({tilde over (S)}.sub.2|r.sub.i).sup.(g) represents the second element in the target a-posteriori probability vector in a case that the number of global iterations is g; and
(130) P({tilde over (S)}.sub.M|r.sub.i).sup.(g) represents the M.sub.th element in the target a-posteriori probability vector in a case that the number of global iterations is g; and P({tilde over (S)}|r.sub.i).sup.(g) is the target a-posteriori probability vector in a case that the number of global iterations is g.
(131) Furthermore, the target a-posteriori probability vector is converted into an a-priori LLR vector via the a-priori LLR calculation block, the a-priori LLR vector is sent to the protograph decoder to obtain an a-posteriori LLR vector, and after performing hard decision on the a-posteriori LLR vector, an initial decoded bit sequence {tilde over (b)}.sub.g in a case that the number of global iterations is g is obtained.
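The conversion from a target a-posteriori probability vector to per-bit a-priori LLRs, and the hard decision of S43, can be sketched as below. The exact formula of the a-priori LLR calculation block is not given in this excerpt, so standard soft demapping under the natural mapping is assumed.

```python
import numpy as np

def symbol_probs_to_bit_llrs(p):
    # p: a-posteriori probabilities of the M symbols (natural mapping
    # assumed: symbol u carries the binary expansion of u-1 over
    # m = log2(M) bits). LLR of bit j = log(P(bit j = 0) / P(bit j = 1)).
    M = len(p)
    m = int(np.log2(M))
    idx = np.arange(M)
    llrs = np.empty(m)
    for j in range(m):
        bit = (idx >> (m - 1 - j)) & 1
        llrs[j] = np.log((p[bit == 0].sum() + 1e-12) /
                         (p[bit == 1].sum() + 1e-12))
    return llrs

def hard_decision(llrs):
    # S43: a positive LLR favours bit 0; a negative LLR favours bit 1.
    return (llrs < 0).astype(int)
```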
(132) In the embodiment, each target a-posteriori probability vector is inputted into the a-priori calculation decoder for decoding, and an initial decoded bit sequence is outputted.
(133) Step 305, a multiplication operation is performed on the initial decoded bit sequence and the preset check matrix to determine a sequence-matrix product value, and the number of iterations is counted in real-time.
(134) It is to be noted that, a check matrix of the protograph code is defined, that is, the preset check matrix is H, and the multiplication operation is performed on the initial decoded bit sequence and the preset check matrix to determine the sequence-matrix product value.
(135) In the embodiment, the multiplication operation is performed on the initial decoded bit sequence and the preset check matrix to determine the sequence-matrix product value, and the number of iterations is counted in real-time.
(136) Step 306, the initial decoded bit sequence is taken as a target decoded bit sequence in a case that the sequence-matrix product value is zero or the number of iterations reaches the preset number of global iterations.
(137) It is to be noted that, if {tilde over (b)}.sub.g.Math.H=0, that is, the sequence-matrix product value {tilde over (b)}.sub.g.Math.H is 0, or g=I, that is, the number of iterations reaches the preset maximum number of global iterations, the iteration on the preset initial a-posteriori probability vector is stopped, and the initial decoded bit sequence is taken as the target decoded bit sequence, that is, {tilde over (b)}={tilde over (b)}.sub.g, where {tilde over (b)} is the target decoded bit sequence.
(138) In the embodiment, the initial decoded bit sequence is taken as the target decoded bit sequence in a case that the sequence-matrix product value is zero or the number of iterations reaches the preset number of global iterations.
(139) Step 307, a difference operation is separately performed on each a-priori LLR vector corresponding to each target a-posteriori probability vector and each a-posteriori LLR vector in a case that the sequence-matrix product value is not zero and the number of iterations does not reach the preset number of global iterations, and an extrinsic information vector corresponding to each target a-posteriori probability vector is outputted.
(140) It is to be noted that, if the sequence-matrix product value is not zero and the number of iterations does not reach the preset maximum number of global iterations, the extrinsic information
(141) L.sub.e, i.e., the difference between each a-posteriori LLR vector and the corresponding a-priori LLR vector,
outputted by the decoder is taken as the input of the target extrinsic information-aided network to perform global iterations on the inputted preset initial a-posteriori probability vector. In other words, updating and iteration are performed on the preset initial a-posteriori probability vector.
(142) In the embodiment, the difference operation is separately performed on each a-priori LLR vector corresponding to each target a-posteriori probability vector and each a-posteriori LLR vector in a case that the sequence-matrix product value is not zero and the number of iterations does not reach the preset number of global iterations, and the extrinsic information vector corresponding to each target a-posteriori probability vector is outputted.
(143) Step 308, elements of each extrinsic information vector are used to update elements of the preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol, and a new preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol is determined.
(144) In the embodiment, elements of each extrinsic information vector are used to update elements of the preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol, and the new preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol is determined.
(145) Step 309, the sequence-matrix product value is determined according to each received M-ary DCSK symbol and each new preset initial a-posteriori probability vector until the sequence-matrix product value is zero or the number of iterations reaches the preset number of global iterations.
(146) In the embodiment, the sequence-matrix product value is determined according to each received M-ary DCSK symbol and each new preset initial a-posteriori probability vector until the sequence-matrix product value is zero or the number of iterations reaches the preset number of global iterations.
(147) Step 310, the initial decoded bit sequence determined in a case that the sequence-matrix product value is zero or the number of iterations reaches the preset number of global iterations is taken as the target decoded bit sequence.
(148) It is to be noted that, referring to
(149) In the embodiment, the initial decoded bit sequence determined in a case that the sequence-matrix product value is zero or the number of iterations reaches the preset number of global iterations is taken as the target decoded bit sequence.
(150) For the purpose of technical effect comparison, reference can be made to the traditional iterative receiver in the prior art.
(152) Furthermore, the a-priori LLR L.sub.a is calculated using the observation vector. Subsequently, the protograph decoder utilizes the LLR for decoding (i.e., performing inner iterations) to obtain the decoded bit {tilde over (b)}. The decoder can feed back the extrinsic information (i.e., L.sub.e) to assist in the calculation of L.sub.a (i.e., performing global iterations), thereby achieving iterative receiving.
(153) On the above basis, all existing deep learning-based receivers designed for DCSK systems have not considered coded DCSK systems, that is, there is no deep learning-based receiver designed specifically for the protograph DCSK coded modulation systems. For instance, existing receivers based on the LSTM network are designed for orthogonal frequency division multiplexing DCSK (OFDM-DCSK) systems, failing to consider the characteristics of protograph DCSK coded modulation systems. The above-mentioned traditional iterative receiver for the protograph DCSK coded modulation system primarily has two major drawbacks. First, the demodulator employs energy detection as symbol detection, focusing solely on the energy characteristic of the received chaotic sequences, failing to utilize the correlation features that exist between the chaotic sequences, thereby affecting the performance of the system. Second, in a case of calculating the a-priori LLR, the traditional receiver requires the channel fading factor or the average energy gain of the transmission path, restricting the application of the protograph DCSK coded modulation system. In contrast, the above-mentioned deep learning-based receiver primarily has the drawback that it is not specifically designed for coded DCSK systems, failing to utilize the extrinsic information outputted by the decoder for global iterations, which affects the performance of the protograph DCSK coded modulation systems.
(154) In summary, to enhance the transmission reliability of the protograph DCSK coded modulation system, the disclosure provides a transceiver decoding method based on protograph DCSK, employing an extrinsic-information network-aided iterative decoding receiver (EICNet-ID receiver) for the protograph DCSK coded modulation system to improve the traditional iterative receivers and addressing the problem of existing deep learning-based receivers being unable to utilize extrinsic information for global iterations. Specifically, targeting the shortcomings of traditional iterative receivers for protograph DCSK coded modulation systems, which fail to utilize the correlation features between chaotic sequences and require channel state information for calculating a-priori LLRs, as well as the limitation of existing deep learning-based receivers in being unable to utilize extrinsic information, the provided target extrinsic information-aided network in the disclosure can utilize the correlation features between chaotic sequences, enabling more accurate a-priori LLRs without the need for channel information, thereby enhancing the reliability of the protograph DCSK coded modulation system. In addition, the provided target extrinsic information-aided network considers both the received M-ary DCSK symbol and its corresponding a-posteriori probability, allowing the network to utilize the extrinsic information fed back from the protograph decoder to update the input of the network, thereby further enhancing the reliability of the system and enabling iterative receiving, and the reliability of the system is significantly improved. Meanwhile, by enhancing the accuracy of the a-priori LLR, the error performance of the system is improved. Furthermore, the extrinsic information outputted by the decoder can be used for global iterations, to further improve the error performance. 
Compared with existing deep learning-based receivers, the structure of the target extrinsic information-aided network proposed in the disclosure can utilize both the correlation features between chaotic sequences and the extrinsic information outputted by the decoder, neither of which is considered in existing deep learning-based receivers, and the provided extrinsic information receiver therefore has significant performance advantages over existing receivers.
(155) In an embodiment of the disclosure, a transceiver decoding method based on protograph DCSK is provided. First, an information bit sequence is acquired, a coded modulation is performed on the information bit sequence via a protograph DCSK transmitter, and a plurality of modulated M-ary DCSK symbols are outputted; then, each modulated M-ary DCSK symbol is inputted into a wireless channel for channel interference, and a received M-ary DCSK symbol corresponding to each modulated M-ary DCSK symbol is outputted; a target extrinsic information-aided network is used to determine a target a-posteriori probability vector corresponding to each received M-ary DCSK symbol according to each received M-ary DCSK symbol and a preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol; each target a-posteriori probability vector is inputted into an a-priori calculation decoder for decoding, and an initial decoded bit sequence is outputted; and finally, a target decoded bit sequence is determined according to the initial decoded bit sequence and a preset check matrix. According to the above-mentioned solutions, in the process of generating the target decoded bit sequence on the basis of the initial decoded bit sequence determined by the target extrinsic information-aided network and the preset check matrix, both the received M-ary DCSK symbol and a-posteriori probability vector corresponding thereto are considered, thereby improving the performance of a protograph DCSK coded modulation system, and further improving the decoding accuracy.
(156) Referring to
(157) The disclosure provides a transceiver decoding system based on protograph DCSK. A transceiver includes a protograph DCSK transmitter, a wireless channel, a target extrinsic information-aided network and an a-priori calculation decoder. The system includes: an acquisition block 801, configured to acquire an information bit sequence, perform a coded modulation on the information bit sequence via the protograph DCSK transmitter, and output a plurality of modulated M-ary DCSK symbols; an interference block 802, configured to input each modulated M-ary DCSK symbol into the wireless channel for channel interference, and output a received M-ary DCSK symbol corresponding to each modulated M-ary DCSK symbol; a network output block 803, configured to use the target extrinsic information-aided network to determine a target a-posteriori probability vector corresponding to each received M-ary DCSK symbol according to each received M-ary DCSK symbol and a preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol; a decoding block 804, configured to input each target a-posteriori probability vector into the a-priori calculation decoder for decoding, and output an initial decoded bit sequence; and a determination block 805, configured to determine a target decoded bit sequence according to the initial decoded bit sequence and a preset check matrix.
(158) Furthermore, the protograph DCSK transmitter includes a protograph encoder and an M-ary DCSK modulator. The acquisition block 801 is specifically configured to: encode the information bit sequence using the protograph encoder to generate an encoded bit sequence; divide the encoded bit sequence according to a preset number of encoded bits, and output a plurality of groups of encoded bits; and input each encoded bit into the M-ary DCSK modulator for modulation to determine the plurality of modulated M-ary DCSK symbols.
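The bit-grouping stage described above can be sketched as follows. This is a minimal illustration, assuming a natural binary (MSB-first) mapping from each group of log2(M) encoded bits to an M-ary symbol index; the mapping, M, and the toy bit sequence are illustrative assumptions, not parameters fixed by the disclosure.

```python
import numpy as np

def bits_to_mary_indices(encoded_bits, M):
    """Divide the encoded bit sequence into groups of log2(M) bits and
    return the integer index of each group (natural binary mapping)."""
    k = int(np.log2(M))                          # encoded bits per M-ary DCSK symbol
    assert len(encoded_bits) % k == 0, "length must be a multiple of log2(M)"
    groups = np.asarray(encoded_bits).reshape(-1, k)
    weights = 2 ** np.arange(k - 1, -1, -1)      # MSB-first binary weights
    return groups @ weights

# Example: 8 encoded bits, M = 4 (2 bits per symbol) -> 4 symbol indices.
indices = bits_to_mary_indices([0, 1, 1, 0, 1, 1, 0, 0], M=4)
print(indices.tolist())  # [1, 2, 3, 0]
```

Each resulting index would then select the corresponding M-ary DCSK waveform in the modulator.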
(159) Furthermore, the target extrinsic information-aided network includes a CNN-LSTM block, an FC block and a decision block; and the network output block 803 includes: a first sub-block, configured to use the CNN-LSTM block to perform one-dimensional feature extraction on each received M-ary DCSK symbol, and generate a first feature vector corresponding to each received M-ary DCSK symbol; a second sub-block, configured to use the FC block to perform vector feature extraction on the preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol, and determine a second feature vector corresponding to each preset initial a-posteriori probability vector; and a third sub-block, configured to input each first feature vector and each second feature vector into the decision block for feature concatenating separately, and output the target a-posteriori probability vector corresponding to each received M-ary DCSK symbol.
(160) Furthermore, the CNN-LSTM block includes a Conv1D layer, a BN layer, a first activation function layer and an LSTM layer; and the first sub-block is specifically configured to: perform a one-dimensional convolutional operation on each received M-ary DCSK symbol via the Conv1D layer to generate a first convolutional feature corresponding to each received M-ary DCSK symbol; normalize each first convolutional feature using the BN layer to determine a first normalized feature corresponding to each first convolutional feature; input each first normalized feature into the first activation function layer for nonlinear mapping to generate a nonlinear feature corresponding to each first normalized feature; perform the one-dimensional convolutional operation on each nonlinear feature via the Conv1D layer to generate a second convolutional feature corresponding to each nonlinear feature; normalize each second convolutional feature using the BN layer to determine a second normalized feature corresponding to each second convolutional feature; input each second normalized feature into the first activation function layer for nonlinear mapping to generate a two-dimensional feature matrix corresponding to each second normalized feature; sequentially input column data of each two-dimensional feature matrix into the LSTM layer for feature extraction according to a column sequence number of each two-dimensional feature matrix to generate a plurality of initial feature vectors corresponding to each two-dimensional feature matrix; and screen the plurality of initial feature vectors corresponding to each two-dimensional feature matrix to determine a first feature vector corresponding to each two-dimensional feature matrix.
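One Conv1D/BN/activation stage of the CNN-LSTM block described above can be sketched in numpy as follows. The filter count, tap length, and per-channel normalization here are illustrative assumptions standing in for the trained layers; a second such stage would produce the two-dimensional feature matrix whose columns are fed, in column order, to the LSTM layer, with the hidden state at the last column kept as the first feature vector.

```python
import numpy as np

def conv1d_bn_relu(x, kernels, eps=1e-5):
    """x: received symbol (1-D array); kernels: (C, K) filter bank.
    Returns a (C, L) feature matrix after valid cross-correlation,
    per-channel normalization, and ReLU."""
    # Flip each kernel so np.convolve performs the cross-correlation used
    # by deep-learning Conv1D layers.
    feats = np.stack([np.convolve(x, k[::-1], mode="valid") for k in kernels])
    mean = feats.mean(axis=1, keepdims=True)     # normalize each channel (BN stand-in)
    var = feats.var(axis=1, keepdims=True)
    normed = (feats - mean) / np.sqrt(var + eps)
    return np.maximum(normed, 0.0)               # ReLU nonlinearity

rng = np.random.default_rng(0)
x = rng.standard_normal(16)                      # toy received M-ary DCSK symbol
kernels = rng.standard_normal((4, 3))            # 4 assumed filters of length 3
fmap = conv1d_bn_relu(x, kernels)
print(fmap.shape)  # (4, 14)
```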
(161) Furthermore, the FC block includes an FCL, a BN layer and a first activation function layer; and the second sub-block is specifically configured to: perform feature mapping on the preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol via the FCL to generate a first a-posteriori probability mapping feature corresponding to each preset initial a-posteriori probability vector; perform normalization and nonlinear mapping on each first a-posteriori probability mapping feature using the BN layer and the first activation function layer, respectively, to determine a first probability nonlinear feature corresponding to each first a-posteriori probability mapping feature; perform feature mapping on each first probability nonlinear feature via the FCL to generate a second a-posteriori probability mapping feature corresponding to each first probability nonlinear feature; perform normalization and nonlinear mapping on each second a-posteriori probability mapping feature using the BN layer and the first activation function layer, respectively, to determine a second probability nonlinear feature corresponding to each second a-posteriori probability mapping feature; perform feature mapping on each second probability nonlinear feature via the FCL to generate a third a-posteriori probability mapping feature corresponding to each second probability nonlinear feature; and perform normalization and nonlinear mapping on each third a-posteriori probability mapping feature using the BN layer and the first activation function layer, respectively, to determine a second feature vector corresponding to each third a-posteriori probability mapping feature.
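The three FCL/BN/activation repetitions of the FC block can be sketched as a small numpy pipeline. The layer widths and random weights are illustrative assumptions, and the per-vector normalization is a simple stand-in for the BN layer acting on a single input.

```python
import numpy as np

def fc_bn_relu(v, W, b, eps=1e-5):
    z = W @ v + b                                  # fully connected mapping (FCL)
    z = (z - z.mean()) / np.sqrt(z.var() + eps)    # normalization (BN stand-in)
    return np.maximum(z, 0.0)                      # ReLU nonlinear mapping

def fc_block(p, layers):
    """Apply the three FCL -> BN -> activation stages in sequence."""
    v = p
    for W, b in layers:
        v = fc_bn_relu(v, W, b)
    return v                                       # second feature vector

rng = np.random.default_rng(1)
M = 4                                              # toy M-ary alphabet size
widths = [M, 8, 8, 8]                              # assumed layer widths
layers = [(rng.standard_normal((widths[i + 1], widths[i])),
           rng.standard_normal(widths[i + 1])) for i in range(3)]
p0 = np.full(M, 1.0 / M)                           # uniform initial a-posteriori vector
v2 = fc_block(p0, layers)
print(v2.shape)  # (8,)
```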
(162) Furthermore, the decision block includes an FCL, a BN layer, a first activation function layer and a second activation function layer; and the third sub-block is specifically configured to: separately perform one-dimensional concatenation on each first feature vector and each second feature vector to generate a concatenated feature vector corresponding to each received M-ary DCSK symbol; perform feature mapping on each concatenated feature vector via the FCL to determine a first concatenated mapping feature corresponding to each concatenated feature vector; perform normalization and nonlinear mapping on each first concatenated mapping feature using the BN layer and the first activation function layer, respectively, to generate a first concatenated nonlinear feature corresponding to each first concatenated mapping feature; perform feature mapping on each first concatenated nonlinear feature via the FCL to determine a second concatenated mapping feature corresponding to each first concatenated nonlinear feature; and perform nonlinear mapping on each second concatenated mapping feature using the second activation function layer to generate a target a-posteriori probability vector corresponding to each second concatenated mapping feature.
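The decision block's concatenate/FC/softmax flow can be sketched as follows, assuming the second activation function is a softmax so that the output is a valid probability vector over the M symbols; the dimensions and weights are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())                        # numerically stable softmax
    return e / e.sum()

def decision_block(f1, f2, W1, b1, W2, b2, eps=1e-5):
    v = np.concatenate([f1, f2])                   # one-dimensional concatenation
    z = W1 @ v + b1                                # first FC mapping
    z = (z - z.mean()) / np.sqrt(z.var() + eps)    # normalization (BN stand-in)
    z = np.maximum(z, 0.0)                         # first activation (ReLU)
    return softmax(W2 @ z + b2)                    # second activation -> probabilities

rng = np.random.default_rng(2)
f1, f2 = rng.standard_normal(8), rng.standard_normal(8)
M = 4
W1, b1 = rng.standard_normal((16, 16)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((M, 16)), rng.standard_normal(M)
p = decision_block(f1, f2, W1, b1, W2, b2)
print(round(p.sum(), 6), (p >= 0).all())  # 1.0 True
```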
(163) Furthermore, the a-priori calculation decoder includes an a-priori LLR calculation block and a protograph decoder; and the decoding block 804 is specifically configured to: perform a-priori conversion on each target a-posteriori probability vector via the a-priori LLR calculation block to determine an a-priori LLR vector corresponding to each target a-posteriori probability vector; input each a-priori LLR vector into the protograph decoder for protograph decoding to generate an a-posteriori LLR vector corresponding to each a-priori LLR vector; and perform a hard decision on each a-posteriori LLR vector to generate an initial decoded bit sequence.
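The a-priori conversion and hard-decision steps can be sketched as follows. This is a hedged illustration: for each coded bit, the LLR is taken as the log-ratio of the total symbol probability with that bit equal to 0 versus equal to 1, assuming a natural binary (MSB-first) symbol labeling, which is an assumption rather than a detail fixed by the disclosure.

```python
import numpy as np

def symbol_probs_to_bit_llrs(p, M):
    """Convert an M-ary symbol probability vector into per-bit LLRs."""
    k = int(np.log2(M))
    llrs = []
    for i in range(k):                              # MSB-first bit positions
        bit = (np.arange(M) >> (k - 1 - i)) & 1     # value of bit i in each symbol label
        p0, p1 = p[bit == 0].sum(), p[bit == 1].sum()
        llrs.append(np.log(p0 / p1))
    return np.array(llrs)

def hard_decision(llr):
    """Positive LLR favors bit 0; negative favors bit 1."""
    return (llr < 0).astype(int)

p = np.array([0.7, 0.1, 0.1, 0.1])                  # toy a-posteriori vector, M = 4
llr = symbol_probs_to_bit_llrs(p, M=4)
print(hard_decision(llr).tolist())  # [0, 0]
```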
(164) Furthermore, the determination block 805 is specifically configured to: perform a multiplication operation on the initial decoded bit sequence and the preset check matrix to determine a sequence-matrix product value, and count the number of iterations in real-time; take the initial decoded bit sequence as a target decoded bit sequence in a case that the sequence-matrix product value is zero or the number of iterations reaches the preset number of global iterations; separately perform a difference operation on each a-priori LLR vector corresponding to each target a-posteriori probability vector and each a-posteriori LLR vector in a case that the sequence-matrix product value is not zero and the number of iterations does not reach the preset number of global iterations, and output an extrinsic information vector corresponding to each target a-posteriori probability vector; use elements of each extrinsic information vector to update elements of each preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol, and determine a new preset initial a-posteriori probability vector corresponding to each received M-ary DCSK symbol; determine the sequence-matrix product value according to each received M-ary DCSK symbol and each new preset initial a-posteriori probability vector until the sequence-matrix product value is zero or the number of iterations reaches the preset number of global iterations; and take the initial decoded bit sequence determined in a case that the sequence-matrix product value is zero or the number of iterations reaches the preset number of global iterations as the target decoded bit sequence.
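The stopping rule and extrinsic feedback described above can be sketched as follows: decoding stops when the sequence-matrix product H·c^T (mod 2) is all-zero or the preset number of global iterations is reached; otherwise the extrinsic information (the difference of the a-posteriori and a-priori LLR vectors) is fed back. The check matrix, codewords, and LLR values below are toy values for illustration only.

```python
import numpy as np

def syndrome_is_zero(H, c):
    """True when the sequence-matrix product H * c^T is zero modulo 2."""
    return not np.any((H @ c) % 2)

def extrinsic(llr_post, llr_prior):
    """Difference operation producing the extrinsic information vector."""
    return llr_post - llr_prior

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])                        # toy parity-check matrix
c_bad = np.array([1, 0, 0, 0])                      # fails the check
c_ok = np.array([1, 0, 1, 1])                       # satisfies the check
print(syndrome_is_zero(H, c_bad), syndrome_is_zero(H, c_ok))  # False True

llr_post = np.array([2.0, -1.5, 0.5, 3.0])
llr_prior = np.array([0.5, -0.5, 0.0, 1.0])
print(extrinsic(llr_post, llr_prior).tolist())  # [1.5, -1.0, 0.5, 2.0]
```

In the full receiver, the extrinsic vector would update the preset initial a-posteriori probability vector fed to the network on the next global iteration.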
(165) In an alternative embodiment, the system further includes: a first block, configured to acquire an information bit sequence to be trained, perform a coded modulation on the information bit sequence to be trained via the protograph DCSK transmitter, and output a plurality of modulated M-ary DCSK symbols to be trained; a second block, configured to input each modulated M-ary DCSK symbol to be trained into the wireless channel for channel interference, and output a received M-ary DCSK symbol to be trained corresponding to each modulated M-ary DCSK symbol to be trained; a third block, configured to calculate an initial a-posteriori probability vector to be trained corresponding to each received M-ary DCSK symbol to be trained using a preset LLR calculation method according to each received M-ary DCSK symbol to be trained; and a fourth block, configured to, on the basis of a preset cross-entropy loss function, perform network training on an initial extrinsic information-aided network using each received M-ary DCSK symbol to be trained and each initial a-posteriori probability vector to be trained, and determine a well-trained target extrinsic information-aided network.
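The preset cross-entropy loss used for training can be sketched as comparing the network's output probability vector against the label of the transmitted M-ary symbol. The prediction and label below are toy values; in practice the loss would be averaged over a training batch.

```python
import numpy as np

def cross_entropy(p_pred, symbol_index, eps=1e-12):
    """Negative log-probability assigned to the true transmitted symbol."""
    return -np.log(p_pred[symbol_index] + eps)

p_pred = np.array([0.1, 0.7, 0.1, 0.1])             # toy network output, M = 4
loss = cross_entropy(p_pred, symbol_index=1)        # true symbol index is 1
print(round(loss, 4))  # 0.3567
```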
(166) A person skilled in the art will readily appreciate that, for brevity and conciseness of description, reference may be made to the corresponding processes in the foregoing method embodiments for the specific operation of the system, blocks, and sub-blocks described above, and details are not repeated here.
(167) In the several embodiments provided in the present application, it is to be understood that the disclosed system and method can be realized in other ways. For example, the above-described system embodiments are merely schematic, and the division into units is merely a logical functional division; other divisions are possible in actual implementation. For example, a plurality of units or components can be combined or integrated into another system, or some features can be ignored or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed can be an indirect coupling or communication connection through some interfaces, devices, or units, and can be electrical, mechanical, or in other forms.
(168) The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they can be located in a single place or distributed over a plurality of network units. Some or all of these units can be selected according to actual needs to fulfill the purpose of the solutions of the embodiments.
(169) The embodiments described above are merely intended to illustrate the technical solutions of the disclosure, rather than to limit the disclosure. Although the disclosure is described in detail with reference to the foregoing embodiments, it is to be understood by those of ordinary skill in the art that the technical solutions set forth in each embodiment can still be modified, or some technical features thereof can be replaced equivalently, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the disclosure.