Patent classifications
H04L25/03165
LEARNING IN COMMUNICATION SYSTEMS
A method, an apparatus, and a computer program are described. The method includes: obtaining or generating a transmitter-training sequence of messages for a first transmitter of a first module of a transmission system, wherein the transmission system includes the first module having the first transmitter and a first receiver, a second module having a second transmitter and a second receiver, and a channel, and wherein the first transmitter includes a transmitter algorithm having at least some trainable weights; transmitting a perturbed version of the transmitter-training sequence of messages from the first transmitter to the second receiver over the channel of the transmission system; receiving a first loss function at the first receiver from the second transmitter, wherein the first loss function is based on the transmitted perturbed version of the transmitter-training sequence of messages as received at the second receiver and on knowledge of the transmitter-training sequence of messages for the first transmitter of the transmission system; and training at least some weights of the transmitter algorithm of the first transmitter based on the first loss function.
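The training loop in the abstract — perturb the transmitted training sequence, receive a loss computed at the far end, and update the transmitter's trainable weights — can be sketched as a zero-order (REINFORCE-style) gradient estimate. Everything below is a toy assumption for illustration: the "transmitter algorithm" is a single trainable matrix `W`, the channel is additive Gaussian noise, and the far-end loss is a squared error against a toy reference constellation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy transmitter: a trainable matrix W mapping each message
# index to a channel-symbol vector (an illustrative assumption).
num_messages, dim = 4, 2
W = 0.1 * rng.normal(size=(num_messages, dim))
targets = np.eye(num_messages)[:, :dim]          # toy reference constellation
sigma, lr = 0.1, 0.02                            # perturbation std, step size

def eval_loss(W):
    return float(np.mean(np.sum((W - targets) ** 2, axis=1)))

initial_loss = eval_loss(W)
for _ in range(500):
    msgs = rng.integers(0, num_messages, size=8)  # transmitter-training sequence
    eps = sigma * rng.normal(size=(8, dim))       # perturbation of the sequence
    y = W[msgs] + eps + 0.05 * rng.normal(size=(8, dim))   # AWGN channel
    # Per-example loss, computed at the second module from its knowledge
    # of the training sequence, then fed back as the "first loss function":
    loss = np.sum((y - targets[msgs]) ** 2, axis=1)
    grad = loss[:, None] * eps / sigma**2         # zero-order gradient estimate
    for i, m in enumerate(msgs):                  # train transmitter weights
        W[m] -= lr * grad[i] / len(msgs)
final_loss = eval_loss(W)
print(final_loss < initial_loss)                  # training reduces the loss
```

The key property this illustrates is that the transmitter never needs a gradient through the channel: correlating the fed-back loss with the known perturbation is enough to estimate one.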
MIXING COEFFICIENT DATA FOR PROCESSING MODE SELECTION
Examples described herein include systems and methods which include wireless devices and systems with examples of mixing input data, and delayed versions of at least a portion of the respective processing results, with coefficient data specific to a processing mode selection. For example, a computing system with processing units may mix the input data and delayed versions of respective outputs of various layers of multiplication/accumulation processing units (MAC units) for a transmission in a radio frequency (RF) wireless domain with the coefficient data to generate output data that is representative of the transmission being processed according to a wireless processing mode selection. In another example, such mixing of input data with delayed versions of processing results may be used to receive and process noisy wireless input data. Examples of systems and methods described herein may facilitate the processing of data for 5G wireless communications in a power-efficient and time-efficient manner.
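At its core, mixing input data and delayed versions of it with mode-specific coefficient data is a multiply/accumulate (FIR-style) operation. A minimal sketch, with entirely hypothetical coefficient values and mode names:

```python
import numpy as np

coeff_by_mode = {                       # assumed coefficient data per mode
    "5g_nr": np.array([0.5, 0.3, 0.2]),
    "lte":   np.array([0.7, 0.2, 0.1]),
}

def mix(input_data, mode):
    """MAC-unit-style mixing: accumulate coefficient-weighted delayed copies."""
    c = coeff_by_mode[mode]
    delayed = [np.concatenate([np.zeros(d), input_data[:len(input_data) - d]])
               for d in range(len(c))]  # input and its delayed versions
    return sum(ck * dk for ck, dk in zip(c, delayed))

x = np.array([1.0, 0.0, 0.0, 0.0])      # an impulse reveals the coefficients
out = mix(x, "5g_nr")
print(out)                              # impulse response equals the coeff data
```

Selecting a different processing mode swaps in different coefficient data without changing the MAC structure, which is the power- and time-efficiency angle the abstract highlights.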
Method and apparatus for bandwidth filtering based on deep learning, server and storage medium
Embodiments of the present disclosure relate to the field of communication technologies, and provide a method and an apparatus for bandwidth filtering based on deep learning, a server and a storage medium. In the present disclosure, bandwidth data of a server is obtained in real time (101), the obtained bandwidth data is input into a deep neural network model (102), and an output result of the deep neural network model is taken as filtered output bandwidth data obtained after filtering the input bandwidth data (103), where the deep neural network model is obtained through training according to historical bandwidth data and output bandwidth data obtained after filtering the historical bandwidth data.
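The training contract described here — pairs of historical bandwidth samples and their filtered counterparts — can be sketched with a single linear layer standing in for the deep network (the input/output interface is the same; a 3-tap moving average is assumed as the "filtered" target for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Historical bandwidth samples paired with filtered targets (assumed to be
# a 3-tap moving average here); a linear model is fit in closed form.
hist = 100 + 10 * rng.normal(size=500)          # historical bandwidth (Mbps)
win = 3
X = np.stack([hist[i:i + win] for i in range(len(hist) - win)])
y = X.mean(axis=1)                              # filtered training targets
w, *_ = np.linalg.lstsq(X, y, rcond=None)       # "train" the model

def filter_bandwidth(samples):
    """Steps 102/103: feed real-time bandwidth in, return the filtered output."""
    return float(np.array(samples) @ w)

result = filter_bandwidth([90.0, 100.0, 110.0])
print(result)                                   # ≈ 100.0 for this toy target
```

In the patent a deep neural network replaces the linear fit, letting the learned filter capture nonlinear structure in the bandwidth time series.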
Orthogonal frequency-division multiplexing equalization using deep neural network
Orthogonal frequency-division multiplexing (OFDM) equalization using a Deep Neural Network (DNN) may be provided. First, a signal in a packet structure may be received at an OFDM receiver from an OFDM transmitter. The signal may have distortion. Training constellation points, pilot constellation points, and data constellation points may be extracted from the signal based on the packet structure. Each data constellation point may correspond to a data subcarrier within a data symbol of the signal. Next, the training constellation points and the pilot constellation points may be provided as input for the data symbol to a DNN. A coefficient for each data subcarrier within the data symbol that reverses the distortion may be received as output from the DNN. Then, the coefficient for each data subcarrier may be applied to the corresponding data constellation point to determine a per subcarrier constellation point prediction.
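The final step reduces to a one-tap equalizer per subcarrier: the DNN outputs one complex coefficient per data subcarrier, and multiplying it into the received data constellation point reverses the distortion. A sketch with an assumed per-subcarrier channel standing in for the distortion the DNN would learn:

```python
import numpy as np

h = np.array([0.8 + 0.2j, 1.1 - 0.3j, 0.9 + 0.0j])  # assumed per-subcarrier channel
tx = np.array([1 + 1j, -1 + 1j, 1 - 1j])            # transmitted constellation points
rx = h * tx                                          # distorted data constellation points

coeff = 1.0 / h      # what the DNN's output coefficients would ideally equal
pred = coeff * rx    # per-subcarrier constellation point prediction
print(np.allclose(pred, tx))   # prediction recovers the transmitted points
```

The DNN's job is to infer `coeff` from the training and pilot constellation points alone, without ever being given `h` explicitly.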
APPARATUS AND METHOD FOR SELF-INTERFERENCE SIGNAL CANCELLATION
The disclosure relates to a communication technique and a system for combining a 5G communication system with IoT technology to support a higher data rate after a 4G system. Based on 5G communication and IoT-related technologies, the disclosure may be applied to intelligent services such as smart homes, smart buildings, smart cities, smart or connected cars, healthcare, digital education, retail, and security and safety related services. The disclosure provides a method and apparatus that enable a communication device supporting full duplex to cancel the self-interference signal in the digital domain.
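Digital-domain self-interference cancellation for full duplex rests on one fact: the device knows its own transmit signal. A minimal sketch, assuming a single-tap linear self-interference channel estimated by least squares (the patent's cancellation method may differ):

```python
import numpy as np

rng = np.random.default_rng(2)

tx = rng.normal(size=1000) + 1j * rng.normal(size=1000)   # own known transmission
h_si = 0.6 - 0.4j                                          # unknown SI coupling
desired = 0.05 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
rx = h_si * tx + desired                                   # received: SI + signal

h_est = (tx.conj() @ rx) / (tx.conj() @ tx)   # least-squares SI channel estimate
cleaned = rx - h_est * tx                      # subtract reconstructed SI digitally

rx_power = float(np.mean(np.abs(rx) ** 2))
cleaned_power = float(np.mean(np.abs(cleaned) ** 2))
print(cleaned_power < rx_power)                # SI power is removed
```

After cancellation, the residual is dominated by the desired far-end signal rather than the device's own transmission, which is what makes simultaneous transmit and receive on the same resources usable.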
Integrating Volterra series model and deep neural networks to equalize nonlinear power amplifiers
The nonlinearity of power amplifiers (PAs) has been a severe constraint on the performance of modern wireless transceivers. This problem is even more challenging for the fifth generation (5G) cellular system, since 5G signals have an extremely high peak-to-average power ratio. Nonlinear equalizers that exploit both deep neural networks (DNNs) and Volterra series models are provided to mitigate PA nonlinear distortions. The DNN equalizer architecture consists of multiple convolutional layers. The input features are designed according to the Volterra series model of nonlinear PAs. This enables the DNN equalizer to effectively mitigate nonlinear PA distortions while avoiding over-fitting under limited training data. The nonlinear equalizers demonstrate superior performance over conventional nonlinear equalization approaches.
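The input-feature design is the distinctive part: instead of raw samples alone, the DNN is fed Volterra-series terms of the PA output (memory taps and odd-order products such as x[n-k]·|x[n-m]|²), which bake the known structure of PA nonlinearity into the features. The exact feature layout below is an assumption for illustration:

```python
import numpy as np

def volterra_features(x, memory=2):
    """Build 1st- and 3rd-order Volterra terms with the given memory depth."""
    feats = []
    for n in range(memory, len(x)):
        taps = x[n - memory:n + 1][::-1]   # x[n], x[n-1], ..., x[n-M]
        linear = list(taps)                # 1st-order terms
        cubic = [a * abs(b) ** 2           # 3rd-order terms a * |b|^2
                 for a in taps for b in taps]
        feats.append(np.array(linear + cubic))
    return np.array(feats)

x = np.array([1 + 0j, 0.5 + 0.5j, -1 + 0j, 0 + 1j])
F = volterra_features(x, memory=2)
print(F.shape)   # (2, 12): 3 linear + 9 cubic features per output sample
```

Feeding these structured features to the convolutional layers is what lets the equalizer generalize from limited training data: the model does not have to rediscover the polynomial form of the PA distortion.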
GRADIENT DATASET AWARE CONFIGURATION FOR OVER-THE-AIR (OTA) MODEL AGGREGATION IN FEDERATED LEARNING
A method performed by a user equipment (UE) generates local gradients for a federated learning task. The method calculates a gradient Sum-Power level based on the local gradients. The method receives a mapping between gradient Sum-Power levels and scaling factors for channel inversion coefficients to process a data block into an unencoded uplink signal. The method also determines a channel inversion coefficient based on a scaling factor obtained from the mapping and the calculated gradient Sum-Power level. The method applies analog modulation and the channel inversion coefficient to the data block to form the unencoded uplink signal. The method further transmits, to a network, the unencoded uplink signal on shared uplink resources for an over-the-air computation of global gradients for the federated learning task.
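The UE-side pipeline — compute the gradients' sum-power, look up a scaling factor for that power level, derive the channel-inversion coefficient, and pre-scale the analog uplink symbols — can be sketched as follows. The mapping values, level thresholds, and channel estimate are all illustrative assumptions:

```python
import numpy as np

scaling_by_level = {"low": 1.0, "mid": 0.5, "high": 0.25}   # received mapping

def level(sum_power):
    """Quantize gradient sum-power into a mapping key (assumed thresholds)."""
    return "low" if sum_power < 1 else "mid" if sum_power < 10 else "high"

def uplink_signal(local_grads, h):
    g = np.asarray(local_grads, dtype=complex)
    sum_power = float(np.sum(np.abs(g) ** 2))    # gradient sum-power level
    alpha = scaling_by_level[level(sum_power)]   # scaling factor from mapping
    coeff = alpha / h                            # channel inversion coefficient
    return coeff * g                             # unencoded analog uplink signal

h = 0.5 + 0.5j                                   # assumed uplink channel estimate
s = uplink_signal([0.1, -0.2, 0.3], h)
# After the channel, the network sees alpha * gradients, so signals from
# many UEs superpose into a (scaled) sum of gradients over the air:
print(np.allclose(h * s, [0.1, -0.2, 0.3]))
```

Because every UE inverts its own channel with a sum-power-aware scaling, the shared uplink resource itself performs the gradient aggregation, rather than the network decoding each UE separately.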
METHOD AND DEVICE FOR CHANNEL EQUALIZATION, AND COMPUTER-READABLE MEDIUM
Embodiments of the present disclosure provide a method, device, and computer readable medium for channel equalization. The method comprises receiving, at a first device, a first signal from a second device via a plurality of subcarriers over a communication channel; sampling the first signal to obtain sampled symbols; and generating a second signal based on the obtained sampled symbols using a direct association between sampled symbols and payloads, the second signal indicating a payload of the first signal carried on an effective subcarrier of the plurality of subcarriers. Through the use of the direct association between sampled symbols and payloads, it is possible to achieve channel equalization in a less complicated, more reliable, and cost-effective manner, so as to extract the payload in the received signal.
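A "direct association between sampled symbols and payloads" can be as simple as a nearest-reference lookup that maps each sampled symbol straight to payload bits, skipping explicit channel estimation and per-subcarrier equalization. The reference points and bit mapping below are illustrative QPSK assumptions, not the patent's specific association:

```python
import numpy as np

refs = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
payloads = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}   # symbol -> bits

def demap(samples):
    """Directly associate each sampled symbol with its payload bits."""
    out = []
    for s in np.asarray(samples, dtype=complex):
        idx = int(np.argmin(np.abs(refs - s)))   # nearest reference symbol
        out.extend(payloads[idx])
    return out

rx = [0.9 + 1.1j, -0.8 - 1.2j]   # noisy sampled symbols from the first signal
bits = demap(rx)
print(bits)                      # → [0, 0, 1, 1]
```

In practice the association would be learned or tabulated per effective subcarrier so that channel effects are folded into the mapping itself, which is where the claimed complexity and cost savings come from.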