Patent classifications
G10L19/012
ENHANCED USER EXPERIENCE THROUGH BI-DIRECTIONAL AUDIO AND VISUAL SIGNAL GENERATION
In various embodiments, a computer-implemented method of training a neural network for creating an output signal of a different modality from an input signal is described. The input modality may be a sound signal or a visual image, and the output signal is then a visual image or a sound signal, respectively. In embodiments, a model is trained using a first pair of visual and audio networks to train a set of codebooks on known visual and audio signals, and a second pair of visual and audio networks to further train the set of codebooks on augmented visual and audio signals. Further, the first and second visual networks are equally weighted, as are the first and second audio networks. In aspects of the present disclosure, the set of codebooks comprises a visual codebook, an audio codebook, and a correlation codebook. These codebooks are then used to create a visual image from a sound signal and/or a sound signal from a visual image.
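The abstract leaves the network architectures unspecified, but the codebook mechanism it describes is a vector-quantization step: embeddings from either modality are mapped to discrete entries of a shared codebook. A minimal sketch of that assignment step, under the assumption of plain nearest-neighbor quantization (the function name and interface are hypothetical):

```python
import numpy as np

def vq_assign(embeddings, codebook):
    """Assign each embedding to its nearest codebook entry by L2 distance.

    A minimal, hypothetical stand-in for the quantization step that a
    visual, audio, or correlation codebook would perform; the abstract
    does not specify the distance measure or the training procedure.
    """
    # Pairwise distances between every embedding and every codebook vector.
    dists = np.linalg.norm(embeddings[:, None, :] - codebook[None, :, :], axis=-1)
    indices = np.argmin(dists, axis=1)   # shared discrete codes
    return indices, codebook[indices]    # codes and their reconstructions
```

Because both modalities index into the same discrete codes, an audio embedding quantized this way can be decoded through the visual side (and vice versa), which is what makes the generation bi-directional.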
METHODS AND APPARATUSES FOR DTX HANGOVER IN AUDIO CODING
A transmitting node and a receiving node for audio coding, and methods therein. The nodes are operable to encode/decode speech and to apply a discontinuous transmission (DTX) scheme comprising transmission/reception of Silence Insertion Descriptor (SID) frames during speech inactivity. The method in the transmitting node comprises determining, from among a number N of hangover frames, a set Y of frames representative of background noise, and transmitting the N hangover frames, comprising at least said set Y, to the receiving node. The method further comprises transmitting a first SID frame to the receiving node in association with the transmission of the N hangover frames, where the SID frame comprises information indicating the determined set Y of hangover frames to the receiving node. The method enables the receiving node to generate comfort noise based on the hangover frames most adequate for that purpose.
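The selection of the set Y can be sketched as follows. The abstract does not fix the selection criterion, so the sketch assumes a simple one (lowest mean-square frame energy, i.e. the frames least likely to contain residual speech); the function name and the index-based signaling are illustrative:

```python
import numpy as np

def select_hangover_set(hangover_frames, y):
    """From N hangover frames, pick a set Y representative of background noise.

    Assumption: the y lowest-energy frames are taken as most noise-like;
    the actual criterion is not specified by the abstract. Returns the
    frame indices that would be indicated in the first SID frame.
    """
    energies = [float(np.mean(np.square(f))) for f in hangover_frames]
    by_energy = sorted(range(len(hangover_frames)), key=energies.__getitem__)
    return sorted(by_energy[:y])
```

The receiving node, having received all N hangover frames plus these indices in the SID frame, can then base its comfort-noise parameters only on the indicated subset.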
SUPPORT FOR GENERATION OF COMFORT NOISE, AND GENERATION OF COMFORT NOISE
A method for generation of comfort noise for at least two audio channels. The method comprises determining a spatial coherence between audio signals on the respective audio channels, wherein at least one spatial coherence value per frame and frequency band is determined to form a vector of spatial coherence values. A vector of predicted spatial coherence values is formed by a weighted combination of a first coherence prediction and a second coherence prediction that are combined using a weight factor α. The method comprises signaling information about the weight factor α to the receiving node, for enabling the generation of the comfort noise for the at least two audio channels at the receiving node.
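The combination step described above is a per-band convex blend of two coherence predictors controlled by the signaled weight factor α. A minimal sketch, assuming the two predictions are already available as per-frequency-band vectors (how they are produced is not specified by the abstract):

```python
import numpy as np

def predict_spatial_coherence(pred1, pred2, alpha):
    """Weighted combination of two per-frequency-band coherence predictions.

    alpha is the weight factor that the transmitting node signals to the
    receiving node; the predictors themselves are outside this sketch.
    """
    pred1 = np.asarray(pred1, dtype=float)
    pred2 = np.asarray(pred2, dtype=float)
    return alpha * pred1 + (1.0 - alpha) * pred2
```

With α = 1 the result is the first prediction alone, with α = 0 the second alone, so signaling a single scalar per frame lets the receiver reproduce the full vector of predicted coherence values.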
Estimation of background noise in audio signals
Background noise estimators and methods are disclosed for estimating background noise in an audio signal. Some methods include obtaining at least one parameter associated with an audio signal segment, such as a frame or part of a frame, based on a first linear prediction gain, calculated as a quotient between a residual signal from a 0th-order linear prediction and a residual signal from a 2nd-order linear prediction for the audio signal segment. A second linear prediction gain is calculated as a quotient between a residual signal from a 2nd-order linear prediction and a residual signal from a 16th-order linear prediction for the audio signal segment. Whether the audio signal segment comprises a pause is determined based at least on the obtained at least one parameter; and a background noise estimate is updated based on the audio signal segment when the audio signal segment comprises a pause.
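The two gains above can be computed from residual energies of linear predictors of orders 0, 2, and 16. A sketch under the assumption of a standard Levinson-Durbin recursion on the frame's autocorrelation (frame windowing and the exact autocorrelation estimate used by the estimator are not specified by the abstract):

```python
import numpy as np

def lp_residual_energy(x, order):
    """Residual energy after linear prediction of the given order, via the
    Levinson-Durbin recursion on the signal's autocorrelation.
    Order 0 returns the plain signal energy."""
    n = len(x)
    r = np.array([np.dot(x[: n - k], x[k:]) for k in range(order + 1)])
    e = r[0]                 # order-0 residual energy = signal energy
    a = np.zeros(order + 1)
    a[0] = 1.0
    for m in range(1, order + 1):
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
        k = -acc / e         # reflection coefficient for order m
        prev = a.copy()
        for i in range(1, m):
            a[i] = prev[i] + k * prev[m - i]
        a[m] = k
        e *= 1.0 - k * k     # residual energy never increases with order
    return e

def prediction_gains(x):
    """The two quotients from the abstract: E(0)/E(2) and E(2)/E(16)."""
    e0 = lp_residual_energy(x, 0)
    e2 = lp_residual_energy(x, 2)
    e16 = lp_residual_energy(x, 16)
    return e0 / e2, e2 / e16
```

The first gain is large when an order-2 predictor already explains most of the signal (strongly correlated content such as voiced speech), while both gains stay near 1 for noise-like segments, which is why they are useful pause indicators.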