G10L2019/0001

Using Non-Parallel Voice Conversion for Speech Conversion Models

A method includes receiving a set of training utterances each including a non-synthetic speech representation of a corresponding utterance, and for each training utterance, generating a corresponding synthetic speech representation by using a voice conversion model. The non-synthetic speech representation and the synthetic speech representation form a corresponding training utterance pair. At each of a plurality of output steps for each training utterance pair, the method also includes generating, for output by a speech recognition model, a first probability distribution over possible non-synthetic speech recognition hypotheses for the non-synthetic speech representation and a second probability distribution over possible synthetic speech recognition hypotheses for the synthetic speech representation. The method also includes determining a consistent loss term for the corresponding training utterance pair based on the first and second probability distributions and updating parameters of the speech recognition model based on the consistent loss term.
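
The per-step comparison of the two distributions can be sketched as follows. This is a minimal illustration, not the patent's specified loss: the name `consistency_loss` and the choice of a symmetric KL divergence averaged over output steps are assumptions made for the example.

```python
import math

def consistency_loss(p_nonsyn, p_syn, eps=1e-12):
    """Illustrative consistency loss: a symmetric KL divergence between the
    per-step distributions produced for the non-synthetic and synthetic
    representations of the same utterance, averaged over output steps."""
    loss = 0.0
    for p, q in zip(p_nonsyn, p_syn):  # one (p, q) pair per output step
        for pi, qi in zip(p, q):       # one term per recognition hypothesis
            loss += pi * math.log((pi + eps) / (qi + eps))
            loss += qi * math.log((qi + eps) / (pi + eps))
    return loss / len(p_nonsyn)

# Two output steps, three recognition hypotheses each.
p = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
q = [[0.6, 0.3, 0.1], [0.1, 0.7, 0.2]]
```

A training loop would add this term to the recognition loss and backpropagate it to update the model parameters, pushing the two distributions toward agreement.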

Audio frame loss and recovery with redundant frames

An audio frame loss recovery method and apparatus are disclosed. In one implementation, data from some but not all audio frames is included in a redundant frame. The audio frames whose data is not included in the redundant frame may include multiple audio frames, but may not include more than two consecutive audio frames. Because not all audio frames are used in the redundant frame, the amount of information that needs to be transmitted in the redundant frame is reduced. An audio frame lost during transmission may be recovered either from the redundant frame, when the redundant frame includes data of the lost frame, or from at least one neighboring frame of the lost frame, derived from either the redundant frame or the successfully transmitted audio frames, when the redundant frame does not include data of the lost frame.
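
A minimal sketch of one such packing and recovery rule, under assumptions not taken from the patent: frames are included at a fixed stride of 3 (so at most two consecutive frames lack redundancy), and a missing frame is approximated by its nearest available neighbor rather than by any particular concealment algorithm.

```python
def build_redundant_frame(frames, stride=3):
    """Pack every `stride`-th frame's data into the redundant frame, so at
    most (stride - 1) consecutive frames are omitted. With stride=3, no more
    than two consecutive frames lack redundancy data."""
    return {i: frames[i] for i in range(0, len(frames), stride)}

def recover(lost_index, redundant, received):
    """Recover a lost frame from the redundant frame if its data is there;
    otherwise approximate it from the nearest available neighboring frame."""
    if lost_index in redundant:
        return redundant[lost_index]
    for offset in (1, -1, 2, -2):          # nearest neighbors first
        j = lost_index + offset
        if j in received or j in redundant:
            return received.get(j, redundant.get(j))
    return None
```

With six frames and stride 3, only frames 0 and 3 are carried redundantly; a lost frame 1 or 2 is rebuilt from a neighbor.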

Reordering Of Audio Objects In The Ambisonics Domain
20220030372 · 2022-01-27

In general, disclosed is a device that includes a memory and one or more processors, coupled to the memory, configured to perform an energy analysis with respect to one or more audio objects, in the ambisonics domain, in a first time segment. The one or more processors are also configured to perform a similarity measure between the one or more audio objects, in the ambisonics domain, in the first time segment, and the one or more audio objects, in the ambisonics domain, in a second time segment. In addition, the one or more processors are configured to perform a reorder of the one or more audio objects, in the ambisonics domain, in the first time segment with the one or more audio objects, in the ambisonics domain, in the second time segment, to generate one or more reordered audio objects in the first time segment.
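
The three steps can be sketched as follows. The concrete choices here are assumptions for illustration only: energy as the sum of squared ambisonic coefficients, similarity as a normalized dot product, and a greedy best-match reordering.

```python
import math

def energy(obj):
    # Energy of one ambisonics-domain object: sum of squared coefficients.
    return sum(c * c for c in obj)

def similarity(a, b):
    # Hypothetical similarity measure: normalized dot product of coefficients.
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(energy(a) * energy(b)) or 1.0
    return num / den

def reorder(segment1, segment2):
    """Greedily reorder the objects of the first time segment so each aligns
    with its most similar object in the second time segment."""
    order, used = [], set()
    for ref in segment2:
        best = max((i for i in range(len(segment1)) if i not in used),
                   key=lambda i: similarity(segment1[i], ref))
        used.add(best)
        order.append(segment1[best])
    return order
```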

METHODS AND DEVICES FOR VECTOR SEGMENTATION FOR CODING

A method for partitioning of input vectors for coding is presented. The method comprises obtaining an input vector. The input vector is segmented, in a non-recursive manner, into an integer number, N^SEG, of input vector segments. A representation of a respective relative energy difference between parts of the input vector on each side of each boundary between the input vector segments is determined, in a recursive manner. The input vector segments and the representations of the relative energy differences are provided for individual coding. Partitioning units and computer programs for partitioning of input vectors for coding, as well as positional encoders, are presented.
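
A minimal sketch of the two stages, under simplifying assumptions not taken from the patent: segments are equal-length (the vector length divisible by N^SEG), and the recursion visits the middle boundary of each range and computes a normalized energy difference between the two sides.

```python
def segment(vector, n_seg):
    """Non-recursive split of the input vector into n_seg equal-length
    segments (assumes len(vector) is divisible by n_seg for simplicity)."""
    L = len(vector) // n_seg
    return [vector[i * L:(i + 1) * L] for i in range(n_seg)]

def energy(seg):
    return sum(x * x for x in seg)

def relative_energy_differences(segments):
    """Recursively determine, for the middle boundary of each segment range,
    a representation of the relative energy difference between the two sides."""
    out = {}
    def recurse(lo, hi):
        if hi - lo < 2:
            return
        mid = (lo + hi) // 2
        left = sum(energy(s) for s in segments[lo:mid])
        right = sum(energy(s) for s in segments[mid:hi])
        out[mid] = (left - right) / (left + right or 1.0)
        recurse(lo, mid)
        recurse(mid, hi)
    recurse(0, len(segments))
    return out
```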

Speech coding using content latent embedding vectors and speaker latent embedding vectors

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating discrete latent representations of input audio data. Only the discrete latent representation needs to be transmitted from an encoder system to a decoder system in order for the decoder system to be able to effectively decode, i.e., reconstruct, the input audio data.
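
The encoder-to-decoder handoff can be sketched with a simple vector quantization step; the nearest-neighbor codebook lookup below is an illustrative stand-in for the patent's latent-embedding machinery, and both function names are hypothetical.

```python
def quantize(vector, codebook):
    """Illustrative encoder step: map a continuous latent vector to the index
    of its nearest codebook entry; only this discrete index is transmitted."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist(vector, codebook[i]))

def reconstruct(index, codebook):
    # Decoder step: look the received index up in its copy of the codebook.
    return codebook[index]
```

Only the integer index crosses the channel; the decoder needs nothing else beyond its copy of the codebook.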

Real time digital voice communication method
11640826 · 2023-05-02

A communication system includes at least one first device and at least one second device, linked in a manner that enables data transfer between them. The first device expresses the speech signal it receives as input in terms of energy functions representing the energy patterns, information functions representing the information patterns, and noise functions of the frames of the real speech samples; it then transfers the indexes of these functions in the database, together with the frame gain factor of each frame, to the second device. The second device retrieves the functions via the indexes from the copy database, which is a copy of the first device's database, and reconstructs the speech signal from these functions and the frame gain factors, providing it as the voice output.
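
The index-plus-gain exchange can be sketched as follows. This is a toy single-function version under assumptions not in the abstract: one database function per frame, a least-squares gain fit, and a dictionary standing in for the shared database.

```python
def encode_frame(frame, database):
    """Hypothetical first-device step: pick the database function that best
    matches the frame and a gain factor scaling it to the frame's level.
    Only the index and the gain are transferred to the second device."""
    best = None
    for idx, f in database.items():
        denom = sum(x * x for x in f) or 1.0
        g = sum(s * x for s, x in zip(frame, f)) / denom  # least-squares gain
        err = sum((s - g * x) ** 2 for s, x in zip(frame, f))
        if best is None or err < best[0]:
            best = (err, idx, g)
    return {"index": best[1], "gain": best[2]}

def decode_frame(message, copy_database):
    # The second device finds the function via the index in its copy of the
    # database and rescales it by the transmitted frame gain factor.
    return [message["gain"] * x for x in copy_database[message["index"]]]
```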

Flexible rendering of audio data

In general, techniques are described for obtaining audio rendering information from a bitstream. A method of rendering audio data includes receiving, at an interface of a device, an encoded audio bitstream, storing, to a memory of the device, encoded audio data of the encoded audio bitstream, parsing, by one or more processors of the device, a portion of the encoded audio data stored to the memory to select a renderer for the encoded audio data, the selected renderer comprising one of an object-based renderer or an ambisonic renderer, rendering, by the one or more processors of the device, the encoded audio data using the selected renderer to generate one or more rendered speaker feeds, and outputting, by one or more loudspeakers of the device, the one or more rendered speaker feeds.
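
The renderer-selection step can be sketched as a dispatch on a parsed header field; the `signal_type` key, the dictionary representation of the bitstream, and the placeholder renderer bodies are all assumptions made for illustration.

```python
def select_renderer(encoded_audio):
    """Hypothetical parse step: inspect a field of the stored encoded audio
    to choose between an object-based and an ambisonic renderer."""
    kind = encoded_audio.get("signal_type")
    if kind == "objects":
        return render_objects
    if kind == "ambisonics":
        return render_ambisonics
    raise ValueError(f"unknown signal type: {kind!r}")

def render_objects(encoded_audio):
    # Placeholder object-based renderer producing one feed per audio object.
    return [f"speaker feed for object {o}" for o in encoded_audio["payload"]]

def render_ambisonics(encoded_audio):
    # Placeholder ambisonic renderer.
    return ["speaker feeds from ambisonic coefficients"]
```

The rendered feeds would then be output by the device's loudspeakers.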

AUDIO ENCODER AND DECODER USING A FREQUENCY DOMAIN PROCESSOR, A TIME DOMAIN PROCESSOR, AND A CROSS PROCESSOR FOR CONTINUOUS INITIALIZATION

An audio encoder for encoding an audio signal includes: a first encoding processor for encoding a first audio signal portion in a frequency domain, wherein the first encoding processor includes: a time frequency converter for converting the first audio signal portion into a frequency domain representation having spectral lines up to a maximum frequency of the first audio signal portion; a spectral encoder for encoding the frequency domain representation; a second encoding processor for encoding a second, different audio signal portion in the time domain; a cross-processor for calculating, from the encoded spectral representation of the first audio signal portion, initialization data of the second encoding processor, so that the second encoding processor is initialized to encode the second audio signal portion immediately following the first audio signal portion in time in the audio signal; a controller configured for analyzing the audio signal and for determining which portion of the audio signal is the first audio signal portion encoded in the frequency domain and which portion of the audio signal is the second audio signal portion encoded in the time domain; and an encoded signal former for forming an encoded audio signal including a first encoded signal portion for the first audio signal portion and a second encoded signal portion for the second audio signal portion.
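
The controller/cross-processor interplay can be sketched as follows. Everything concrete here is an assumption for illustration: the stationarity flag as the controller's decision rule, raw samples standing in for the spectral transform, and the last few coefficients standing in for the time-domain encoder's initialization state.

```python
def controller(portion):
    # Hypothetical decision rule: stationary portions go to the
    # frequency-domain processor, transient portions to the time-domain one.
    return "frequency" if portion["stationary"] else "time"

def cross_process(encoded_spectrum):
    """Sketch of the cross-processor: derive time-domain initialization data
    from the encoded spectral representation of the preceding portion, so the
    time-domain encoder starts without a gap (continuous initialization)."""
    return {"history": encoded_spectrum[-4:]}  # last coefficients as state

def encode(portions):
    stream, init = [], {"history": []}
    for p in portions:
        if controller(p) == "frequency":
            spectrum = p["samples"]            # stand-in for the MDCT step
            stream.append(("freq", spectrum))
            init = cross_process(spectrum)     # prepare the other processor
        else:
            stream.append(("time", p["samples"], init["history"]))
    return stream
```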

APPARATUS FOR ENCODING A SPEECH SIGNAL EMPLOYING ACELP IN THE AUTOCORRELATION DOMAIN

An apparatus for encoding a speech signal by determining a codebook vector of a speech coding algorithm is provided. The apparatus includes a matrix determiner for determining an autocorrelation matrix R, and a codebook vector determiner for determining the codebook vector depending on the autocorrelation matrix R. The matrix determiner is configured to determine the autocorrelation matrix R by determining vector coefficients of a vector r, wherein the autocorrelation matrix R includes a plurality of rows and a plurality of columns, wherein the vector r indicates one of the columns or one of the rows of the autocorrelation matrix R, wherein R(i, j)=r(|i−j|), wherein R(i, j) indicates the coefficients of the autocorrelation matrix R, wherein i is a first index indicating one of a plurality of rows of the autocorrelation matrix R, and wherein j is a second index indicating one of the plurality of columns of the autocorrelation matrix R.
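
The stated structure R(i, j) = r(|i − j|) is a symmetric Toeplitz matrix, so the whole matrix follows directly from the single vector r; the function name below is illustrative.

```python
def autocorrelation_matrix(r):
    """Build the autocorrelation matrix R from its first row/column r using
    the Toeplitz structure R(i, j) = r(|i - j|)."""
    n = len(r)
    return [[r[abs(i - j)] for j in range(n)] for i in range(n)]
```

For r = (3, 2, 1) this yields a 3x3 matrix whose main diagonal is r(0) = 3 and whose off-diagonals repeat r(1) and r(2).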