Patent classifications
G10L2019/0001
AUDIO SIGNAL ENCODING AND DECODING METHOD USING NEURAL NETWORK MODEL, AND ENCODER AND DECODER FOR PERFORMING THE SAME
An audio signal encoding and decoding method using a neural network model, and an encoder and decoder for performing the same, are disclosed. A method of encoding an audio signal using a neural network model may include identifying an input signal, generating a quantized latent vector by inputting the input signal into a neural network model that encodes it, and generating a bitstream corresponding to the quantized latent vector. The neural network model may include i) a feature extraction layer that generates a latent vector by extracting features of the input signal, ii) a plurality of downsampling blocks that downsample the latent vector, and iii) a plurality of quantization blocks that quantize the downsampled latent vector.
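The three-stage pipeline the abstract describes (feature extraction, then downsampling blocks, then quantization blocks) can be sketched with simple stand-in operations. This is a minimal illustration, not the patented neural network: the moving average, decimation, and scalar quantizer below merely take the place of learned layers, and all names are hypothetical.

```python
def extract_features(signal, width=3):
    """Stand-in 'feature extraction layer': a moving average over the input."""
    padded = [signal[0]] * (width - 1) + list(signal)
    return [sum(padded[i:i + width]) / width for i in range(len(signal))]

def downsample(latent, factor=2):
    """Stand-in downsampling block: keep every `factor`-th sample."""
    return latent[::factor]

def quantize(latent, step=0.25):
    """Stand-in quantization block: round each value to the nearest step."""
    return [round(v / step) * step for v in latent]

def encode(signal, num_down_blocks=2):
    latent = extract_features(signal)          # i) feature extraction layer
    for _ in range(num_down_blocks):           # ii) downsampling blocks
        latent = downsample(latent)
    return quantize(latent)                    # iii) quantization block

# The quantized latent vector would then be entropy-coded into a bitstream.
signal = [0.1, 0.4, 0.35, 0.8, 0.6, 0.2, 0.05, 0.9]
print(encode(signal))
```

In a real system each stage would be a trained network layer; the point here is only the data flow from input signal to quantized latent vector.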
SELECTION OF QUANTISATION SCHEMES FOR SPATIAL AUDIO PARAMETER ENCODING
There is disclosed inter alia an apparatus for spatial audio signal encoding comprising means for: receiving, for each time-frequency block of a sub band of an audio frame, a spatial audio parameter comprising an azimuth and an elevation; determining a first distortion measure for the audio frame by determining a first distance measure for each time-frequency block and summing these first distance measures; determining a second distortion measure for the audio frame by determining a second distance measure for each time-frequency block and summing these second distance measures; and selecting either a first quantization scheme or a second quantization scheme for quantising the elevation and the azimuth for all time-frequency blocks of the sub band of the audio frame, wherein the selecting depends on the first and second distortion measures.
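The selection logic described above can be sketched as follows. This is an illustrative reading, not the patented implementation: the two "schemes" are reduced to two quantization step sizes, and the distance measure is plain absolute angular error.

```python
def quantize_angle(angle_deg, step_deg):
    """Round an angle to the nearest multiple of the quantization step."""
    return round(angle_deg / step_deg) * step_deg

def frame_distortion(blocks, step_deg):
    """Sum a per-block distance measure (absolute angular error) over the frame."""
    total = 0.0
    for azimuth, elevation in blocks:
        total += abs(azimuth - quantize_angle(azimuth, step_deg))
        total += abs(elevation - quantize_angle(elevation, step_deg))
    return total

def select_scheme(blocks, step_a=10.0, step_b=15.0):
    """Pick whichever scheme yields the lower summed distortion for the frame."""
    d_a = frame_distortion(blocks, step_a)
    d_b = frame_distortion(blocks, step_b)
    return ("scheme_a", step_a) if d_a <= d_b else ("scheme_b", step_b)

# One sub band of an audio frame: (azimuth, elevation) per time-frequency block.
blocks = [(33.0, 12.0), (47.0, 9.0), (41.0, 14.0)]
print(select_scheme(blocks))
```

The chosen scheme then applies to all time-frequency blocks of that sub band, which is the key trade-off the abstract describes: one selection decision per sub band per frame, based on summed distortion.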
Quantization of spatial audio parameters
There is disclosed inter alia an apparatus for spatial audio signal encoding which determines at least one spatial audio parameter comprising a direction parameter with an elevation component and an azimuth component. The elevation component and azimuth component of the direction parameter are then converted to an index value.
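One common way to convert an (elevation, azimuth) pair to a single index is to quantize each angle onto a grid and combine the two grid coordinates into one integer. The sketch below illustrates that idea only; the grid resolutions and the indexing rule are assumptions, not the patented scheme.

```python
AZ_STEPS = 72   # assumed 5-degree azimuth grid over [0, 360)
EL_STEPS = 37   # assumed 5-degree elevation grid over [-90, +90]

def direction_to_index(azimuth_deg, elevation_deg):
    """Quantize both angles to the grid, then fold them into one index."""
    az_idx = round(azimuth_deg / 5.0) % AZ_STEPS
    el_idx = round((elevation_deg + 90.0) / 5.0)
    return el_idx * AZ_STEPS + az_idx   # one integer encodes both components

def index_to_direction(index):
    """Invert the mapping back to quantized (azimuth, elevation) angles."""
    el_idx, az_idx = divmod(index, AZ_STEPS)
    return az_idx * 5.0, el_idx * 5.0 - 90.0

idx = direction_to_index(123.0, 30.0)
print(idx, index_to_direction(idx))
```

A single index per direction is convenient for entropy coding, since the decoder only needs the one integer to recover the quantized direction.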
Methods and devices for vector segmentation for coding
A method for partitioning input vectors for coding is presented. The method comprises obtaining an input vector. The input vector is segmented, in a non-recursive manner, into an integer number, N^SEG, of input vector segments. A representation of the relative energy difference between the parts of the input vector on each side of each boundary between the input vector segments is determined in a recursive manner. The input vector segments and the representations of the relative energy differences are provided for individual coding. Partitioning units and computer programs for partitioning input vectors for coding, as well as positional encoders, are also presented.
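The two steps, a non-recursive split into N^SEG segments followed by a recursive description of the energy split at each boundary, can be sketched as below. The binary-tree traversal order and the ratio representation are illustrative assumptions; the patent's exact representation differs.

```python
def segment(vec, n_seg):
    """Non-recursive split into n_seg contiguous, near-equal-length segments."""
    base, extra = divmod(len(vec), n_seg)
    out, start = [], 0
    for i in range(n_seg):
        size = base + (1 if i < extra else 0)
        out.append(vec[start:start + size])
        start += size
    return out

def energy(segs):
    """Total energy of a list of segments."""
    return sum(x * x for seg in segs for x in seg)

def relative_energy_diffs(segs):
    """Recursively emit the left-side energy fraction at each middle boundary."""
    if len(segs) < 2:
        return []
    mid = len(segs) // 2
    left, right = segs[:mid], segs[mid:]
    ratio = energy(left) / (energy(left) + energy(right))
    return [ratio] + relative_energy_diffs(left) + relative_energy_diffs(right)

segs = segment([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], 3)
print(segs)
print(relative_energy_diffs(segs))
```

The appeal of the recursive energy description is that each value is relative to its own subtree, so the decoder can allocate bits per segment without knowing absolute energies.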
Methods, apparatus and articles of manufacture to identify sources of network streaming services
Methods, apparatus and articles of manufacture to identify sources of network streaming services are disclosed. An example apparatus includes a coding format identifier to identify, from a received first audio signal representing a decompressed second audio signal, an audio compression configuration used to compress a third audio signal to form the second audio signal, and a source identifier to identify a source of the second audio signal based on the identified audio compression configuration.
Coding vectors decomposed from higher-order ambisonics audio signals
In general, techniques are described for coding vectors decomposed from higher-order ambisonic (HOA) coefficients. A device comprising a processor and a memory may perform the techniques. The processor may be configured to obtain, from a bitstream, data indicative of a plurality of weight values that represent a vector included in a decomposed version of the plurality of HOA coefficients. Each of the weight values may correspond to a respective one of a plurality of weights in a weighted sum of code vectors that represents the vector. The processor may further be configured to reconstruct the vector based on the weight values and the code vectors, and the memory may be configured to store the reconstructed vector.
Frequency band table design for high frequency reconstruction algorithms
The present document relates to audio encoding and decoding, and in particular to audio coding schemes that make use of high frequency reconstruction (HFR) methods. A system configured to determine a master scale factor band table of a highband signal (105) of an audio signal is described. The highband signal (105) is to be generated from a lowband signal (101) of the audio signal using an HFR scheme. The master scale factor band table is indicative of the frequency resolution of the spectral envelope of the highband signal (105).
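A band table of this kind partitions the highband spectrum into envelope bands. One common approach, shown as a hedged sketch below, is to space the band borders roughly logarithmically between a start and stop frequency bin; this is illustrative only and is not the table-design procedure of the patent.

```python
def band_table(start_bin, stop_bin, num_bands):
    """Return num_bands + 1 band borders, log-spaced over [start_bin, stop_bin]."""
    ratio = (stop_bin / start_bin) ** (1.0 / num_bands)
    borders = [round(start_bin * ratio ** k) for k in range(num_bands + 1)]
    # Force strictly increasing borders in case rounding collapsed neighbours.
    for k in range(1, len(borders)):
        borders[k] = max(borders[k], borders[k - 1] + 1)
    return borders

# Example: split highband bins 32..64 into 4 envelope bands.
print(band_table(32, 64, 4))
```

Fewer, wider bands give a coarser spectral envelope (fewer scale factors to transmit); more, narrower bands give finer frequency resolution at a higher bit cost.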
Apparatus for encoding a speech signal employing ACELP in the autocorrelation domain
An apparatus for encoding a speech signal by determining a codebook vector of a speech coding algorithm is provided. The apparatus includes a matrix determiner for determining an autocorrelation matrix R, and a codebook vector determiner for determining the codebook vector depending on the autocorrelation matrix R. The matrix determiner is configured to determine the autocorrelation matrix R by determining vector coefficients of a vector r, wherein the autocorrelation matrix R includes a plurality of rows and a plurality of columns, wherein the vector r indicates one of the columns or one of the rows of the autocorrelation matrix R, wherein R(i, j)=r(|i−j|), wherein R(i, j) indicates the coefficients of the autocorrelation matrix R, wherein i is a first index indicating one of a plurality of rows of the autocorrelation matrix R, and wherein j is a second index indicating one of the plurality of columns of the autocorrelation matrix R.
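The matrix construction in the abstract, R(i, j) = r(|i − j|), defines a symmetric Toeplitz matrix fully determined by its first row/column r. A minimal sketch of that construction (with made-up coefficient values):

```python
def autocorrelation_matrix(r):
    """Build the symmetric Toeplitz matrix with R(i, j) = r(|i - j|)."""
    n = len(r)
    return [[r[abs(i - j)] for j in range(n)] for i in range(n)]

# Illustrative autocorrelation coefficients r(0), r(1), r(2).
r = [4.0, 2.0, 1.0]
R = autocorrelation_matrix(r)
print(R)
```

Because the whole matrix is determined by the vector r, the encoder only needs to compute and store n coefficients instead of n², which is the practical benefit the abstract's construction implies.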
Speech coding using content latent embedding vectors and speaker latent embedding vectors
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating discrete latent representations of input audio data. Only the discrete latent representation needs to be transmitted from an encoder system to a decoder system in order for the decoder system to be able to effectively decode, i.e., reconstruct, the input audio data.