G10L25/30

A METHOD FOR TRAINING A NEURAL NETWORK TO DESCRIBE AN ENVIRONMENT ON THE BASIS OF AN AUDIO SIGNAL, AND THE CORRESPONDING NEURAL NETWORK

A neural network, a system using this neural network and a method for training a neural network to output a description of the environment in the vicinity of at least one sound acquisition device on the basis of an audio signal acquired by the sound acquisition device, the method including: obtaining audio and image training signals of a scene showing an environment with objects generating sounds, obtaining a target description of the environment seen on the image training signal, inputting the audio training signal to the neural network so that the neural network outputs a training description of the environment, and comparing the target description of the environment with the training description of the environment.
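
The claimed training loop maps directly onto a standard supervised setup. Below is a minimal sketch, assuming the environment description takes the form of a fixed grid of object classes and that the target grid is derived from the image training signal by some existing image model; AudioDescriptionNet, the mel-spectrogram input, and all dimensions are illustrative assumptions, not elements of the claims.

import torch
import torch.nn as nn

class AudioDescriptionNet(nn.Module):
    def __init__(self, n_mels=64, grid_cells=16, n_classes=10):
        super().__init__()
        self.encoder = nn.GRU(n_mels, 128, batch_first=True)
        self.head = nn.Linear(128, grid_cells * n_classes)
        self.grid_cells, self.n_classes = grid_cells, n_classes

    def forward(self, mel):                   # mel: (batch, frames, n_mels)
        _, h = self.encoder(mel)              # final hidden state: (1, batch, 128)
        return self.head(h[-1]).view(-1, self.grid_cells, self.n_classes)

def train_step(model, optimizer, mel, target_grid):
    # target_grid: (batch, grid_cells) class indices derived from the image
    # training signal, e.g. by an off-the-shelf detector (the "target
    # description"); the network output is the "training description".
    logits = model(mel)
    loss = nn.functional.cross_entropy(       # compare target vs. training description
        logits.flatten(0, 1), target_grid.flatten())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()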

AUDIO ENCODING METHOD, AUDIO DECODING METHOD, APPARATUS, COMPUTER DEVICE, STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
20230046509 · 2023-02-16

An audio encoding bit rate prediction model training method is performed by a computer device. The method includes: obtaining a sample audio feature parameter corresponding to each of sample audio frames in a first sample audio; performing encoding bit rate prediction on the sample audio feature parameter through an encoding bit rate prediction model, to obtain a sample encoding bit rate for each of the sample audio frames; performing audio encoding on the sample audio frames based on the corresponding sample encoding bit rates to generate sample audio data corresponding to the sample audio frames; performing audio decoding on the sample audio data, to obtain a second sample audio corresponding to the sample audio data; and training the encoding bit rate prediction model based on the first sample audio and the second sample audio until a sample encoding quality score reaches a target encoding quality score.
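
Because the codec round trip is not differentiable, one plausible way to realize this loop is a REINFORCE-style update in which the sampled per-frame bit rate is rewarded by the resulting quality score. The sketch below uses toy stand-ins (encode_decode, quality_score) for the real encoder, decoder, and quality metric; none of these names come from the application.

import torch
import torch.nn as nn

def encode_decode(frames, rate):
    # Stand-in codec round trip: reconstruction noise shrinks as bit rate grows.
    return frames + torch.randn_like(frames) / (1.0 + rate.abs().unsqueeze(-1))

def quality_score(original, decoded):
    # Stand-in per-frame quality metric (negative MSE) in place of e.g. PESQ.
    return -((original - decoded) ** 2).mean(dim=-1)

class RatePredictor(nn.Module):
    def __init__(self, n_features=40):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, feats):                 # feats: (frames, n_features)
        mu, log_sigma = self.net(feats).chunk(2, dim=-1)
        return mu.squeeze(-1), log_sigma.squeeze(-1)

def train_step(model, optimizer, feats, frames, target_quality=-0.01):
    mu, log_sigma = model(feats)
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    rate = dist.sample()                      # sample encoding bit rate per frame
    with torch.no_grad():                     # codec round trip, scored vs. target
        reward = quality_score(frames, encode_decode(frames, rate)) - target_quality
    loss = -(dist.log_prob(rate) * reward).mean()   # REINFORCE policy gradient
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()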

SPEECH ENHANCEMENT METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
20230050519 · 2023-02-16

A speech enhancement method includes: determining a glottal parameter corresponding to a target speech frame according to a frequency domain representation of the target speech frame; determining a gain corresponding to the target speech frame according to a gain corresponding to a historical speech frame of the target speech frame; determining an excitation signal corresponding to the target speech frame according to the frequency domain representation of the target speech frame; and synthesizing the glottal parameter corresponding to the target speech frame, the gain corresponding to the target speech frame, and the excitation signal corresponding to the target speech frame, to obtain an enhanced speech signal corresponding to the target speech frame.
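
The synthesis step is classical source-filter reconstruction, which can be sketched in a few lines: the excitation is run through an all-pole filter built from the glottal (LP) parameters and scaled by the gain. The coefficients and frame length below are dummy values for illustration.

import numpy as np
from scipy.signal import lfilter

def synthesize_frame(glottal_lpc, gain, excitation):
    # glottal_lpc: (p,) LP coefficients a_1..a_p of the glottal/vocal-tract
    # filter; gain: scalar; excitation: (N,) excitation signal for the frame.
    a = np.concatenate(([1.0], glottal_lpc))      # A(z) = 1 + a_1 z^-1 + ... + a_p z^-p
    return gain * lfilter([1.0], a, excitation)   # all-pole synthesis 1 / A(z)

# Usage with dummy values for a 10 ms frame at 16 kHz:
enhanced = synthesize_frame(np.array([-0.9, 0.2]), 0.5, np.random.randn(160))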

METHODS OF ENCODING AND DECODING, ENCODER AND DECODER PERFORMING THE METHODS

Provided is an encoding method according to various example embodiments and an encoder performing the method. The encoding method includes outputting a linear prediction (LP) coefficients bitstream and a residual signal by performing a linear prediction analysis on an input signal, outputting a first latent signal obtained by encoding a periodic component of the residual signal, using a first neural network module, outputting a first bitstream obtained by quantizing the first latent signal, using a quantization module, outputting a second latent signal obtained by encoding an aperiodic component of the residual signal, using the first neural network module, and outputting a second bitstream obtained by quantizing the second latent signal, using the quantization module, wherein the aperiodic component of the residual signal is calculated based on the periodic component of the residual signal decoded from the quantized first latent signal that is output by de-quantizing the first bitstream.
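
A control-flow sketch of the encoder side follows. The neural network module and quantizer are replaced by deliberately trivial stand-ins (decimation and uniform rounding) so the snippet runs; the point is the data flow, in particular that the aperiodic component is computed against the decoded periodic component, keeping encoder and decoder in sync. librosa.lpc and scipy's lfilter are real functions; everything else is assumed.

import numpy as np
import librosa
from scipy.signal import lfilter

def encode(signal, order=16, step=0.05):
    a = librosa.lpc(signal, order=order)          # LP coefficients "bitstream"
    residual = lfilter(a, [1.0], signal)          # apply A(z) -> residual signal

    z1 = residual[::2]                            # stand-in first latent (periodic part)
    bits1 = np.round(z1 / step)                   # quantize -> first bitstream
    periodic_dec = np.repeat(bits1 * step, 2)[:len(residual)]  # de-quantize + decode

    aperiodic = residual - periodic_dec           # computed from the *decoded* part
    bits2 = np.round(aperiodic / step)            # second latent -> second bitstream
    return a, bits1, bits2

a, bits1, bits2 = encode(np.random.randn(16000))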

Multimodal based punctuation and/or casing prediction

Techniques for predicting punctuation and casing using multimodal fusion are described. An exemplary method includes processing generated text by: tokenizing the generated text into sub-words, and generating a sequence of lexical features for the sub-words using a pre-trained lexical encoder; processing the audio by: generating a sequence of frame level acoustic embeddings using a pre-trained acoustic encoder on the audio, and generating task specific embeddings from the frame level acoustic embeddings; performing multimodal fusion of the sub-word level acoustic embeddings and the sequence of lexical features by: aligning the task specific embeddings to the sequence of lexical features, and combining the sequence of lexical features and the aligned acoustic sequence; predicting punctuation and casing from the combined sequence of lexical features and aligned acoustic sequence; concatenating the sub-words of the text, and applying the predicted punctuation and casing; and outputting text having the predicted punctuation and casing.
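
The fusion step can be sketched under simple assumptions: pool the frame-level acoustic embeddings into one vector per sub-word (here by mean-pooling over each token's frame span, a simplification of the learned alignment), concatenate with the lexical features, and feed the result to small punctuation and casing heads. All module names, dimensions, and label sets are illustrative.

import torch
import torch.nn as nn

N_PUNCT, N_CASE = 4, 3            # e.g. {none , . ?} and {lower Upper ALLCAPS}

class FusionHead(nn.Module):
    def __init__(self, d_lex=256, d_ac=128):
        super().__init__()
        self.punct = nn.Linear(d_lex + d_ac, N_PUNCT)
        self.case = nn.Linear(d_lex + d_ac, N_CASE)

    def forward(self, lex, ac_frames, spans):
        # lex: (n_tokens, d_lex) lexical features; ac_frames: (n_frames, d_ac)
        # acoustic embeddings; spans: (start_frame, end_frame) per token.
        ac = torch.stack([ac_frames[s:e].mean(0) for s, e in spans])  # align
        fused = torch.cat([lex, ac], dim=-1)                          # fuse
        return self.punct(fused), self.case(fused)

head = FusionHead()
lex = torch.randn(5, 256)                       # 5 sub-words
ac_frames = torch.randn(100, 128)               # 100 acoustic frames
spans = [(0, 20), (20, 35), (35, 60), (60, 80), (80, 100)]
punct_logits, case_logits = head(lex, ac_frames, spans)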

Speech feature extraction apparatus, speech feature extraction method, and computer-readable storage medium

A speech feature extraction apparatus 100 includes a voice activity detection unit 103 that drops non-voice frames from the frames corresponding to an input speech utterance and calculates a posterior of being voiced for each frame, a voice activity detection process unit 106 that calculates function values, used as weights when pooling frames to produce an utterance-level feature, from the given voice activity detection posteriors, and an utterance-level feature extraction unit 112 that extracts an utterance-level feature from the frames on the basis of multiple frame-level features, using the function values.
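
In plain terms, the VAD posteriors act as soft weights when pooling frame-level features into an utterance-level feature. A minimal sketch, collapsing units 103, 106, and 112 into plain functions and taking the function value to be the posterior itself:

import numpy as np

def pool_utterance(frame_feats, vad_posteriors, threshold=0.5):
    # frame_feats: (T, D) frame-level features; vad_posteriors: (T,)
    # probability that each frame is voiced.
    keep = vad_posteriors > threshold            # drop non-voice frames
    w = vad_posteriors[keep]                     # function values used as weights
    w = w / w.sum()                              # assumes at least one voiced frame
    return w @ frame_feats[keep]                 # (D,) utterance-level feature

feature = pool_utterance(np.random.randn(200, 64), np.random.rand(200))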

Emitting word timings with end-to-end models

A method includes receiving a training example that includes audio data representing a spoken utterance and a ground truth transcription. For each word in the spoken utterance, the method also includes inserting a placeholder symbol before the respective word identifying a respective ground truth alignment for a beginning and an end of the respective word, determining a beginning word piece and an ending word piece, and generating a first constrained alignment for the beginning word piece and a second constrained alignment for the ending word piece. The first constrained alignment is aligned with the ground truth alignment for the beginning of the respective word and the second constrained alignment is aligned with the ground truth alignment for the ending of the respective word. The method also includes constraining an attention head of a second pass decoder by applying the first and second constrained alignments.
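
One way to realize such a constraint is an additive attention mask that blocks the constrained word pieces from attending anywhere except a small window around their ground-truth frames. The sketch below builds that mask; the window size, shapes, and the constrained_mask name are illustrative assumptions, not the claimed mechanism verbatim.

import torch

NEG_INF = float("-inf")

def constrained_mask(n_tokens, n_frames, constraints, window=2):
    # constraints: dict {token_index: ground_truth_frame} covering the
    # beginning and ending word pieces of each word.
    mask = torch.zeros(n_tokens, n_frames)
    for tok, frame in constraints.items():
        mask[tok] = NEG_INF                      # block everything ...
        lo, hi = max(0, frame - window), min(n_frames, frame + window + 1)
        mask[tok, lo:hi] = 0.0                   # ... except near the alignment
    return mask  # add to the constrained head's attention logits before softmax

# Word "hello" -> pieces [_he, llo]: _he constrained to frame 12, llo to 19.
mask = constrained_mask(n_tokens=6, n_frames=50, constraints={1: 12, 2: 19})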