Patent classifications
G10L19/0018
NEURAL NETWORK MODEL FOR GENERATION OF COMPRESSED HAPTIC ACTUATOR SIGNAL FROM AUDIO INPUT
A method comprises inputting an audio signal into a machine learning circuit to compress the audio signal into a sequence of actuator signals. The machine learning circuit is trained by receiving a training set of acoustic signals and pre-processing the training set into pre-processed audio data that includes at least a spectrogram, and then training the machine learning circuit using the pre-processed audio data. The underlying neural network has a cost function based on a reconstruction error and a plurality of constraints. The machine learning circuit generates a sequence of haptic cues corresponding to the audio input, and the sequence of haptic cues is transmitted to a plurality of cutaneous actuators to generate a sequence of haptic outputs.
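To make the pipeline concrete, here is a minimal Python sketch of the described flow, with a random linear projection standing in for the trained machine learning circuit; the window sizes, actuator count, and penalty weight are illustrative assumptions, not values from the patent.

```python
import numpy as np

def spectrogram(audio, n_fft=256, hop=128):
    """Magnitude spectrogram via a sliding-window FFT (the pre-processing step)."""
    frames = [audio[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(audio) - n_fft, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))

rng = np.random.default_rng(0)
audio = rng.standard_normal(8000)            # stand-in for an input audio signal
spec = spectrogram(audio)                    # (frames, bins)

# Toy "machine learning circuit": a random linear projection compressing each
# spectrogram frame to intensities for 4 cutaneous actuators (illustrative only).
n_actuators = 4
W = rng.standard_normal((spec.shape[1], n_actuators)) * 0.01
haptic_cues = np.maximum(spec @ W, 0.0)      # one haptic cue vector per frame

# Cost-function shape from the abstract: reconstruction error plus constraints
# (a simple energy penalty stands in here for the plurality of constraints).
recon = haptic_cues @ W.T
cost = np.mean((spec - recon) ** 2) + 0.1 * np.mean(haptic_cues ** 2)
print(haptic_cues.shape, float(cost))
```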
INTERACTION METHOD AND ELECTRONIC DEVICE
The present disclosure provides an interaction method and an electronic device. The method includes: receiving a target conversation message input by a user; converting the target conversation message into a first phoneme sequence; performing phoneme coding on the first phoneme sequence according to a first phoneme conversion rule corresponding to a first conference group to obtain a second phoneme sequence; and sending the second phoneme sequence to a first receiving terminal of the first conference group, wherein the target conversation message is a voice message or a text message.
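A minimal sketch of this message flow, assuming a toy grapheme-to-phoneme table and a per-group phoneme substitution as the first phoneme conversion rule; all mappings are illustrative, not taken from the disclosure.

```python
# Toy grapheme-to-phoneme table (stand-in for real text/voice analysis).
G2P = {"h": "HH", "i": "IY", "d": "D", "e": "EH", "l": "L", "o": "OW"}

# First phoneme conversion rule for one conference group: remap selected phonemes.
GROUP_RULES = {"group1": {"HH": "F", "IY": "AY"}}

def to_first_sequence(text):
    """Target conversation message -> first phoneme sequence."""
    return [G2P[c] for c in text.lower() if c in G2P]

def encode_for_group(phonemes, group):
    """Apply the group's phoneme conversion rule -> second phoneme sequence."""
    rule = GROUP_RULES[group]
    return [rule.get(p, p) for p in phonemes]

first_seq = to_first_sequence("hello")
second_seq = encode_for_group(first_seq, "group1")
print(first_seq, "->", second_seq)   # "sent" to the group's receiving terminal
```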
Phase reconstruction in a speech decoder
Innovations in phase quantization during speech encoding and phase reconstruction during speech decoding are described. For example, to encode a set of phase values, a speech encoder omits higher-frequency phase values and/or represents at least some of the phase values as a weighted sum of basis functions. Or, as another example, to decode a set of phase values, a speech decoder reconstructs at least some of the phase values using a weighted sum of basis functions and/or reconstructs lower-frequency phase values and then uses at least some of the lower-frequency phase values to synthesize higher-frequency phase values. In many cases, the innovations improve the performance of a speech codec in low bitrate scenarios, even when encoded data is delivered over a network that suffers from insufficient bandwidth or transmission quality problems.
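The basis-function idea can be sketched as a least-squares fit: the encoder keeps only the basis weights for the lower-frequency phases, and the decoder extrapolates the higher-frequency phases from the reconstructed low band. The cosine basis and linear extrapolation rule below are assumptions, not the codec's actual choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_low, n_high, n_basis = 24, 16, 6
phases_low = np.unwrap(rng.uniform(-np.pi, np.pi, n_low))  # lower-frequency phases

# Encoder side: fit the low-band phases with a small cosine basis and keep weights.
k = np.arange(n_low)
basis = np.cos(np.outer(k, np.arange(n_basis)) * np.pi / n_low)  # (n_low, n_basis)
weights, *_ = np.linalg.lstsq(basis, phases_low, rcond=None)

# Decoder side: reconstruct the low band, then synthesize the omitted high band
# by extending the low-band phase trend linearly.
recon_low = basis @ weights
slope = (recon_low[-1] - recon_low[0]) / (n_low - 1)
recon_high = recon_low[-1] + slope * np.arange(1, n_high + 1)
print(np.max(np.abs(phases_low - recon_low)), recon_high[:3])
```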
Hierarchical encoder for speech conversion system
A speech conversion system is described that includes a hierarchical encoder and a decoder. The system may comprise a processor and memory storing instructions executable by the processor to: using a second recurrent neural network (RNN) (GRU1) and a first set of encoder vectors derived from a spectrogram as input to the second RNN, determine a second concatenated sequence; determine a second set of encoder vectors by doubling a stack height and halving a length of the second concatenated sequence; using the second set of encoder vectors, determine a third set of encoder vectors; and decode the third set of encoder vectors using an attention block.
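The pyramid step (halve the sequence length, double the stack height) is the core of this encoder and is easy to sketch; the linear map below stands in for GRU1, and all shapes are illustrative.

```python
import numpy as np

def halve_and_stack(x):
    """Pair adjacent time steps: halve the sequence length and double the
    feature (stack) height, as in a pyramid encoder."""
    t, d = x.shape
    t -= t % 2                                # drop an odd trailing frame
    return x[:t].reshape(t // 2, 2 * d)

rng = np.random.default_rng(2)
spec_vectors = rng.standard_normal((8, 4))    # first set of encoder vectors (T=8, D=4)

# Stand-in for GRU1: a fixed linear map playing the role of the second RNN.
W = rng.standard_normal((4, 4)) * 0.1
second_concat = np.tanh(spec_vectors @ W)     # "second concatenated sequence"

second_vectors = halve_and_stack(second_concat)   # (4, 8): length halved, height doubled
third_vectors = halve_and_stack(second_vectors)   # (2, 16): next pyramid level
print(second_vectors.shape, third_vectors.shape)  # fed to the attention-based decoder
```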
ELECTRONIC DEVICE AND CONTROL METHOD THEREOF
The electronic device may include a communication interface; a memory configured to store a first neural network model; and a processor configured to: receive, from an external electronic device via the communication interface, compressed information related to an acoustic feature obtained based on a text; decompress the compressed information to obtain decompressed information; and obtain sound information corresponding to the text by inputting the decompressed information into the first neural network model. The first neural network model may be obtained by learning a relationship between a plurality of sample acoustic features and a plurality of sample sounds corresponding to the plurality of sample acoustic features.
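A minimal sketch of the receive path, with zlib standing in for the compression scheme and a single linear layer standing in for the first neural network model; both are assumptions for illustration.

```python
import zlib
import numpy as np

rng = np.random.default_rng(3)
acoustic_feature = rng.standard_normal((10, 8)).astype(np.float32)  # e.g. mel frames

# External device side: compress the acoustic feature for transmission.
compressed = zlib.compress(acoustic_feature.tobytes())

# Electronic device side: decompress, then run the first neural network model.
restored = np.frombuffer(zlib.decompress(compressed), dtype=np.float32).reshape(10, 8)

W = rng.standard_normal((8, 64)) * 0.05      # toy stand-in for the vocoder model
sound = np.tanh(restored @ W).reshape(-1)    # sound information for the text
print(len(compressed), sound.shape)
```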
Duration informed attention network (DURIAN) for audio-visual synthesis
A method and apparatus include receiving a text input that includes a sequence of text components. Respective temporal durations of the text components are determined using a duration model. A spectrogram frame is generated based on the duration model. An audio waveform is generated based on the spectrogram frame. Video information is generated based on the audio waveform. The audio waveform is provided as an output along with a corresponding video.
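The duration-informed step amounts to repeating each text component's encoding for its predicted number of frames before spectrogram generation. A sketch, with made-up durations and feature sizes:

```python
import numpy as np

rng = np.random.default_rng(4)
text_components = ["D", "UH", "R", "IY"]          # sequence of text components
encodings = rng.standard_normal((len(text_components), 8))
durations = np.array([3, 5, 2, 6])                # frames predicted per component
                                                  # (illustrative duration-model output)

frame_level = np.repeat(encodings, durations, axis=0)   # align text to time
W = rng.standard_normal((8, 80)) * 0.1                  # toy frame decoder
spectrogram_frames = frame_level @ W                    # (sum(durations), 80)
print(spectrogram_frames.shape)   # waveform, then video, would follow downstream
```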
Deliberation Model-Based Two-Pass End-To-End Speech Recognition
A method of performing speech recognition using a two-pass deliberation architecture includes receiving a first-pass hypothesis and an encoded acoustic frame and encoding the first-pass hypothesis at a hypothesis encoder. The first-pass hypothesis is generated by a recurrent neural network (RNN) decoder model for the encoded acoustic frame. The method also includes generating, using a first attention mechanism attending to the encoded acoustic frame, a first context vector, and generating, using a second attention mechanism attending to the encoded first-pass hypothesis, a second context vector. The method also includes decoding the first context vector and the second context vector at a context vector decoder to form a second-pass hypothesis.
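A sketch of the second pass, using plain dot-product attention for both mechanisms and a linear map as the context vector decoder; the shapes, the shared query, and the vocabulary size are illustrative assumptions.

```python
import numpy as np

def attend(query, keys):
    """Dot-product attention: one context vector over a sequence of keys."""
    scores = keys @ query / np.sqrt(len(query))
    w = np.exp(scores - scores.max()); w /= w.sum()
    return w @ keys

rng = np.random.default_rng(5)
acoustic = rng.standard_normal((20, 16))     # encoded acoustic frames
hypothesis = rng.standard_normal((6, 16))    # encoded first-pass hypothesis

query = rng.standard_normal(16)              # decoder state (illustrative)
ctx_acoustic = attend(query, acoustic)       # first context vector
ctx_hypothesis = attend(query, hypothesis)   # second context vector

W = rng.standard_normal((32, 30)) * 0.1      # toy context-vector decoder
logits = np.concatenate([ctx_acoustic, ctx_hypothesis]) @ W
print(int(np.argmax(logits)))                # a second-pass hypothesis token
```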
Virtualized Speech in a Distributed Network Environment
Aspects of the disclosure relate to various systems and techniques that provide a method and apparatus for transmitting speech as text to a remote server and converting the text stream back to speech for delivery to a remote application. For example, a person, through workspace virtualization, is accessing a remote application that accepts speech as its input. The user speaks into a microphone, and the speech is converted into text by a local speech-to-text converter. The text version of the speech is sent to a remote server, which converts the text back to speech using a server-based text-to-speech converter, and the reconstructed speech is usable as input to the remote application or device.
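The transport path can be sketched with stub converters; a real deployment would plug in actual speech-to-text and text-to-speech engines, which the stubs below merely stand in for.

```python
# Stub stand-ins: "speech" is represented as utf-8 bytes so the sketch runs
# end to end; the network hop to the remote server is elided.
def local_speech_to_text(audio_bytes: bytes) -> str:
    return audio_bytes.decode("utf-8")        # local speech-to-text converter

def server_text_to_speech(text: str) -> bytes:
    return text.encode("utf-8")               # server-based text-to-speech converter

captured = "open the report".encode("utf-8")  # microphone input (stand-in)
text_stream = local_speech_to_text(captured)  # sent to the remote server as text
reconstructed = server_text_to_speech(text_stream)
print(text_stream, reconstructed)             # fed to the remote application
```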
VOICE CONVERSION AND VERIFICATION
Method, system and computer program product, the method comprising: receiving a first audio, wherein the first audio is a conversion of an audio by a first source to a second source, and wherein the first audio has embedded therein first information characterizing the first source of the audio; extracting from the first audio the first information of the first source embedded within the first audio; obtaining second information characterizing a third source; comparing the first information to the second information to obtain comparison results; and, subject to the comparison results indicating that the first source is the same as the third source, initiating an action.
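A sketch of the comparison step, assuming the embedded first information is a speaker embedding and that the comparison is cosine similarity against a threshold; both choices, and the threshold value, are illustrative.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(6)
first_info = rng.standard_normal(32)          # extracted from the first audio
second_info = first_info + 0.05 * rng.standard_normal(32)  # claimed third source

if cosine(first_info, second_info) > 0.9:     # first source matches third source
    print("initiating action")                # e.g., authorize the conversion
else:
    print("sources differ; no action")
```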
Caption assisted calling to maintain connection in challenging network conditions
Systems are provided for managing and coordinating STT/TTS systems and the communications between these systems when they are connected in online meetings, and for mitigating connectivity issues that may arise during the online meetings, to provide a seamless and reliable meeting experience with live captions and/or rendered audio. Initially, online meeting communications are transmitted over a lossy, connectionless protocol/channel. Then, in response to detected connectivity problems with one or more systems involved in the online meeting, which can cause jitter or packet loss, for example, an instruction is dynamically generated and processed for causing one or more of the connected systems to transmit and/or process the online meeting content over a more reliable connection/protocol, such as a connection-oriented protocol. Codecs at the systems are used, when needed, to convert speech to text with related speech attribute information and to convert text to speech.
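The fallback logic reduces to a loss-triggered transport decision, sketched below; the loss threshold and the string labels are illustrative, and a real system would migrate live sockets rather than return labels.

```python
# Illustrative threshold: switch transports once measured loss exceeds 5%.
LOSS_THRESHOLD = 0.05

def choose_transport(packets_sent: int, packets_acked: int) -> str:
    """Pick the transport for meeting content based on observed packet loss."""
    loss = 1.0 - packets_acked / packets_sent
    # Connectionless (e.g., UDP/RTP) while the link is healthy; fall back to a
    # connection-oriented protocol (e.g., TCP) when jitter/loss is detected.
    return "connection-oriented" if loss > LOSS_THRESHOLD else "connectionless"

print(choose_transport(1000, 990))   # healthy link -> connectionless
print(choose_transport(1000, 900))   # lossy link   -> connection-oriented
```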