Patent classifications
G10L19/00
METHOD FOR OUTPUTTING BLEND SHAPE VALUE, STORAGE MEDIUM, AND ELECTRONIC DEVICE
A method for outputting a blend shape value includes: performing feature extraction on obtained target audio data to obtain a target audio feature vector; inputting the target audio feature vector and a target identifier into an audio-driven animation model; inputting the target audio feature vector into an audio encoding layer, determining an input feature vector of a next layer at a time point (2t−n)/2 based on input feature vectors of a previous layer between a time point t and a time point t−n, determining a feature vector having a causal relationship with the input feature vector of the previous layer as a valid feature vector, sequentially outputting target-audio encoding features, and inputting the target identifier into a one-hot encoding layer for binary vector encoding to obtain a target-identifier encoding feature; and outputting a blend shape value corresponding to the target audio data.
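A minimal sketch of the pipeline this abstract describes, using NumPy: a causal layer that computes each next-layer feature from previous-layer features in the window [t−n, t], a one-hot encoding of the speaker identifier, and a linear head that emits per-frame blend shape values. The window length, feature widths, and blend-shape count are illustrative assumptions, not values from the patent.

```python
# Minimal sketch (not the patented model): a causal temporal layer that, for each
# output time step, uses only previous-layer features in the window [t - n, t],
# mirroring the abstract's causality constraint, followed by a one-hot speaker
# identifier and a linear head producing blend shape values.
import numpy as np

def causal_layer(features: np.ndarray, n: int, weights: np.ndarray) -> np.ndarray:
    """features: (T, D) previous-layer inputs; weights: (n + 1, D, D_out)."""
    T, _ = features.shape
    out = np.zeros((T, weights.shape[-1]))
    for t in range(T):
        # Only time points in [t - n, t] (a causal window) contribute.
        window = features[max(0, t - n): t + 1]
        taps = weights[-window.shape[0]:]            # align taps with available history
        out[t] = np.einsum("kd,kde->e", window, taps)
    return np.maximum(out, 0.0)                       # ReLU nonlinearity (assumed)

rng = np.random.default_rng(0)
T, D, D_enc, n_speakers, n_blendshapes, n = 100, 40, 64, 8, 52, 4
audio_feats = rng.normal(size=(T, D))                 # e.g. per-frame MFCC-like features
speaker_id = 3

enc = causal_layer(audio_feats, n, rng.normal(scale=0.1, size=(n + 1, D, D_enc)))
one_hot = np.eye(n_speakers)[speaker_id]              # binary vector encoding of the identifier
combined = np.concatenate([enc, np.tile(one_hot, (T, 1))], axis=1)
blend_shapes = combined @ rng.normal(scale=0.1, size=(D_enc + n_speakers, n_blendshapes))
print(blend_shapes.shape)                             # (100, 52): one blend shape vector per frame
```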
AUTOMATED DOMAIN-SPECIFIC CONSTRAINED DECODING FROM SPEECH INPUTS TO STRUCTURED RESOURCES
Methods, systems, and computer program products for automated domain-specific constrained decoding from speech inputs to structured resources are provided herein. A computer-implemented method includes converting at least a portion of at least one user-provided speech utterance into text by processing the at least one user-provided speech utterance using an artificial intelligence-based automatic speech recognition model; automatically training an artificial intelligence-based decoding engine, wherein automatically training the artificial intelligence-based decoding engine comprises constraining the artificial intelligence-based decoding engine based at least in part on a domain-specific model and the artificial intelligence-based automatic speech recognition model; and generating at least one of one or more domain-specific text outputs related to one or more structured resources associated with the domain and one or more domain-specific action outputs related to the one or more structured resources associated with the domain by processing at least a portion of the text using the artificial intelligence-based decoding engine.
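The core mechanism, constrained decoding, can be illustrated with a small sketch: candidate tokens at each step are masked so that only continuations permitted by a domain-specific grammar survive. The vocabulary, grammar, and scores below are invented for illustration and merely stand in for the ASR model and decoding engine named in the abstract.

```python
# Illustrative sketch of constrained decoding (not the patent's training procedure):
# at each step, token scores from an upstream model are masked so that only tokens
# which extend a phrase in a small domain-specific grammar (here, a prefix list of
# allowed commands) can be emitted. The vocabulary, grammar, and scores are made up.
from math import inf

DOMAIN_PHRASES = [
    ["open", "ticket"],
    ["open", "invoice"],
    ["close", "ticket"],
]

def allowed_next(prefix: list[str]) -> set[str]:
    """Tokens that continue at least one domain phrase after the given prefix."""
    return {p[len(prefix)] for p in DOMAIN_PHRASES
            if len(p) > len(prefix) and p[:len(prefix)] == prefix}

def constrained_decode(step_scores: list[dict[str, float]]) -> list[str]:
    """Greedy decode where out-of-grammar tokens get a score of -inf."""
    output: list[str] = []
    for scores in step_scores:
        allowed = allowed_next(output)
        if not allowed:                        # grammar exhausted: stop
            break
        best = max(allowed, key=lambda tok: scores.get(tok, -inf))
        output.append(best)
    return output

# Stand-in scores an ASR/decoder might assign; "opening" is not in the grammar,
# so the decoder is steered to the closest allowed domain token instead.
asr_scores = [
    {"opening": 0.6, "open": 0.3, "close": 0.1},
    {"ticket": 0.5, "tickets": 0.4, "invoice": 0.1},
]
print(constrained_decode(asr_scores))          # ['open', 'ticket']
```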
Low-complexity tonality-adaptive audio signal quantization
The invention provides an audio encoder for encoding an audio signal so as to produce therefrom an encoded signal, the audio encoder including: a framing device configured to extract frames from the audio signal; a quantizer configured to map spectral lines of a spectrum signal derived from a frame of the audio signal to quantization indices, wherein the quantizer has a dead-zone in which the input spectral lines are mapped to quantization index zero; and a control device configured to modify the dead-zone; wherein the control device includes a tonality calculating device configured to calculate at least one tonality-indicating value for at least one spectral line or for at least one group of spectral lines, and wherein the control device is configured to modify the dead-zone for the at least one spectral line or the at least one group of spectral lines depending on the respective tonality-indicating value.
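A rough sketch of a tonality-adaptive dead-zone quantizer in the spirit of this abstract: spectral lines are mapped to integer indices, values inside a dead-zone map to index zero, and the dead-zone width per line is modified by a tonality-indicating value. The step size, tonality measure, and widening rule are assumptions for illustration, not the claimed encoder.

```python
# Minimal sketch of a dead-zone scalar quantizer whose dead-zone width is adapted
# per spectral line by a tonality indicator. The step size, tonality measure
# (spectral flatness of a small neighbourhood), and the widening rule are
# illustrative assumptions.
import numpy as np

def tonality(spectrum: np.ndarray, k: int, width: int = 4) -> float:
    """Crude tonality indicator: 1 - spectral flatness of a neighbourhood around line k."""
    lo, hi = max(0, k - width), min(len(spectrum), k + width + 1)
    mags = np.abs(spectrum[lo:hi]) + 1e-12
    flatness = np.exp(np.mean(np.log(mags))) / np.mean(mags)   # geometric / arithmetic mean
    return 1.0 - flatness                                       # near 1 for tonal, near 0 for noisy

def quantize_line(x: float, step: float, dead_zone: float) -> int:
    """Map a spectral line to an index; values inside the dead-zone map to index zero."""
    if abs(x) < dead_zone:
        return 0
    return int(np.sign(x) * np.floor((abs(x) - dead_zone) / step + 1.0))

spectrum = np.array([0.1, 0.2, 5.0, 0.3, 0.1, 0.4, 0.2, 0.3])
step = 0.5
indices = []
for k, x in enumerate(spectrum):
    # Noisy (low-tonality) lines get a wider dead-zone, so more of them map to zero.
    dz = step * (1.5 - tonality(spectrum, k))
    indices.append(quantize_line(float(x), step, dz))
print(indices)
```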
Coding device, decoding device, and method and program thereof
A coding method and a decoding method are provided which can use, in combination, a predictive coding and decoding method that can accurately express coefficients convertible into linear prediction coefficients with a small code amount, and a coding and decoding method that can correctly obtain, by decoding, the coefficients convertible into linear prediction coefficients of the present frame provided that the linear prediction coefficient code of the present frame is correctly input to a decoding device. A coding device includes: a predictive coding unit that obtains a first code by coding a differential vector formed of differentials between a vector of coefficients which are convertible into linear prediction coefficients of more than one order of the present frame and a prediction vector containing at least a predicted vector from a past frame, and obtains a quantization differential vector corresponding to the first code; and a non-predictive coding unit that generates a second code by coding a correction vector which is formed of differentials between the vector of the coefficients which are convertible into the linear prediction coefficients of more than one order of the present frame and the quantization differential vector, or formed of some of the elements of the differentials.
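The combination of a predictive and a non-predictive path can be sketched as follows: a first code quantizes the difference between the present frame's coefficient vector and a prediction from the past frame, and a second code quantizes a correction (the coefficient vector minus the quantized differential), so a decoder holding both codes can rebuild the frame without relying on previously decoded frames. The quantizer, predictor, and vector length below are placeholder assumptions.

```python
# Rough sketch of the two-path idea: a predictive code quantizes the difference between
# the current coefficient vector and a prediction from the past frame, and a separate
# non-predictive code quantizes a correction, so the frame can be rebuilt from the
# present frame's two codes alone. Quantizer step, predictor, and vector length are
# assumptions, not the claimed device.
import numpy as np

STEP = 0.02                                            # uniform scalar quantizer step (assumed)

def quantize(v: np.ndarray) -> np.ndarray:
    return np.round(v / STEP).astype(int)              # "code": vector of integer indices

def dequantize(idx: np.ndarray) -> np.ndarray:
    return idx * STEP

def encode_frame(coeffs: np.ndarray, prediction: np.ndarray):
    diff_code = quantize(coeffs - prediction)           # first (predictive) code
    quant_diff = dequantize(diff_code)
    corr_code = quantize(coeffs - quant_diff)           # second (non-predictive) code
    return diff_code, corr_code

def decode_without_history(diff_code: np.ndarray, corr_code: np.ndarray) -> np.ndarray:
    # Reconstruction that needs only the present frame's two codes.
    return dequantize(diff_code) + dequantize(corr_code)

rng = np.random.default_rng(1)
coeffs = np.sort(rng.uniform(0.0, np.pi, size=10))      # e.g. LSP-like coefficients of the frame
prediction = coeffs + rng.normal(scale=0.05, size=10)   # stand-in prediction from the past frame

d, c = encode_frame(coeffs, prediction)
print(np.max(np.abs(decode_without_history(d, c) - coeffs)))   # bounded by half the step size
```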
Identifier
A computer device (100), configured to encode identifiers by providing audio identifiers therefrom, is described. The computer device (100) is configured to provide a set of audio signals as respective bitstreams. Each audio signal of the set of audio signals is defined based, at least in part, on audio signal information including at least one of a type, a fundamental frequency, a time signature and a time. Each audio signal comprises a set of audio segments. Each audio segment of the set of audio segments is defined based, at least in part, on audio segment information including at least one of a frequency, an amplitude, a transform, a time duration and an envelope. The computer device (100) is configured to receive an identifier and select a subset of audio signals from the set of audio signals according to the received identifier based, at least in part, on the audio signal information and/or the audio segment information. The computer device (100) is configured to process the selected subset of audio signals by combining the selected subset of audio signals to provide an audio identifier. The computer device (100) is configured to output the audio identifier in an output audio signal as an output bitstream, wherein the audio identifier encodes the identifier. Also described is a method of encoding identifiers by providing audio identifiers therefrom.
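A toy sketch of the selection-and-combination step: each bit of an integer identifier selects one predefined audio signal (here a sine tone), and the selected signals are summed into an audio identifier. Sample rate, tone frequencies, and duration are illustrative assumptions; a real device would also use the segment-level attributes (envelope, duration, time signature) listed above.

```python
# Toy sketch (not the claimed device): each bit of an integer identifier selects one
# sine-tone "audio signal" from a predefined set, and the selected signals are summed
# to form an audible identifier. Sample rate, tone frequencies, and duration are
# illustrative assumptions.
import numpy as np

SAMPLE_RATE = 16_000
DURATION_S = 0.5
TONE_FREQS_HZ = [440, 523, 659, 784, 880, 1047, 1319, 1568]   # one tone per identifier bit

def audio_identifier(identifier: int) -> np.ndarray:
    t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
    selected = [np.sin(2 * np.pi * f * t)
                for bit, f in enumerate(TONE_FREQS_HZ)
                if (identifier >> bit) & 1]                    # bit set -> signal selected
    if not selected:
        return np.zeros_like(t)
    mixed = np.sum(selected, axis=0)
    return mixed / np.max(np.abs(mixed))                       # normalise the output signal

signal = audio_identifier(0b00100101)                          # encodes the identifier 37
print(signal.shape, signal.dtype)
```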
Audio processing in a low-bandwidth networked system
The present disclosure is generally directed to a system that detects activation phrases within input audio signals transmitted over a low-bandwidth network. The system can use a two-stage activation phrase detection process. First, a sensing device, which can include a plurality of microphones for detecting an input audio signal, can detect an input audio signal that includes a candidate activation phrase. Second, the sensing device can transmit the recordings of the input audio signal to a client device for confirmation that the input audio signal includes the activation phrase.
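The two-stage flow can be sketched as follows: the sensing device runs a cheap first-pass score over its microphone channels and only forwards candidate audio for confirmation, keeping traffic on the low-bandwidth link small. The scoring functions and thresholds are placeholders, not the disclosed detectors.

```python
# Simplified sketch of the two-stage flow described above: a sensing device runs a
# cheap first-pass check and only forwards candidate audio for confirmation, keeping
# traffic on the low-bandwidth link small. The scoring functions and thresholds are
# placeholders.
import numpy as np

CANDIDATE_THRESHOLD = 0.5
CONFIRM_THRESHOLD = 0.8

def cheap_keyword_score(audio: np.ndarray) -> float:
    """Stage 1 (on the sensing device): crude proxy based on short-term energy."""
    return float(np.clip(np.mean(audio ** 2) * 10.0, 0.0, 1.0))

def confirm_keyword(audio: np.ndarray) -> bool:
    """Stage 2 (on the client device): stand-in for a larger activation-phrase model."""
    return cheap_keyword_score(audio) > CONFIRM_THRESHOLD      # placeholder logic

def sensing_device_pipeline(mic_channels: list[np.ndarray]) -> bool:
    best = max(mic_channels, key=cheap_keyword_score)          # pick the best microphone channel
    if cheap_keyword_score(best) < CANDIDATE_THRESHOLD:
        return False                                           # nothing sent over the network
    # In a real system, `best` would be transmitted to the client device here.
    return confirm_keyword(best)

rng = np.random.default_rng(2)
quiet = [rng.normal(scale=0.05, size=8000) for _ in range(3)]
loud = [rng.normal(scale=0.5, size=8000) for _ in range(3)]
print(sensing_device_pipeline(quiet), sensing_device_pipeline(loud))   # False True
```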
Systems and methods for real-time complex character animations and interactivity
Systems, methods, and non-transitory computer-readable media can identify a virtual character being presented to a user within a real-time immersive environment. A first animation to be applied to the virtual character is determined. A nonverbal communication animation to be applied to the virtual character simultaneously with the first animation is determined. The virtual character is animated in real-time based on the first animation and the nonverbal communication animation.
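One way to picture applying two animations simultaneously is a per-channel blend of a base pose and a nonverbal-communication overlay each frame; the sketch below assumes a simple named-channel pose representation and a linear blend weight, which are not taken from the patent.

```python
# Small sketch of applying two animations at once: a base (first) animation and a
# nonverbal-communication overlay are blended per named channel each frame. The pose
# representation and blend weight are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Pose:
    channels: dict[str, float]                 # e.g. joint angles or facial controls

def blend(base: Pose, overlay: Pose, weight: float) -> Pose:
    """Linear blend; channels present in only one pose pass through unchanged."""
    keys = base.channels.keys() | overlay.channels.keys()
    return Pose({
        k: (1 - weight) * base.channels.get(k, 0.0) + weight * overlay.channels.get(k, 0.0)
        if k in base.channels and k in overlay.channels
        else base.channels.get(k, overlay.channels.get(k, 0.0))
        for k in keys
    })

wave = Pose({"arm_raise": 0.9, "head_turn": 0.1})          # first animation (e.g. waving)
nod = Pose({"head_turn": 0.0, "head_nod": 0.6})            # nonverbal overlay (e.g. nodding)
print(blend(wave, nod, weight=0.5).channels)
```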