Patent classifications
G10L15/144
Pre-training of neural network by parameter decomposition
A technique for training a neural network including an input layer, one or more hidden layers, and an output layer, in which the trained neural network can be used to perform a task such as speech recognition. In the technique, a base of the neural network having at least one pre-trained hidden layer is prepared. A parameter set associated with one pre-trained hidden layer in the neural network is decomposed into a plurality of new parameter sets. The number of hidden layers in the neural network is increased by using the plurality of new parameter sets. Pre-training is then performed on the expanded neural network.
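The decomposition step can be sketched as follows: one pretrained layer's weight matrix is factored into two matrices whose product reconstructs it, so a single hidden layer can be replaced by two before pre-training resumes. SVD is used here as one illustrative decomposition; the patent does not commit to a specific factorization, so this is an assumption.

```python
import numpy as np

def decompose_layer(W, rank=None):
    """Split one pretrained weight matrix W (out x in) into two factors
    W1 (rank x in) and W2 (out x rank) whose product reconstructs W, so
    one hidden layer can be replaced by two. SVD is an illustrative
    choice of decomposition, not the patent's prescribed method."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    r = rank or len(s)
    W1 = np.diag(np.sqrt(s[:r])) @ Vt[:r]    # new lower layer (r x in)
    W2 = U[:, :r] @ np.diag(np.sqrt(s[:r]))  # new upper layer (out x r)
    return W1, W2

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))             # a pretrained layer's weights
W1, W2 = decompose_layer(W)                  # full rank: exact reconstruction
```

With `rank` smaller than the full rank, the same split also compresses the layer, which is a common reason to pick SVD for this kind of parameter decomposition.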
Method and system for providing adjunct sensory information to a user
A method for providing information to a user, the method including: receiving an input signal from a sensing device associated with a sensory modality of the user; generating a preprocessed signal upon preprocessing the input signal with a set of preprocessing operations; extracting a set of features from the preprocessed signal; processing the set of features with a neural network system; mapping outputs of the neural network system to a device domain associated with a device including a distribution of haptic actuators in proximity to the user; and at the distribution of haptic actuators, cooperatively producing a haptic output representative of at least a portion of the input signal, thereby providing information to the user.
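The pipeline described above can be sketched end to end: preprocess the input signal, extract simple features, and map them onto intensities for a distribution of haptic actuators. The feature choice (band energies) and the mapping are assumptions for brevity; the patent's neural network stage is replaced here by a fixed normalisation.

```python
import numpy as np

def to_actuator_pattern(signal, n_actuators=4):
    """Illustrative sketch of the pipeline: preprocessing (DC removal),
    feature extraction (spectral band energies), and mapping to a set of
    per-actuator intensities in [0, 1]. The real system interposes a
    trained neural network between features and the device domain."""
    x = signal - np.mean(signal)                   # preprocessing
    spectrum = np.abs(np.fft.rfft(x))              # feature extraction
    bands = np.array_split(spectrum, n_actuators)  # one band per actuator
    energy = np.array([b.mean() for b in bands])
    return energy / (energy.max() + 1e-12)         # device-domain intensities

rng = np.random.default_rng(1)
pattern = to_actuator_pattern(rng.standard_normal(1024))
```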
Method for training voice data set, computer device, and computer-readable storage medium
A method for training a voice data set is provided. A first test set selected from a first voice data set is obtained, along with a first voice model parameter obtained by performing first voice model training on the first voice data set. Data from a second voice data set is randomly selected to generate a second test set. Second voice model training is then performed based on the second voice data set and the first voice model parameter when the second test set and the first test set satisfy a similarity condition.
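The gating step can be sketched as: randomly select a second test set from the second voice data set and check a similarity condition against the first test set before second-stage training proceeds. The metric (distance between mean feature vectors) and the threshold are assumptions; the abstract does not fix them.

```python
import numpy as np

def second_training_allowed(first_test, second_pool, rng,
                            test_size=50, threshold=1.0):
    """Hedged sketch of the similarity condition: draw a random second
    test set and compare mean feature vectors. Metric and threshold are
    illustrative assumptions, not values from the method."""
    idx = rng.choice(len(second_pool), size=test_size, replace=False)
    second_test = second_pool[idx]
    dist = np.linalg.norm(first_test.mean(axis=0) - second_test.mean(axis=0))
    return dist < threshold    # True -> proceed with second model training

rng = np.random.default_rng(2)
first_test = rng.standard_normal((50, 13))    # e.g. 13-dim acoustic features
second_pool = rng.standard_normal((500, 13))
ok = second_training_allowed(first_test, second_pool, rng)
```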
Smart device input method based on facial vibration
A smart device input method based on facial vibration includes: collecting a facial vibration signal generated when a user performs voice input; extracting Mel-frequency cepstral coefficients from the facial vibration signal; and using the Mel-frequency cepstral coefficients as an observation sequence to obtain, via a trained hidden Markov model, the text input corresponding to the facial vibration signal. The facial vibration signal is collected by a vibration sensor arranged on glasses and is processed by: amplifying the collected signal; transmitting the amplified signal to the smart device via a wireless module; and, on the smart device, intercepting an effective portion of the received signal and extracting the Mel-frequency cepstral coefficients from that portion.
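The feature-extraction step can be sketched as a minimal MFCC computation: frame and window the vibration signal, take the power spectrum, apply a mel-spaced triangular filterbank, take logs, and decorrelate with a DCT-II. Frame, filterbank, and cepstral sizes below are assumptions, not values from the patent.

```python
import numpy as np

def mfcc(signal, sr=8000, n_fft=256, n_mels=20, n_ceps=12):
    """Minimal MFCC sketch (no pre-emphasis or liftering). Parameter
    values are illustrative assumptions."""
    hop = n_fft // 2
    n_frames = 1 + (len(signal) - n_fft) // hop
    win = np.hamming(n_fft)
    frames = np.stack([signal[i * hop:i * hop + n_fft] * win
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # mel-spaced triangular filterbank
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    imel = lambda m: 700 * (10 ** (m / 2595) - 1)
    edges = imel(np.linspace(mel(0), mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * edges / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for j in range(1, n_mels + 1):
        lo, c, hi = bins[j - 1], bins[j], bins[j + 1]
        fb[j - 1, lo:c] = (np.arange(lo, c) - lo) / max(c - lo, 1)
        fb[j - 1, c:hi] = (hi - np.arange(c, hi)) / max(hi - c, 1)
    logmel = np.log(power @ fb.T + 1e-10)
    # DCT-II, keeping the first n_ceps coefficients
    k = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * k + 1) / (2 * n_mels)))
    return logmel @ dct.T    # observation sequence: (n_frames, n_ceps)

rng = np.random.default_rng(3)
obs = mfcc(rng.standard_normal(4000))
```

In the decoding stage of the method, `obs` plays the role of the observation sequence fed to the trained hidden Markov model to recover the text input.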
Masking systems and methods
Term masking is performed by generating a time-alignment value for a plurality of identifiable units of sound in vocal audio content contained in a mixed audio track, force-aligning each of the plurality of identifiable units of sound to the vocal audio content based on the time-alignment value, thereby generating a plurality of force-aligned identifiable units of sound, identifying from the plurality of force-aligned identifiable units of sound a force-aligned identifiable unit of sound to be muddled, and audio muddling the force-aligned identifiable unit of sound to be muddled.
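The final masking step can be sketched as: each force-aligned identifiable unit of sound is a (word, start, end) tuple over the mixed audio track, and flagged words are "muddled" here by zeroing their samples. A production system could instead substitute noise or spectrally scramble the span; zeroing is an assumption for brevity.

```python
import numpy as np

def mask_terms(audio, sr, alignments, terms):
    """Mute the sample span of every force-aligned word that appears in
    `terms`. Alignments are (word, start_seconds, end_seconds) tuples;
    zeroing as the 'muddling' operation is an illustrative assumption."""
    out = audio.copy()
    for word, start, end in alignments:
        if word in terms:
            out[int(start * sr):int(end * sr)] = 0.0
    return out

sr = 100
audio = np.ones(300)    # 3 s of dummy vocal audio content
align = [("hello", 0.0, 1.0), ("badword", 1.0, 2.0), ("world", 2.0, 3.0)]
masked = mask_terms(audio, sr, align, {"badword"})
```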
Natural language processing with contextual data representing displayed content
Multi-modal natural language processing systems are provided. Some systems are context-aware systems that use multi-modal data to improve the accuracy of natural language understanding as it is applied to spoken language input. Machine learning architectures are provided that jointly model spoken language input (“utterances”) and information displayed on a visual display (“on-screen information”). Such machine learning architectures can improve upon, and solve problems inherent in, existing spoken language understanding systems that operate in multi-modal contexts.
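One way the joint modelling can be sketched: score each on-screen item against the utterance embedding with a bilinear form, then normalise over the displayed items. In a real system the matrix `W` and both embeddings come from trained encoders; here they are supplied directly and are assumptions.

```python
import numpy as np

def rank_screen_items(utterance_emb, screen_embs, W):
    """Hedged sketch of jointly modelling an utterance with on-screen
    information: a bilinear match score per displayed item, softmaxed
    into a probability that the item is the utterance's referent."""
    scores = screen_embs @ W @ utterance_emb
    e = np.exp(scores - scores.max())     # stable softmax
    return e / e.sum()                    # one probability per item

rng = np.random.default_rng(4)
probs = rank_screen_items(rng.standard_normal(16),      # utterance embedding
                          rng.standard_normal((5, 16)), # 5 on-screen items
                          rng.standard_normal((16, 16)))
```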
Machine learning used to detect alignment and misalignment in conversation
Digitized media is received that records a conversation between individuals. Cues are extracted from the digitized media that indicate properties of the conversation. The cues are entered as training data into a machine learning module to create a trained machine learning model. The trained machine learning model is then used by a processor to detect misalignments in subsequent digitized conversations.
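The training stage can be sketched as follows: cue vectors extracted from digitized conversations (the specific cues, e.g. overlap rate or pitch convergence, are hypothetical here) are paired with alignment labels and fed to a simple logistic-regression stand-in for the machine learning module. The model family is an assumption; the abstract does not specify one.

```python
import numpy as np

def train_cue_model(cues, labels, lr=0.5, steps=1000):
    """Fit a logistic-regression classifier by gradient descent on
    conversation-cue vectors, labelled 1 for misalignment. A stand-in
    for the abstract's unspecified machine learning module."""
    w, b = np.zeros(cues.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(cues @ w + b)))   # predicted probability
        g = p - labels                              # gradient of log-loss
        w -= lr * cues.T @ g / len(labels)
        b -= lr * g.mean()
    return w, b

# toy cues: misaligned conversations score higher on both features
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(-1, 0.3, (40, 2)), rng.normal(1, 0.3, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
w, b = train_cue_model(X, y)
pred = (X @ w + b) > 0    # detect misalignment in (here, the same) conversations
```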
Acoustic model training method, speech recognition method, apparatus, device and medium
An acoustic model training method, a speech recognition method, an apparatus, a device and a medium. The acoustic model training method comprises: performing feature extraction on a training speech signal to obtain an audio feature sequence; training the audio feature sequence with a phoneme Gaussian Mixture Model-Hidden Markov Model (GMM-HMM) to obtain a phoneme feature sequence; and training the phoneme feature sequence with a Deep Neural Network-Hidden Markov Model (DNN-HMM) sequence training model to obtain a target acoustic model. The acoustic model training method effectively saves the time required for acoustic model training, improves training efficiency, and ensures recognition efficiency.
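The first training stage can be sketched by scoring each frame of the audio feature sequence under per-phoneme Gaussians, yielding the phoneme-level sequence that the DNN-HMM stage would then consume. A single diagonal Gaussian per phoneme stands in for the GMM-HMM here, and all parameters are illustrative assumptions.

```python
import numpy as np

def phoneme_posteriors(features, means, variances):
    """Posterior probability of each phoneme for each feature frame under
    per-phoneme diagonal Gaussians (a one-component stand-in for the
    GMM-HMM stage; no transition model is applied)."""
    diff = features[:, None, :] - means   # (frames, phonemes, dims)
    ll = -0.5 * ((diff ** 2) / variances
                 + np.log(2 * np.pi * variances)).sum(-1)
    ll -= ll.max(axis=1, keepdims=True)   # stabilise before exponentiating
    p = np.exp(ll)
    return p / p.sum(axis=1, keepdims=True)

means = np.array([[0.0, 0.0], [5.0, 5.0]])   # two toy 'phoneme' models
variances = np.ones((2, 2))
frames = np.array([[0.1, -0.1], [4.9, 5.2]]) # two audio feature frames
post = phoneme_posteriors(frames, means, variances)
```

In the full method, the resulting phoneme sequence (here, the per-frame argmax of `post`) would be the training target for the DNN-HMM sequence-training stage.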