Patent classifications
G10L2015/081
Information processing device and setting device
A control section includes: an identifying section configured to, by referring to one or more search keywords set for one or more control targets, identify at least one search keyword from among the one or more search keywords, the at least one search keyword matching any of one or more main words contained in input data acquired through voice input; and a selecting section configured to select at least one control target from among the one or more control targets based on one or more numeric values obtained through calculation of one or more expressions each of which batch-converts, into numerical form, one or more of the one or more search keywords set for the one or more control targets.
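The identify-then-select flow described above can be sketched as follows. The dictionary layout and the counting score (one assumed form of an expression that batch-converts a target's keywords into a numeric value) are illustrative assumptions, not the patent's actual method.

```python
def identify_keywords(main_words, target_keywords):
    """Identifying section: return every search keyword (across all
    control targets) that matches a main word from the voice input."""
    matched = set()
    for keywords in target_keywords.values():
        for kw in keywords:
            if kw in main_words:
                matched.add(kw)
    return matched


def select_target(main_words, target_keywords):
    """Selecting section: score each control target by batch-converting
    its keyword set into one numeric value (here, the count of matched
    keywords - an assumed scoring rule) and pick the best target."""
    matched = identify_keywords(main_words, target_keywords)
    scores = {target: sum(1 for kw in kws if kw in matched)
              for target, kws in target_keywords.items()}
    return max(scores, key=scores.get)
```

For example, with targets `{"light": ["lamp", "light"], "tv": ["television", "tv"]}` and the spoken main words `{"turn", "on", "light"}`, the selecting section would choose `"light"`.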
METHOD OF RECOGNISING A SOUND EVENT
A method for recognising at least one of a non-verbal sound event and a scene in an audio signal comprising a sequence of frames of audio data, the method comprising: for each frame of the sequence: processing the frame of audio data to extract multiple acoustic features for the frame of audio data; and classifying the acoustic features to classify the frame by determining, for each of a set of sound classes, a score that the frame represents the sound class; processing the sound class scores for multiple frames of the sequence of frames to generate a sound class decision for each frame; and processing the sound class decisions for the sequence of frames to recognise the at least one of a non-verbal sound event and a scene.
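The pipeline of this abstract - per-frame class scores, smoothed into per-frame decisions, then collapsed into events - might look like the sketch below. The moving-average smoothing window and the minimum event length are assumptions; the patent does not fix a particular score-processing method.

```python
import numpy as np


def frame_decisions(scores, window=3):
    """Smooth per-class scores over neighbouring frames (an assumed
    choice of score processing), then pick one class per frame."""
    kernel = np.ones(window) / window
    smoothed = np.stack([np.convolve(scores[:, c], kernel, mode="same")
                         for c in range(scores.shape[1])], axis=1)
    return smoothed.argmax(axis=1)


def recognise_events(decisions, classes, min_len=2):
    """Collapse runs of identical frame decisions into
    (class, start_frame, end_frame) events, dropping very short runs."""
    events, start = [], 0
    for i in range(1, len(decisions) + 1):
        if i == len(decisions) or decisions[i] != decisions[start]:
            if i - start >= min_len:
                events.append((classes[decisions[start]], start, i))
            start = i
    return events
```

Given a score matrix whose middle frames favour a "dog_bark" class, `recognise_events(frame_decisions(scores), ["silence", "dog_bark"])` yields one bark event bracketed by silence.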
METHOD AND APPARATUS FOR GENERATING SPEECH
A speech generation method and apparatus are disclosed. The speech generation method includes obtaining, by a processor, a linguistic feature and a prosodic feature from an input text, determining, by the processor, a first candidate speech element through a cost calculation and a Viterbi search based on the linguistic feature and the prosodic feature, generating, at a speech element generator implemented at the processor, a second candidate speech element based on the linguistic feature or the prosodic feature and the first candidate speech element, and outputting, by the processor, an output speech by concatenating the second candidate speech element and a speech sequence determined through the Viterbi search.
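The cost calculation and Viterbi search over candidate speech elements can be illustrated with the toy dynamic program below. It covers only the unit-selection step; the target and concatenation cost functions are supplied by the caller, and the second, generated candidate stage of the abstract is omitted.

```python
def viterbi_select(candidates, target_cost, concat_cost):
    """Minimum-cost path through per-time-step candidate speech
    elements: total cost = sum of target costs plus concatenation
    costs between consecutive units."""
    T = len(candidates)
    # best[t][j] = (cumulative cost, backpointer) for unit j at step t
    best = [[(target_cost(0, u), None) for u in candidates[0]]]
    for t in range(1, T):
        row = []
        for u in candidates[t]:
            prev = min(range(len(candidates[t - 1])),
                       key=lambda k: best[t - 1][k][0]
                       + concat_cost(candidates[t - 1][k], u))
            cost = (best[t - 1][prev][0]
                    + concat_cost(candidates[t - 1][prev], u)
                    + target_cost(t, u))
            row.append((cost, prev))
        best.append(row)
    # Backtrace from the cheapest final unit.
    j = min(range(len(best[-1])), key=lambda k: best[-1][k][0])
    path = [j]
    for t in range(T - 1, 0, -1):
        j = best[t][j][1]
        path.append(j)
    path.reverse()
    return [candidates[t][path[t]] for t in range(T)]
```

With units represented as plain numbers, a target cost of distance-to-target and a concatenation cost of distance-between-units, the search picks the smooth low-cost sequence.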
Neural network method and apparatus
A method and apparatus for training a recognition model and a recognition method and apparatus using the model are disclosed. The apparatus for training the model obtains an estimation hidden vector output from a hidden layer of the model in response to an estimation output vector output from the model at a previous time being input into the model at a current time, and trains the model such that the estimation hidden vector of the current time matches an answer hidden vector output from the hidden layer in response to an answer output vector, corresponding to the estimation output vector of the previous time, being input into the model at the current time.
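The training signal described above - the hidden vector produced when the model's own previous estimate is fed back should match the hidden vector produced when the ground-truth answer is fed back - can be sketched with a toy tanh recurrent cell. The cell, the shapes, and the mean-squared-error loss are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W_in, W_h = rng.normal(size=(4, 3)), rng.normal(size=(4, 4))


def hidden(x, h_prev):
    """Assumed hidden layer: a simple tanh recurrent cell."""
    return np.tanh(W_in @ x + W_h @ h_prev)


def hidden_matching_loss(est_prev_out, answer_prev_out, h_prev):
    """Train so the estimation hidden vector (estimate fed back in)
    matches the answer hidden vector (answer fed back in)."""
    h_est = hidden(est_prev_out, h_prev)
    h_ans = hidden(answer_prev_out, h_prev)
    return float(np.mean((h_est - h_ans) ** 2))
```

The loss is zero exactly when the fed-back estimate reproduces the answer's hidden vector, which is the matching condition the abstract trains toward.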
DEEP LEARNING INTERNAL STATE INDEX-BASED SEARCH AND CLASSIFICATION
Systems and methods are disclosed for generating internal state representations of a neural network during processing and using the internal state representations for classification or search. In some embodiments, the internal state representations are generated from the output activation functions of a subset of nodes of the neural network. The internal state representations may be used for classification by training a classification model using internal state representations and corresponding classifications. The internal state representations may be used for search by producing a search feature from a search input and comparing the search feature with one or more feature representations to find the feature representation with the highest degree of similarity.
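The search use of internal states might look like the sketch below: take the activations of a chosen subset of nodes as a feature vector, then return the stored representation most similar to the query's feature. Cosine similarity is an assumed choice; the abstract only requires some degree-of-similarity comparison.

```python
import numpy as np


def internal_state(layer_activations, node_indices):
    """Feature representation from the output activations of a chosen
    subset of nodes."""
    return layer_activations[node_indices]


def search(query_feature, index_features):
    """Return the index of the stored feature representation with the
    highest cosine similarity to the query's search feature."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = [cos(query_feature, f) for f in index_features]
    return int(np.argmax(sims))
```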
Generating summaries and insights from meeting recordings
One embodiment of the present invention sets forth a technique for generating a summary of a recording. The technique includes generating an index associated with the recording, wherein the index identifies a set of terms included in the recording and, for each term in the set of terms, a corresponding location of the term in the recording. The technique also includes determining categories of predefined terms to be identified in the index and identifying a first subset of the terms in the index that match a first portion of the predefined terms in the categories. The technique further includes outputting a summary of the recording comprising the locations of the first subset of terms in the recording and listings of the first subset of terms under one or more corresponding categories.
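The index-then-categorise steps of this technique can be sketched directly: build a term-to-locations index over the recording's transcript, then match it against categories of predefined terms. Word positions stand in for locations in the recording; the category lists are illustrative assumptions.

```python
def build_index(transcript_words):
    """Index the recording: map each term to the locations (here, word
    positions) where it occurs."""
    index = {}
    for pos, word in enumerate(transcript_words):
        index.setdefault(word.lower(), []).append(pos)
    return index


def summarise(index, categories):
    """For each category of predefined terms, list the terms found in
    the index together with their locations in the recording."""
    summary = {}
    for category, terms in categories.items():
        found = {t: index[t] for t in terms if t in index}
        if found:
            summary[category] = found
    return summary
```

For a transcript like "we fixed the login bug and shipped the release" and an assumed "action items" category containing "fixed" and "shipped", the summary lists both terms with their positions, while categories with no matches are dropped.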
PERFORMING SPEECH RECOGNITION USING A LOCAL LANGUAGE CONTEXT INCLUDING A SET OF WORDS WITH DESCRIPTIONS IN TERMS OF COMPONENTS SMALLER THAN THE WORDS
A method of a local recognition system controlling a host device to perform one or more operations is provided. The method includes receiving, by the local recognition system, a query; implementing, by the local recognition system, a local language context comprising a set of words with descriptions in terms of components smaller than the words; and performing speech recognition on the received query, using the local language context, to create a transcribed query. Further, the method includes controlling the host device in dependence upon the speech recognition performed on the transcribed query.
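One plausible reading of a "local language context" whose words are described in components smaller than the words is a small lexicon mapping each word to a phoneme sequence, with recognition matching the incoming component stream against those sequences. The lexicon entries and the greedy matching strategy below are assumptions for illustration.

```python
# Assumed local language context: word -> sub-word components (phonemes).
LEXICON = {
    "play":  ["p", "l", "ey"],
    "music": ["m", "y", "uw", "z", "ih", "k"],
}


def transcribe(phonemes, lexicon=LEXICON):
    """Greedily match the phoneme stream against each word's component
    sequence to produce a transcribed query."""
    words, i = [], 0
    while i < len(phonemes):
        for word, comps in lexicon.items():
            if phonemes[i:i + len(comps)] == comps:
                words.append(word)
                i += len(comps)
                break
        else:
            i += 1  # skip an unrecognised component
    return " ".join(words)
```

The transcribed query (e.g. "play music") would then drive whatever operation the host device exposes for it.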
END-TO-END NEURAL NETWORKS FOR SPEECH RECOGNITION AND CLASSIFICATION
Systems and methods are disclosed for end-to-end neural networks for speech recognition and classification and additional machine learning techniques that may be used in conjunction or separately. Some embodiments comprise multiple neural networks, directly connected to each other to form an end-to-end neural network. One embodiment comprises a convolutional network, a first fully-connected network, a recurrent network, a second fully-connected network, and an output network. Some embodiments are related to generating speech transcriptions, and some embodiments relate to classifying speech into a number of classifications.
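The five-stage stack named in the abstract - convolutional, fully-connected, recurrent, fully-connected, output - can be sketched at the shape level with plain NumPy. All layer sizes, the tanh recurrence, and the softmax output are illustrative assumptions, not the disclosed architecture's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)


def conv1d(x, w):                      # x: (T, F); w: (k, F, C)
    k = w.shape[0]
    return np.stack([np.einsum("kf,kfc->c", x[t:t + k], w)
                     for t in range(x.shape[0] - k + 1)])


def dense(x, w):                       # per-frame fully-connected + ReLU
    return np.maximum(x @ w, 0.0)


def rnn(x, w_in, w_h):                 # simple tanh recurrence over frames
    h, out = np.zeros(w_h.shape[0]), []
    for frame in x:
        h = np.tanh(frame @ w_in + h @ w_h)
        out.append(h)
    return np.stack(out)


def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


# End-to-end pass: 20 frames of 8 features -> per-frame posteriors
# over 5 classes (all dimensions assumed).
x = rng.normal(size=(20, 8))
w_conv = rng.normal(size=(3, 8, 16))
w_fc1 = rng.normal(size=(16, 16))
w_in, w_h = rng.normal(size=(16, 12)), rng.normal(size=(12, 12))
w_fc2 = rng.normal(size=(12, 10))
w_out = rng.normal(size=(10, 5))

h = conv1d(x, w_conv)                  # convolutional network
h = dense(h, w_fc1)                    # first fully-connected network
h = rnn(h, w_in, w_h)                  # recurrent network
h = dense(h, w_fc2)                    # second fully-connected network
posteriors = softmax(h @ w_out)        # output network
```

Because the width-3 convolution consumes two frames, the output has 18 rows of class posteriors, each summing to one; a transcription or classification head would consume these posteriors.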