Patent classifications
G10L15/075
Method and apparatus for evaluating user intention understanding satisfaction, electronic device and storage medium
A method and apparatus for generating a user intention understanding satisfaction evaluation model, a method and apparatus for evaluating user intention understanding satisfaction, an electronic device, and a storage medium are provided, relating to intelligent voice recognition and knowledge graphs. The method for generating the evaluation model includes: acquiring a plurality of sets of intention understanding data, at least one set of which comprises a plurality of sequences corresponding to multi-round behaviors of an intelligent device in multi-round man-machine interactions; and learning the plurality of sets of intention understanding data through a first machine learning model to obtain the user intention understanding satisfaction evaluation model, wherein the model is configured to evaluate the intelligent device's user intention understanding satisfaction in the multi-round man-machine interactions according to the plurality of sequences corresponding to those interactions.
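The abstract leaves the "first machine learning model" unspecified. A minimal sketch of the training idea, assuming hypothetical per-round behavior labels and substituting a tiny perceptron for the unnamed model, might look like this:

```python
# Illustrative sketch only: the behavior names, features, and the
# perceptron are assumptions standing in for the patent's unspecified
# "first machine learning model".

BEHAVIOURS = ["answered", "asked_again", "rephrased", "abandoned"]

def featurize(rounds):
    """Count how often each behavior occurs in one multi-round interaction."""
    return [rounds.count(b) for b in BEHAVIOURS]

def train(data, labels, epochs=20, lr=0.1):
    """Tiny perceptron over behavior counts (+1 satisfied / -1 not)."""
    w = [0.0] * len(BEHAVIOURS)
    b = 0.0
    for _ in range(epochs):
        for rounds, y in zip(data, labels):
            x = featurize(rounds)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # perceptron update on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def satisfied(model, rounds):
    """Evaluate satisfaction for one interaction's behavior sequence."""
    w, b = model
    x = featurize(rounds)
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

data = [
    ["answered", "answered"],                   # smooth interaction
    ["asked_again", "rephrased", "abandoned"],  # user gave up
    ["answered", "asked_again", "answered"],
    ["rephrased", "rephrased", "abandoned"],
]
labels = [1, -1, 1, -1]
model = train(data, labels)
```

The key point the sketch preserves is that the model consumes whole behavior sequences from multi-round interactions, not single utterances.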
Methods and systems for predicting non-default actions against unstructured utterances
A method is provided for adaptively predicting non-default actions for unstructured utterances by an automated assistant operating in a computing system. The method includes: extracting voice features upon receiving an input utterance from at least one speaker through an automatic speech recognition (ASR) device; identifying the input utterance as an unstructured utterance based on the extracted voice features and the mapping drawn by the ASR between the input utterance and one or more default actions; and obtaining, through a dynamic Bayesian network (DBN), at least one probable action to be performed in response to the unstructured utterance. The method further includes providing the at least one probable action obtained by the DBN to the speaker in order of the posterior probability of each action.
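The ranking step is ordinary Bayesian posterior inference. A minimal sketch, with illustrative action names, priors, and likelihoods in place of the DBN's time-sliced computation:

```python
# Illustrative sketch: actions, priors, and likelihoods are assumed
# numbers; a real DBN would derive the likelihoods from voice-features
# across time slices.

def rank_actions(priors, likelihoods):
    """Return actions sorted by posterior P(action | utterance), via Bayes' rule."""
    evidence = sum(priors[a] * likelihoods[a] for a in priors)
    posterior = {a: priors[a] * likelihoods[a] / evidence for a in priors}
    return sorted(posterior.items(), key=lambda kv: kv[1], reverse=True)

priors = {"play_music": 0.5, "set_alarm": 0.3, "call_contact": 0.2}
likelihoods = {"play_music": 0.1, "set_alarm": 0.6, "call_contact": 0.3}

ranked = rank_actions(priors, likelihoods)  # highest posterior first
```

The speaker would then be offered the candidate actions in this posterior order, as the abstract describes.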
Customizable speech recognition system
Methods and systems are provided for generating a customized speech recognition neural network system composed of an adapted automatic speech recognition neural network and an adapted language model neural network. Each network is first trained in a generic domain and then adapted to a target domain. Such a customized speech recognition neural network system can be used to understand input vocal commands.
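The train-generic-then-adapt recipe can be illustrated without neural networks at all. A toy bigram language model, an assumption standing in for the patent's language model network, shows the same two-stage idea:

```python
# Illustrative sketch: a count-based bigram model and interpolation
# weight stand in for neural pretraining and fine-tuning.

from collections import Counter

def bigram_counts(corpus):
    """Count word bigrams over a list of sentences."""
    c = Counter()
    for sent in corpus:
        words = sent.split()
        c.update(zip(words, words[1:]))
    return c

def adapt(generic, target, weight=0.7):
    """Blend target-domain counts into the generic model."""
    model = Counter()
    for bg, n in generic.items():
        model[bg] += (1 - weight) * n
    for bg, n in target.items():
        model[bg] += weight * n
    return model

generic = bigram_counts(["turn on the light", "turn off the light"])
target = bigram_counts(["turn on the centrifuge", "start the centrifuge"])
adapted = adapt(generic, target)
```

After adaptation, target-domain continuations outweigh generic ones, which is the behavior the customized system needs for domain-specific vocal commands.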
Gathering user's speech samples
Disclosed is a method of gathering a user's speech samples. According to an embodiment of the disclosure, the method gathers a speaker's speech data obtained while talking on a mobile terminal, together with text data generated from that speech data, as training data for generating a speech synthesis model. The method may be applied in connection with artificial intelligence (AI) modules, unmanned aerial vehicles (UAVs), robots, augmented reality (AR) devices, virtual reality (VR) devices, and 5G service-related devices.
Filtering directive invoking vocal utterances
Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: receiving, from a user, voice data defining a candidate directive-invoking vocal utterance for invoking a directive to execute a first text-based command to perform a first computer function of a computer system, wherein the candidate directive-invoking vocal utterance includes at least one word or phrase of the text-based command, wherein the computer system is configured to perform the first computer function in response to the first text-based command, and wherein the computer system is configured to perform a second computer function in response to a second text-based command; and determining, based on machine logic, whether a word or phrase of the candidate vocal utterance sounds confusingly similar to a speech rendering of a word or phrase defining the second text-based command.
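The "confusingly similar" check can be sketched with plain string similarity. This is an assumption: the patent's "machine logic" is unspecified, and a production system would compare phoneme sequences rather than spellings, but `difflib` conveys the filtering idea:

```python
# Illustrative sketch: difflib string similarity stands in for the
# patent's unspecified "machine logic"; the 0.8 threshold is assumed.

from difflib import SequenceMatcher

def confusingly_similar(candidate, other_command, threshold=0.8):
    """Flag a candidate utterance whose words are too close to words
    of another registered command."""
    for w1 in candidate.lower().split():
        for w2 in other_command.lower().split():
            if SequenceMatcher(None, w1, w2).ratio() >= threshold:
                return True
    return False
```

Usage: a candidate like "delete the file" would be rejected against an existing "deletes all files" command, while "play music" passes against "send email".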
Method and apparatus for correcting failures in automated speech recognition systems
Systems and methods are disclosed for correcting errors in ASR transcriptions. For an incorrect transcription, different words or phrases from the transcription, and/or related words or phrases, are submitted as hint words to the ASR system, and the voice query is submitted again to determine new transcriptions. This process is repeated with different transcription terms until a different, more accurate transcription is generated, increasing the accuracy of ASR systems.
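The retry loop is simple to sketch. Here `asr` is a hypothetical stand-in for an ASR client that accepts hint words, and `toy_asr` is a fabricated example whose behavior is only illustrative:

```python
# Illustrative sketch of the correction loop: re-submit the voice query
# with successive transcription words as hints until a new transcription
# appears. The `asr` callable and `toy_asr` behavior are assumptions.

def correct_transcription(asr, audio, bad_transcription):
    seen = {bad_transcription}
    for hint in bad_transcription.split():
        candidate = asr(audio, hints=[hint])
        if candidate not in seen:
            return candidate       # a different, hopefully better, result
        seen.add(candidate)
    return bad_transcription       # no improvement found

def toy_asr(audio, hints=()):
    # Pretend the hint "wetter" biases recognition back to "weather".
    return "what's the weather" if "wetter" in hints else "what's the wetter"

fixed = correct_transcription(toy_asr, b"...", "what's the wetter")
```

A fuller implementation would also try related words (synonyms, homophones) as hints, as the abstract notes.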
Streaming action fulfillment based on partial hypotheses
A method for streaming action fulfillment receives audio data corresponding to an utterance, where the utterance includes a query to perform an action that requires a sequence of sub-actions to fulfill. While receiving the audio data, but before receiving an end-of-speech condition, the method processes the audio data to generate intermediate automated speech recognition (ASR) results and performs partial query interpretation on the intermediate ASR results to determine whether they identify an application type needed to perform the action. When the intermediate ASR results identify a particular application type, the method performs a first sub-action in the sequence by launching a first application, associated with that application type, to execute on the user device. In response to receiving an end-of-speech condition, the method fulfills performance of the action.
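The early-launch behavior can be sketched by scanning a stream of partial hypotheses for an application type. The keyword table and app names below are assumptions for illustration:

```python
# Illustrative sketch: intermediate ASR results arrive before
# end-of-speech; once one identifies an application type, the matching
# app is "launched" as the first sub-action. The keyword-to-app table
# is assumed.

APP_TYPES = {"play": "music_app", "navigate": "maps_app", "email": "mail_app"}

def stream_fulfill(partial_results):
    launched = []
    for partial in partial_results:          # before end-of-speech
        for keyword, app in APP_TYPES.items():
            if keyword in partial.split() and app not in launched:
                launched.append(app)         # first sub-action: launch app
    return launched

launched = stream_fulfill(["play", "play some", "play some jazz"])
```

Note the app launches on the very first partial result, well before the full query "play some jazz" is available; the remaining sub-actions would run once end-of-speech arrives.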
Apparatus and method for providing voice assistant service
Provided are an electronic device and method for providing a voice assistant service. The method, performed by the electronic device, includes: obtaining a voice of a user; obtaining voice analysis information by inputting the voice of the user to a natural language understanding model; determining, according to a preset criterion and based on the obtained voice analysis information, whether a response operation with respect to the voice of the user is performable; and, upon determining that the response operation is not performable, outputting a series of guide messages for learning the response operation related to the voice of the user.
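The fall-back path admits a short sketch. The confidence threshold standing in for the "preset criterion", the NLU output shape, and the guide-message wording are all assumptions:

```python
# Illustrative sketch: if NLU confidence misses an assumed threshold,
# emit guide messages so the user can teach the device the response
# operation. Threshold, dict keys, and messages are assumptions.

CONFIDENCE_THRESHOLD = 0.6

def handle_voice(analysis):
    """`analysis` mimics NLU output: intent, confidence, raw utterance."""
    if analysis["confidence"] >= CONFIDENCE_THRESHOLD:
        return ["performing: " + analysis["intent"]]
    return [                                  # guide messages for learning
        "I don't know how to do that yet.",
        "Please show me the steps for: " + analysis["utterance"],
    ]

msgs = handle_voice({"intent": "unknown", "confidence": 0.2,
                     "utterance": "dim the porch light"})
```

When the criterion is met, the device simply performs the response operation instead of prompting.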
System for creating speaker model based on vocal sounds for a speaker recognition system, computer program product, and controller, using two neural networks
According to one embodiment, a system for creating a speaker model includes one or more processors. The processors change a part of the network parameters from an input layer to a predetermined intermediate layer based on a plurality of patterns, and input a piece of speech into each of a plurality of neural networks so as to obtain a plurality of outputs from the intermediate layer, where the part of the network parameters of each neural network is changed based on one of the plurality of patterns. The processors create a speaker model, with respect to one or more words detected from the speech, based on the outputs.
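The structure of the embodiment can be sketched with a stand-in network. A single linear layer replaces the layers up to the intermediate layer, multiplicative scales replace the perturbation patterns, and element-wise averaging is an assumed pooling step; none of these specifics come from the abstract:

```python
# Illustrative sketch: the same speech passes through several copies of
# a network whose early-layer weights are perturbed by different
# patterns; the intermediate-layer outputs are pooled into a speaker
# model. The linear layer, scale patterns, and mean pooling are assumed.

def intermediate_output(speech, weights):
    """Stand-in for the layers from input to the intermediate layer."""
    return [sum(w * x for w, x in zip(row, speech)) for row in weights]

def create_speaker_model(speech, base_weights, patterns):
    outputs = []
    for scale in patterns:                 # each pattern perturbs the weights
        perturbed = [[w * scale for w in row] for row in base_weights]
        outputs.append(intermediate_output(speech, perturbed))
    # Pool the per-pattern outputs (element-wise mean) into one model.
    dim = len(outputs[0])
    return [sum(o[i] for o in outputs) / len(outputs) for i in range(dim)]

speech = [1.0, 2.0]
base = [[0.5, 0.5], [1.0, -1.0]]
patterns = [0.8, 1.0, 1.2]
model = create_speaker_model(speech, base, patterns)
```

The point preserved from the abstract is that one utterance yields multiple intermediate-layer views, one per pattern, which together form the speaker model.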
Speech recognition systems and methods
A speech processing system and a method therefor are provided. The speech processing system may capture one or more speech signals, each including at least one dialogue uttered by a user. Dialogues may be extracted from the one or more speech signals, and frequently uttered dialogues may be identified over a period of time: the set of dialogues uttered by the user more times during the period than other dialogues. A local language model and a local acoustic model may be generated based, at least in part, on the frequently uttered dialogues, and the one or more speech signals may be processed based, at least in part, on the local language model and the local acoustic model.
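The selection of frequently uttered dialogues is a counting step. A minimal sketch, assuming a hypothetical top-k cutoff in place of the abstract's unspecified selection rule:

```python
# Illustrative sketch: count each dialogue over an observation window
# and keep the most frequent ones; the top_k cutoff is an assumption.
# The selected set would then seed the local language/acoustic models.

from collections import Counter

def frequent_dialogues(dialogues, top_k=2):
    counts = Counter(dialogues)
    return [d for d, _ in counts.most_common(top_k)]

log = ["turn on tv", "what time is it", "turn on tv",
       "turn on tv", "what time is it", "call mom"]
frequent = frequent_dialogues(log)
```

Restricting the local models to this frequent set is what lets them stay small while still covering most of the user's actual utterances.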