G10L2015/088

DEVICE INCLUDING SPEECH RECOGNITION FUNCTION AND METHOD OF RECOGNIZING SPEECH
20180005627 · 2018-01-04

A device including a speech recognition function which recognizes speech from a user includes: a loudspeaker which outputs speech to a space; a microphone which collects speech in the space; a first speech recognition unit which recognizes the speech collected by the microphone; a command control unit which issues a command for controlling the device, based on the speech recognized by the first speech recognition unit; and a control unit which prohibits the command control unit from issuing the command, based on the speech to be output from the loudspeaker.
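A minimal sketch of the gating idea described above, not the patented implementation: a controller suppresses command issuance while the recognized text matches speech the device itself is currently playing through its loudspeaker, so the device cannot trigger itself. All class and method names are illustrative assumptions.

```python
# Illustrative sketch: suppress commands triggered by the device's own output.
class CommandController:
    def __init__(self):
        self._currently_playing = set()  # phrases being output by the loudspeaker

    def start_playback(self, phrase: str) -> None:
        self._currently_playing.add(phrase.lower())

    def stop_playback(self, phrase: str) -> None:
        self._currently_playing.discard(phrase.lower())

    def issue_command(self, recognized: str):
        # Prohibit issuance when the recognized speech matches device output.
        if recognized.lower() in self._currently_playing:
            return None
        return {"command": recognized}


ctrl = CommandController()
ctrl.start_playback("turn off the lights")
blocked = ctrl.issue_command("turn off the lights")   # self-echo: prohibited
ctrl.stop_playback("turn off the lights")
allowed = ctrl.issue_command("turn off the lights")   # user speech: issued
```

A real device would compare audio signals or timing windows rather than exact text, but the control-flow shape is the same: the prohibition is keyed on what the loudspeaker is outputting.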

PERFORMING TASKS AND RETURNING AUDIO AND VISUAL ANSWERS BASED ON VOICE COMMAND
20180005631 · 2018-01-04

An artificial intelligence voice interactive system may provide various services to a user in response to a voice command by providing an interface between the system and a legacy system, enabling existing services to be provided in response to user speech without modifying the systems for those services. The system includes a central server, which may perform operations of: registering a plurality of service servers at the central server and storing registration information of each service server; analyzing voice command data from the user device and determining at least one task and corresponding service servers based on the analysis results; generating an instruction message based on the voice command data, the determined at least one task, and the registration information of the selected service servers; transmitting the generated instruction message to the selected service servers; and receiving task results including audio and video data from the selected service servers and outputting the task results through at least one device associated with the user device.
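The central-server flow above can be sketched as a registry plus a keyword-based dispatcher. This is an assumed, simplified stand-in: the registration fields, the keyword matching, and all names are illustrative, not the claimed method.

```python
# Illustrative sketch of the central-server registration and dispatch flow.
class CentralServer:
    def __init__(self):
        self.registry = {}  # service name -> registration info

    def register(self, name, info):
        # Store registration information for a service server.
        self.registry[name] = info

    def analyze(self, command_text):
        # Determine tasks whose registered keywords appear in the command.
        return [name for name, info in self.registry.items()
                if any(k in command_text for k in info["keywords"])]

    def build_instructions(self, command_text):
        # Generate one instruction message per selected service server.
        tasks = self.analyze(command_text)
        return [{"service": t,
                 "endpoint": self.registry[t]["endpoint"],
                 "command": command_text} for t in tasks]


server = CentralServer()
server.register("weather", {"keywords": ["weather"], "endpoint": "http://weather.local"})
server.register("music", {"keywords": ["play", "song"], "endpoint": "http://music.local"})
msgs = server.build_instructions("what is the weather today")
```

In the described system, each instruction message would then be transmitted to its service server, and the audio/video task results routed back to a device associated with the user.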

RECORDING SYSTEM FOR GENERATING A TRANSCRIPT OF A DIALOGUE
20180012619 · 2018-01-11

A recording system has a listener processor for automatically capturing events involving computer applications during a dialogue involving the user of the computer. The system generates a visual transcript of events on a timeline. It automatically detects the start of a dialogue, then detects events and determines whether they are configured as transcript events, before detecting the end of the dialogue. The system may associate dialogue events with audio clips, using meta tags.
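The listener flow can be sketched as: start recording on dialogue start, keep only events configured as transcript events (optionally tagged with an audio clip), and emit a time-ordered timeline on dialogue end. A hypothetical, simplified sketch, not the patented system:

```python
# Illustrative sketch: capture configured events into a timeline transcript.
from dataclasses import dataclass, field

@dataclass
class Recorder:
    transcript_event_types: set
    timeline: list = field(default_factory=list)
    active: bool = False

    def start_dialogue(self):
        self.active = True

    def on_event(self, t, kind, detail, audio_clip=None):
        # Record only events configured as transcript events.
        if self.active and kind in self.transcript_event_types:
            entry = {"t": t, "kind": kind, "detail": detail}
            if audio_clip:
                entry["audio"] = audio_clip  # associate event with an audio clip
            self.timeline.append(entry)

    def end_dialogue(self):
        self.active = False
        return sorted(self.timeline, key=lambda e: e["t"])  # ordered timeline


rec = Recorder(transcript_event_types={"app_open", "form_submit"})
rec.start_dialogue()
rec.on_event(2.0, "form_submit", "claim form", audio_clip="clip_0002.wav")
rec.on_event(1.0, "app_open", "CRM")
rec.on_event(1.5, "mouse_move", "ignored")  # not configured as a transcript event
transcript = rec.end_dialogue()
```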

SPEAKER VERIFICATION USING CO-LOCATION INFORMATION
20180012604 · 2018-01-11

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for identifying a user in a multi-user environment. One of the methods includes receiving, by a first user device, an audio signal encoding an utterance, obtaining, by the first user device, a first speaker model for a first user of the first user device, obtaining, by the first user device for a second user of a second user device that is co-located with the first user device, a second speaker model for the second user or a second score that indicates a respective likelihood that the utterance was spoken by the second user, and determining, by the first user device, that the utterance was spoken by the first user using (i) the first speaker model and the second speaker model or (ii) the first speaker model and the second score.
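The decision step can be sketched as scoring the utterance against the local user's speaker model and comparing that against either the co-located user's model or a score the second device shipped over. The similarity function below is an illustrative stand-in (real systems compare speaker embeddings), and all names are assumptions:

```python
# Illustrative sketch: attribute an utterance in a multi-user environment.
def score(model, utterance_features):
    # Stand-in similarity: negative squared distance between feature vectors.
    return -sum((m - u) ** 2 for m, u in zip(model, utterance_features))

def spoken_by_first_user(first_model, utterance, second_model=None, second_score=None):
    first = score(first_model, utterance)
    # Use (i) the second speaker model, or (ii) a score received from the
    # co-located second device.
    other = second_score if second_score is not None else score(second_model, utterance)
    return first > other


utt = [0.9, 0.1]          # features of the received utterance (illustrative)
alice = [1.0, 0.0]        # first user's speaker model
bob = [0.0, 1.0]          # co-located second user's speaker model
via_model = spoken_by_first_user(alice, utt, second_model=bob)
via_score = spoken_by_first_user(alice, utt, second_score=-5.0)
```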

Contextual assistant using mouse pointing or touch cues
11709653 · 2023-07-25

A method for a contextual assistant to use mouse pointing or touch cues includes receiving audio data corresponding to a query spoken by a user, receiving, in a graphical user interface displayed on a screen, a user input indication indicating a spatial input applied at a first location on the screen, and processing the audio data to determine a transcription of the query. The method also includes performing query interpretation on the transcription to determine that the query is referring to an object displayed on the screen without uniquely identifying the object, and requesting information about the object. The method further includes disambiguating, using the user input indication indicating the spatial input applied at the first location on the screen, the query to uniquely identify the object that the query is referring to, obtaining the information about the object requested by the query, and providing a response to the query.
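The disambiguation step can be sketched as: once query interpretation flags an ambiguous on-screen reference ("what is this?"), pick the displayed object whose bounding box contains, or lies nearest to, the spatial input. A simplified geometric sketch under assumed data shapes:

```python
# Illustrative sketch: resolve an ambiguous query using a tap location.
def disambiguate(objects, tap):
    """objects: list of (name, (x0, y0, x1, y1)) boxes; tap: (x, y)."""
    def distance(box):
        x0, y0, x1, y1 = box
        dx = max(x0 - tap[0], 0, tap[0] - x1)
        dy = max(y0 - tap[1], 0, tap[1] - y1)
        return dx * dx + dy * dy   # 0 when the tap falls inside the box
    return min(objects, key=lambda o: distance(o[1]))[0]


on_screen = [("eiffel_tower_photo", (0, 0, 100, 100)),
             ("settings_icon", (200, 0, 230, 30))]
target = disambiguate(on_screen, tap=(50, 60))
```

The uniquely identified object can then be used to fetch the requested information and form the response to the spoken query.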

Natural assistant interaction

Systems and processes for operating a virtual assistant to provide natural assistant interaction are provided. In accordance with one or more examples, a method includes, at an electronic device with one or more processors and memory: receiving a first audio stream including one or more utterances; determining whether the first audio stream includes a lexical trigger; generating one or more candidate text representations of the one or more utterances; determining whether at least one candidate text representation of the one or more candidate text representations is to be disregarded by the virtual assistant. If at least one candidate text representation is to be disregarded, one or more candidate intents are generated based on the candidate text representations other than the at least one candidate text representation to be disregarded.
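The filtering step can be sketched as dropping candidate transcriptions marked for disregard (for example, side speech not directed at the assistant) before intent generation. The scoring field and threshold below are illustrative assumptions, not the disclosed criteria:

```python
# Illustrative sketch: disregard some candidate text representations before
# generating candidate intents.
def generate_intents(candidates, disregard_threshold=0.5):
    # Keep only representations the assistant should not disregard.
    kept = [c for c in candidates if c["assistant_directed"] >= disregard_threshold]
    # Generate candidate intents from the retained representations only
    # (the intent rule here is a toy stand-in).
    return [{"text": c["text"],
             "intent": "play_music" if "play" in c["text"] else "unknown"}
            for c in kept]


candidates = [
    {"text": "play some jazz", "assistant_directed": 0.9},
    {"text": "did you feed the cat", "assistant_directed": 0.1},  # side speech
]
intents = generate_intents(candidates)
```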

Audio data processing method, apparatus and storage medium for detecting wake-up words based on multi-path audio from microphone array

An audio data processing method is provided. The method includes: obtaining multi-path audio data in an environmental space, obtaining a speech data set based on the multi-path audio data, and separately generating, in a plurality of enhancement directions, enhanced speech information corresponding to the speech data set; matching a speech hidden feature in the enhanced speech information with a target matching word, and determining an enhancement direction corresponding to the enhanced speech information having a highest degree of matching with the target matching word as a target audio direction; obtaining speech spectrum features in the enhanced speech information, and obtaining, from the speech spectrum features, a speech spectrum feature in the target audio direction; and performing speech authentication on the speech hidden feature and the speech spectrum feature that are in the target audio direction based on the target matching word, to obtain a target authentication result.
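The direction-selection step can be sketched as: enhance the multi-path audio in several directions (the beamforming itself is stubbed out here), score each enhanced signal against the target matching word, and keep the speech spectrum features from the best-matching direction. Data shapes and names are assumptions:

```python
# Illustrative sketch: pick the target audio direction by wake-word match.
def match_score(enhanced, target_word):
    # Stand-in for matching a speech hidden feature against the target word.
    return enhanced["keyword_scores"].get(target_word, 0.0)

def select_target_direction(enhanced_by_direction, target_word):
    # The enhancement direction with the highest match becomes the target
    # audio direction; its spectrum features are retained for authentication.
    best = max(enhanced_by_direction,
               key=lambda d: match_score(enhanced_by_direction[d], target_word))
    return best, enhanced_by_direction[best]["spectrum"]


enhanced = {
    0:   {"keyword_scores": {"hello_device": 0.2}, "spectrum": [0.1, 0.2]},
    90:  {"keyword_scores": {"hello_device": 0.8}, "spectrum": [0.5, 0.6]},
    180: {"keyword_scores": {"hello_device": 0.1}, "spectrum": [0.0, 0.1]},
}
direction, spectrum = select_target_direction(enhanced, "hello_device")
```

In the described method, the hidden feature and spectrum feature from the chosen direction then undergo a final speech authentication against the target matching word.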

ROBOT CONTROL DEVICE, ROBOT, ROBOT CONTROL METHOD, AND PROGRAM RECORDING MEDIUM
20180009118 · 2018-01-11

Disclosed are a robot control device and the like that improve the accuracy with which a robot starts listening to speech, without requiring a user to perform an operation. This robot control device is provided with: an action executing means which, upon detection of a person, determines an action to be executed with respect to said person, and performs control in such a way that a robot executes the action; an assessing means which, upon detection of a reaction from the person in response to the action determined by the action executing means, assesses the possibility that the person will talk to the robot, on the basis of the reaction; and an operation control means which controls an operating mode of the robot main body on the basis of the result of the assessment performed by the assessing means.

ENHANCED SPEECH ENDPOINTING
20180012591 · 2018-01-11

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data including an utterance, obtaining context data that indicates one or more expected speech recognition results, determining an expected speech recognition result based on the context data, receiving an intermediate speech recognition result generated by a speech recognition engine, comparing the intermediate speech recognition result to the expected speech recognition result for the audio data based on the context data, determining whether the intermediate speech recognition result corresponds to the expected speech recognition result for the audio data based on the context data, and setting an end of speech condition and providing a final speech recognition result in response to determining the intermediate speech recognition result matches the expected speech recognition result, the final speech recognition result including the one or more expected speech recognition results indicated by the context data.
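The endpointing loop can be sketched as: compare each intermediate recognition result against the expected results derived from context, and on a match set the end-of-speech condition immediately rather than waiting out a silence timer. A simplified sketch with assumed names and exact-match comparison:

```python
# Illustrative sketch: context-driven early endpointing.
def endpoint(intermediate_results, expected_results):
    for result in intermediate_results:
        # Compare the intermediate result to the expected results from context.
        if result in expected_results:
            # Match found: set end-of-speech and emit the final result now.
            return {"end_of_speech": True, "final": result}
    return {"end_of_speech": False, "final": None}


expected = {"yes", "no", "cancel"}   # e.g. context from a confirmation prompt
stream = ["y", "ye", "yes"]          # intermediate results from the recognizer
outcome = endpoint(stream, expected)
```

The benefit is latency: when context constrains the answer, recognition can be finalized as soon as an expected result appears, instead of after a fixed trailing-silence threshold.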

KEYWORD DETECTION MODELING USING CONTEXTUAL INFORMATION

Features are disclosed for detecting words in audio using contextual information in addition to automatic speech recognition results. A detection model can be generated and used to determine whether a particular word, such as a keyword or “wake word,” has been uttered. The detection model can operate on features derived from an audio signal, contextual information associated with generation of the audio signal, and the like. In some embodiments, the detection model can be customized for particular users or groups of users based on usage patterns associated with the users.
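A detection model that combines ASR-derived evidence with contextual features can be sketched as a weighted score against a threshold. The feature choices, weights, and threshold below are illustrative stand-ins, not trained or disclosed values:

```python
# Illustrative sketch: wake-word detection combining ASR and context features.
def wake_word_score(asr_confidence, device_just_prompted, user_wake_rate):
    # Weighted combination of acoustic/ASR evidence and contextual signals
    # (e.g. the device just prompted the user; this user's historical rate
    # of wake-word use). Weights are illustrative.
    return (0.6 * asr_confidence
            + 0.25 * (1.0 if device_just_prompted else 0.0)
            + 0.15 * user_wake_rate)

def detected(asr_confidence, device_just_prompted, user_wake_rate, threshold=0.5):
    return wake_word_score(asr_confidence, device_just_prompted, user_wake_rate) >= threshold


hit = detected(0.7, device_just_prompted=True, user_wake_rate=0.8)
miss = detected(0.3, device_just_prompted=False, user_wake_rate=0.2)
```

Per-user customization, as the abstract describes, could amount to learning these weights (or the threshold) from each user's usage patterns rather than fixing them globally.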