G10L15/285

Systems and methods for voice identification and analysis
11580986 · 2023-02-14

Obtaining configuration audio data including voice information for a plurality of meeting participants. Generating localization information indicating a respective location for each meeting participant. Generating a respective voiceprint for each meeting participant. Obtaining meeting audio data. Identifying a first meeting participant and a second meeting participant. Linking a first meeting participant identifier of the first meeting participant with a first segment of the meeting audio data, and linking a second meeting participant identifier of the second meeting participant with a second segment of the meeting audio data. Generating a GUI that indicates the respective locations of the first and second meeting participants and presents a first transcription of the first segment and a second transcription of the second segment, with the first transcription associated with the first meeting participant and the second transcription associated with the second meeting participant.
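The enrollment-then-linking flow above can be sketched as nearest-voiceprint matching. This is a minimal illustration, not the patented method: `make_voiceprint` averages feature frames as a stand-in for a real speaker-embedding model, and cosine similarity links each meeting segment to the closest enrolled participant.

```python
import numpy as np

def make_voiceprint(frames):
    # Average feature frames into one unit-norm embedding (a stand-in
    # for a real speaker-embedding model; frames is (n_frames, n_dims)).
    v = np.mean(frames, axis=0)
    return v / np.linalg.norm(v)

def link_segment(segment_embedding, voiceprints):
    # Return the participant identifier whose enrolled voiceprint is
    # closest (by cosine similarity) to the segment's embedding.
    best_id, best_sim = None, -1.0
    for participant_id, vp in voiceprints.items():
        sim = float(np.dot(segment_embedding, vp))
        if sim > best_sim:
            best_id, best_sim = participant_id, sim
    return best_id

# Enrollment from "configuration audio": two synthetic participants.
rng = np.random.default_rng(0)
alice = make_voiceprint(rng.normal(loc=[1.0, 0.0, 0.0], scale=0.1, size=(20, 3)))
bob = make_voiceprint(rng.normal(loc=[0.0, 1.0, 0.0], scale=0.1, size=(20, 3)))
voiceprints = {"alice": alice, "bob": bob}

# A meeting segment that sounds like Bob links to Bob's identifier.
segment = make_voiceprint(rng.normal(loc=[0.0, 1.0, 0.0], scale=0.1, size=(10, 3)))
print(link_segment(segment, voiceprints))  # → bob
```

A GUI layer would then place each linked transcription next to that participant's location.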

INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
20230045458 · 2023-02-09

An information processing apparatus of the present disclosure includes an acquisition unit that acquires respiration information indicating respiration of a user, and a determination unit that determines an operation amount regarding an operation by the user on the basis of the respiration of the user indicated by the respiration information acquired by the acquisition unit.
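One way the determination unit could derive an operation amount is a bounded linear mapping from respiration depth. The abstract leaves the concrete scaling open, so the function and its parameters here are purely illustrative:

```python
def operation_amount(breath_depth, max_depth=1.0, max_amount=100):
    # Map respiration depth (0..max_depth) linearly onto an operation
    # amount, e.g. a scroll speed or zoom level. Out-of-range readings
    # are clamped. A hypothetical mapping; the disclosure does not fix one.
    depth = min(max(breath_depth, 0.0), max_depth)
    return round(max_amount * depth / max_depth)

print(operation_amount(0.25))  # → 25
```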

CALL MANAGEMENT SYSTEM AND ITS SPEECH RECOGNITION CONTROL METHOD
20180012600 · 2018-01-11

A speech recognition server has a speech recognition engine and a mode control table that holds a speech recognition mode for each call. The speech recognition engine has a mode management unit to designate a speech recognition mode for a decoder, and an output analysis unit to analyze the recognition result data speech-to-text converted by speech recognition. The output analysis unit designates the speech recognition mode for the mode management unit in accordance with the result of analyzing the recognition result data, and the mode management unit in turn designates that mode for the decoder. Upon speech recognition of call data, it is thus possible to suppress hardware resource consumption while improving users' satisfaction.
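The analyze-then-designate loop can be sketched as follows. The class names mirror the abstract's units, but the keyword rule for choosing a mode is an invented placeholder for whatever analysis the engine actually performs:

```python
class ModeManagementUnit:
    # Holds the speech recognition mode per call, mirroring the
    # abstract's mode control table (names here are illustrative).
    def __init__(self):
        self.mode_table = {}

    def designate(self, call_id, mode):
        self.mode_table[call_id] = mode

class OutputAnalysisUnit:
    # Inspects recognition results and keeps the decoder in a cheap
    # mode unless the transcript suggests the call needs accuracy.
    KEYWORDS = {"operator", "complaint", "refund"}

    def analyze(self, call_id, transcript, mode_mgr):
        words = set(transcript.lower().split())
        mode = "high-accuracy" if words & self.KEYWORDS else "low-resource"
        mode_mgr.designate(call_id, mode)
        return mode

mgr = ModeManagementUnit()
analyzer = OutputAnalysisUnit()
print(analyzer.analyze("call-1", "I want a refund now", mgr))   # → high-accuracy
print(analyzer.analyze("call-2", "yes thank you goodbye", mgr)) # → low-resource
```

Keeping most calls in the low-resource mode is what lets the server trade hardware consumption against recognition quality per call.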

Small Footprint Multi-Channel Keyword Spotting
20230022800 · 2023-01-26

A method (800) to detect a hotword in a spoken utterance (120) includes receiving a sequence of input frames (210) characterizing streaming multi-channel audio (118). Each channel (119) of the streaming multi-channel audio includes respective audio features (510) captured by a separate dedicated microphone (107). For each input frame, the method includes processing, using a three-dimensional (3D) single value decomposition filter (SVDF) input layer (302) of a memorized neural network (300), the respective audio features of each channel in parallel and generating a corresponding multi-channel audio feature representation (420) based on a concatenation of the respective audio features (344). The method also includes generating, using sequentially-stacked SVDF layers (350), a probability score (360) indicating a presence of a hotword in the audio. The method also includes determining whether the probability score satisfies a threshold and, when satisfied, initiating a wake-up process on a user device (102).
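An SVDF node at its core is a rank-1 factorization: a feature filter projects each frame to a scalar, and a time filter combines a fixed-size memory of recent projections. The sketch below shows that per-frame update for a single node with random stand-in filters; it is not the patent's full 3D multi-channel layer stack:

```python
import numpy as np

def svdf_step(feature_filter, time_filter, memory, frame):
    # One SVDF node: project the current frame with the feature filter,
    # push the scalar into a fixed-size memory, then combine the whole
    # memory with the time filter to produce the node output.
    memory = np.roll(memory, -1)
    memory[-1] = np.dot(feature_filter, frame)
    return memory, float(np.dot(time_filter, memory))

rng = np.random.default_rng(1)
n_features, memory_size = 8, 4
feature_filter = rng.normal(size=n_features)
time_filter = rng.normal(size=memory_size)
memory = np.zeros(memory_size)

# Stream frames through the node, then squash the final score and
# threshold it, as the method does with its probability score.
score = 0.0
for frame in rng.normal(size=(10, n_features)):
    memory, score = svdf_step(feature_filter, time_filter, memory, frame)
probability = 1.0 / (1.0 + np.exp(-score))
if probability >= 0.5:
    print("initiate wake-up")
```

In the patented design this kind of node is stacked in layers, with the input layer consuming each channel's features in parallel before concatenation.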

Hotword detection on multiple devices
11557299 · 2023-01-17

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for hotword detection on multiple devices are disclosed. In one aspect, a method includes the actions of receiving, by a first computing device, audio data that corresponds to an utterance. The actions further include determining a first value corresponding to a likelihood that the utterance includes a hotword. The actions further include receiving a second value corresponding to a likelihood that the utterance includes the hotword, the second value being determined by a second computing device. The actions further include comparing the first value and the second value. The actions further include based on comparing the first value to the second value, initiating speech recognition processing on the audio data.
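The comparison step reduces to a small arbitration predicate. The tie-breaking policy below (the first device wins on equal scores) is an assumption for the sketch, not something the abstract specifies:

```python
def should_initiate_recognition(first_value, second_value):
    # The first device proceeds with speech recognition only when its
    # own hotword likelihood is at least as high as the neighbouring
    # device's; ties favour this device (an assumed policy).
    return first_value >= second_value

print(should_initiate_recognition(0.9, 0.4))  # → True
print(should_initiate_recognition(0.2, 0.7))  # → False
```

The point of the comparison is that only one of the nearby devices responds to a single utterance.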

GUIDANCE QUERY FOR CACHE SYSTEM
20230223027 · 2023-07-13 ·

A device may be configured to determine whether an audio file is a first type of audio file, one whose voice query can be recognized from a characteristic of the file itself, or a second type that may require speech recognition processing to recognize the associated voice query. To make this determination, a query filter associated with the device may be configured to access one or more guidance queries. Using the guidance queries, the device may classify the audio file as the first or second type after receiving only a portion of the file, thereby improving the speed at which the audio file can be processed.
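A toy version of that early classification is prefix matching against known guidance signatures. The byte signatures and the two type labels here are invented for illustration; the abstract does not say what characteristic the query filter actually inspects:

```python
# Hypothetical guidance queries: known signatures that can be matched
# from only the first bytes of an audio file, so the cheap path is
# taken without waiting for the full file or running ASR.
GUIDANCE_QUERIES = {b"GUIDE01", b"GUIDE02"}

def classify_audio(first_bytes):
    # First type: recognizable from the file's own characteristic
    # (here, a leading signature). Second type: needs full speech
    # recognition processing.
    for signature in GUIDANCE_QUERIES:
        if first_bytes.startswith(signature):
            return "first-type"
    return "second-type"

print(classify_audio(b"GUIDE01\x00rest-of-file"))  # → first-type
print(classify_audio(b"\x52\x49\x46\x46raw-audio"))  # → second-type
```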

Electronic apparatus for dynamic note matching and operating method of the same

Disclosed are an electronic apparatus for dynamic note matching (DNM) and an operating method thereof, the method including acquiring a first section sequence by reducing a first sequence extracted from an input signal based on at least one first section in which the respective values are successively arranged; acquiring a second section sequence reduced from a pre-stored second sequence based on at least one second section in which the respective values are successively arranged; and calculating a similarity between the first section sequence and the second section sequence.
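One reading of "sections in which the respective values are successively arranged" is run-length collapsing before comparison. Both the reduction rule and the LCS-based similarity below are assumptions for the sketch; the abstract fixes neither:

```python
def section_reduce(seq):
    # Collapse each run of identical successive values into a single
    # value, one reading of the abstract's "sections".
    reduced = []
    for v in seq:
        if not reduced or reduced[-1] != v:
            reduced.append(v)
    return reduced

def similarity(a, b):
    # Normalized longest-common-subsequence similarity between the two
    # section sequences (the measure itself is an assumed choice).
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n] / max(m, n)

first = section_reduce([60, 60, 62, 62, 62, 64, 64])  # notes from the input signal
second = section_reduce([60, 62, 62, 64])             # pre-stored reference notes
print(first, second, similarity(first, second))  # → [60, 62, 64] [60, 62, 64] 1.0
```

Reducing both sequences first makes the match robust to how long each note is held.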

Zero latency digital assistant

An electronic device can implement a zero-latency digital assistant by capturing audio input from a microphone and using a first processor to write audio data representing the captured audio input to a memory buffer. In response to detecting a user input while capturing the audio input, the device can determine whether the user input meets a predetermined criteria. If the user input meets the criteria, the device can use a second processor to identify and execute a task based on at least a portion of the contents of the memory buffer.
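The "zero latency" trick is that audio before the qualifying input is already buffered. A minimal sketch, with the two processors played by a writer loop and a later read, and an invented press-duration criterion:

```python
import collections

class AudioRingBuffer:
    # A fixed-size buffer the always-on (first) processor writes into;
    # the application (second) processor reads it back only after the
    # user input qualifies, so audio captured before the trigger is kept.
    def __init__(self, capacity_frames):
        self.frames = collections.deque(maxlen=capacity_frames)

    def write(self, frame):
        self.frames.append(frame)

    def snapshot(self):
        return list(self.frames)

def meets_criteria(user_input):
    # Assumed predicate, e.g. a button held past a threshold duration.
    return user_input.get("press_ms", 0) >= 500

buffer = AudioRingBuffer(capacity_frames=3)
for frame in ["f1", "f2", "f3", "f4"]:   # continuous capture overwrites f1
    buffer.write(frame)

if meets_criteria({"press_ms": 750}):
    audio = buffer.snapshot()            # second processor picks up here
    print(audio)  # → ['f2', 'f3', 'f4']
```

Because the buffer already holds the utterance, the task identification step starts with no capture delay.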

Hotphrase triggering based on a sequence of detections
11694685 · 2023-07-04

A method includes receiving audio data corresponding to an utterance spoken by a user and captured by a user device. The utterance includes a command for a digital assistant to perform an operation. The method also includes determining, using a hotphrase detector configured to detect each trigger word in a set of trigger words associated with a hotphrase, whether any of the trigger words are detected in the audio data during a corresponding fixed-duration time window. The method also includes identifying the hotphrase in the audio data corresponding to the utterance when each other trigger word in the set of trigger words was also detected in the audio data. The method also includes triggering an automated speech recognizer to perform speech recognition on the audio data when the hotphrase is identified.
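The sequence-of-detections logic can be sketched as a window check over timestamped trigger-word hits. Requiring all detections inside one fixed-duration window is an assumed interpretation of the abstract's per-word windows:

```python
def hotphrase_detected(detections, trigger_words, window_s=2.0):
    # detections is a list of (timestamp, word) pairs from the detector.
    # The hotphrase fires only when every trigger word was detected and
    # all detections fall within one fixed-duration window.
    hits = {}
    for ts, word in detections:
        if word in trigger_words:
            hits[word] = ts
    if set(hits) != set(trigger_words):
        return False
    return max(hits.values()) - min(hits.values()) <= window_s

triggers = {"hey", "assistant"}
print(hotphrase_detected([(0.1, "hey"), (0.6, "assistant")], triggers))  # → True
print(hotphrase_detected([(0.1, "hey"), (5.0, "assistant")], triggers))  # → False
```

Only when the predicate fires does the (more expensive) automated speech recognizer run on the audio.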

Electronic device and method for controlling the electronic device

Disclosed are an electronic device capable of efficiently performing speech recognition and natural language understanding, and a method for controlling the same. The electronic device includes: a microphone; a non-volatile memory configured to store virtual assistant model data comprising data classified according to a plurality of domains and data commonly used for the plurality of domains; a volatile memory; and a processor configured to: based on receiving, through the microphone, a trigger input to perform speech recognition on a user speech, initiate loading the virtual assistant model data from the non-volatile memory into the volatile memory; load, into the volatile memory, first data from among the data classified according to the plurality of domains; and, while loading the first data, load at least a part of the data commonly used for the plurality of domains into the volatile memory.
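The staged loading order can be sketched as below. The priority ordering of domains and the representation of "volatile memory" as a list are assumptions made for illustration; real loading would be concurrent rather than sequential:

```python
def load_assistant_model(trigger_received, domains, domain_data, common_data):
    # On the trigger input, move the per-domain data for the expected
    # first domain into "volatile memory" first, then the shared data,
    # so recognition can start before everything is resident.
    volatile = []
    if not trigger_received:
        return volatile
    first_domain = domains[0]            # assumed priority ordering
    volatile.append(("domain", first_domain, domain_data[first_domain]))
    for chunk in common_data:            # loaded while domain data settles
        volatile.append(("common", chunk))
    return volatile

loaded = load_assistant_model(
    trigger_received=True,
    domains=["music", "weather"],
    domain_data={"music": "music-model", "weather": "weather-model"},
    common_data=["tokenizer", "acoustic-model"],
)
print([entry[0] for entry in loaded])  # → ['domain', 'common', 'common']
```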