G10L15/1815

REDUCING THE NEED FOR MANUAL START/END-POINTING AND TRIGGER PHRASES
20180012596 · 2018-01-11 ·

Systems and processes for selectively processing and responding to a spoken user input are provided. In one example, audio input containing a spoken user input can be received at a user device. The spoken user input can be identified from the audio input by identifying start and end-points of the spoken user input. It can be determined whether or not the spoken user input was intended for a virtual assistant based on contextual information. The determination can be made using a rule-based system or a probabilistic system. If it is determined that the spoken user input was intended for the virtual assistant, the spoken user input can be processed and an appropriate response can be generated. If it is instead determined that the spoken user input was not intended for the virtual assistant, the spoken user input can be ignored and/or no response can be generated.
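The rule-based path described above can be sketched as a short decision function. All rule names, context keys, and thresholds below are invented for illustration; the abstract does not specify which contextual signals are used.

```python
# Hypothetical sketch of the rule-based determination: deciding whether a
# spoken input was intended for the virtual assistant from contextual
# information. Keys and thresholds are illustrative, not the patented rules.

def intended_for_assistant(context):
    """Return True if the spoken input appears directed at the assistant."""
    # Rule 1: the user was facing the device while speaking.
    if context.get("user_facing_device"):
        return True
    # Rule 2: the input closely follows a previous assistant response.
    if context.get("seconds_since_last_response", float("inf")) < 5.0:
        return True
    # Rule 3: the input resembles a known command pattern.
    if context.get("matches_command_grammar"):
        return True
    # Otherwise the input is ignored and no response is generated.
    return False
```

A probabilistic system would instead combine such signals into a score and compare it to a confidence threshold.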

Contextual biasing of neural language models using metadata from a natural language understanding component and embedded recent history

Techniques for implementing a chatbot that utilizes context embeddings are described. An exemplary method includes determining a next turn by: applying a language model to the utterance to determine a probability of a sequence of words; generating a context embedding for the utterance based at least on one or more of: a dialog act as defined by a chatbot definition of the chatbot, a topic vector identifying a domain of the chatbot, a previous chatbot response, and one or more slot options; performing neural language model rescoring using the determined probability of a sequence of words as a word embedding and the generated context embedding to predict a hypothesis; determining at least a name of a slot and type to be fulfilled based at least in part on the hypothesis and the chatbot definition; and determining a next turn based at least in part on the chatbot definition, any previous state, and the name of the slot and type to be fulfilled.
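The rescoring step can be illustrated with a toy version that blends each hypothesis's language-model score with its similarity to a context vector. The bag-of-words "embeddings" and the blending weight below are stand-ins for the neural embeddings in the abstract.

```python
# Illustrative sketch (not the patented implementation) of rescoring
# hypotheses with a context embedding. Toy bag-of-words vectors replace
# the neural embeddings; alpha is an invented interpolation weight.

def bow_vector(text, vocab):
    """Count occurrences of each vocabulary word in the text."""
    return [text.split().count(w) for w in vocab]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def rescore(hypotheses, context_text, vocab, alpha=0.5):
    """Pick the hypothesis with the best blended LM + context score."""
    ctx = bow_vector(context_text, vocab)
    best, best_score = None, float("-inf")
    for text, lm_score in hypotheses:
        score = (1 - alpha) * lm_score + alpha * dot(bow_vector(text, vocab), ctx)
        if score > best_score:
            best, best_score = text, score
    return best
```

In this sketch a context drawn from the dialog state ("book a flight...") pulls the decision toward the acoustically weaker but contextually consistent hypothesis.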

Machine learning dataset generation using a natural language processing technique

A server can receive a plurality of records at a database such that each record is associated with a phone call and includes at least one request generated based on a transcript of the phone call. The server can generate a training dataset based on the plurality of records. The server can further train a binary classification model using the training dataset. Next, the server can receive a live transcript of a phone call in progress. The server can generate at least one live request based on the live transcript using a natural language processing module of the server. The server can provide the at least one live request to the binary classification model as input to generate a prediction. Lastly, the server can transmit the prediction to an entity receiving the phone call in progress. The prediction can cause a transfer of the call to a chatbot.
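The dataset-generation step can be sketched as flattening call records into labeled request pairs. The record field names below ("requests", "transferred_to_chatbot") are assumed for illustration; the abstract does not name the schema.

```python
# Minimal sketch of generating a binary-classification training dataset
# from call records. Field names are hypothetical; the label marks whether
# the call was ultimately handled by a chatbot.

def build_training_dataset(records):
    """Flatten call records into (request_text, label) training pairs."""
    dataset = []
    for record in records:
        label = 1 if record["transferred_to_chatbot"] else 0
        for request in record["requests"]:
            dataset.append((request, label))
    return dataset
```

The resulting pairs would then be fed to any off-the-shelf binary classifier; at inference time, live requests extracted from the in-progress transcript are scored the same way.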

Contextualized speech to text conversion

Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: determining, in performance of an interactive voice response (IVR) session, prompting data for presenting to a user, and storing text based data defining the prompting data into a data repository; presenting the prompting data to the user; receiving return voice string data from the user in response to the prompting data; generating a plurality of candidate text strings associated to the return voice string of the user; examining the text based data defining the prompting data; augmenting the plurality of candidate text strings in dependence on a result of the examining to provide a plurality of augmented candidate text strings associated to the return voice string data; evaluating respective ones of the plurality of augmented candidate text strings associated to the return voice string data; and selecting one of the augmented candidate text strings as a returned transcription associated to the return voice string data.
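The augmentation-and-selection steps can be illustrated by re-ranking candidate transcriptions against the IVR prompt text. The word-overlap heuristic and the 0.1 boost below are invented; the abstract leaves the augmentation function unspecified.

```python
# Hedged sketch of biasing transcription candidates with the stored IVR
# prompt text. The overlap heuristic and boost weight are illustrative.

def augment_candidates(candidates, prompt_text):
    """Re-rank (text, score) candidates by word overlap with the prompt."""
    prompt_words = set(prompt_text.lower().split())
    augmented = []
    for text, score in candidates:
        overlap = len(set(text.lower().split()) & prompt_words)
        augmented.append((text, score + 0.1 * overlap))
    # Select the highest-scoring augmented candidate as the transcription.
    return max(augmented, key=lambda pair: pair[1])[0]
```

Here a prompt like "please say your account number" lifts the candidate that echoes the prompt's vocabulary over an acoustically higher-scoring misrecognition.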

SIMPLE AFFIRMATIVE RESPONSE OPERATING SYSTEM
20180012595 · 2018-01-11 ·

A simple affirmative response operating system for selecting a data item from a list of options using a unique affirmative action. Text-based labels in a listing of content are converted to speech using an embedded text-to-speech engine and an audio output of a first converted label is provided. A listening state is entered into for a predefined pause time to await receipt of the simple affirmative action. If the simple affirmative action is performed during the predefined pause time, an associated content item is selected for output. If the simple affirmative action is not performed during the predefined pause time, an audio output of a next converted label in the list is provided. This protocol may be used to control a variety of computing devices safely and efficiently while a user is distracted or disabled from using traditional input methods.
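The selection protocol above reduces to a short loop: speak a label, pause, and select on an affirmative action. In this sketch the text-to-speech output and the listening state are stubbed with callables; the names are illustrative.

```python
# Illustrative sketch of the affirmative-response selection protocol.
# `speak` stands in for the text-to-speech engine; `affirmed_during_pause`
# stands in for the listening state over the predefined pause time.

def select_item(labels, speak, affirmed_during_pause):
    """Cycle through labels until the user affirms one; None if none chosen."""
    for label in labels:
        speak(label)                 # audio output of the converted label
        if affirmed_during_pause():  # await the simple affirmative action
            return label             # affirmative action: select this item
    return None
```

A real device would loop back to the first label or time out instead of returning None, but the control flow is the same.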

Removal of identifying traits of a user in a virtual environment

A virtual environment platform may receive, from a user device, a request to access a virtual reality (VR) environment and may verify, based on the request, a user of the user device to allow the user device access to the VR environment. The virtual environment platform may receive, after verifying the user of the user device, user voice input and user handwritten input from the user device. The virtual environment platform may generate processed user speech by processing the user voice input, wherein a characteristic of the processed user speech and a corresponding characteristic of the user voice input are different, and may generate formatted user text by processing the user handwritten input, wherein the formatted user text is machine-encoded text. The virtual environment platform may cause the processed user speech to be audibly presented and the formatted user text to be visually presented in the VR environment.
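One way to make a characteristic of the processed speech differ from the input, as the abstract requires, is to shift its pitch. The naive resampling below is purely a toy stand-in; a real system would use a proper pitch-shifting algorithm that preserves duration.

```python
# Toy sketch of altering one characteristic of user speech (pitch, via
# naive resampling of a sample list) so the processed output differs
# from the original voice input. Not a production voice-anonymization method.

def shift_pitch(samples, factor):
    """Resample audio samples by `factor` (>1 raises pitch, <1 lowers it)."""
    return [samples[int(i * factor)] for i in range(int(len(samples) / factor))]
```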

Phonetic comparison for virtual assistants

In an approach for optimizing an intelligent virtual assistant by using phonetic comparison to find a response stored in a local database, a processor receives an audio input on a computing device. A processor transcribes the audio input to text. A processor compares the text to a set of user queries and commands in a local database of the computing device using a phonetic algorithm. A processor determines whether a user query or command of the set of user queries and commands meets a pre-defined threshold of similarity. Responsive to determining that the user query or command meets the pre-defined threshold of similarity, a processor identifies an intention of a set of intentions stored in the local database corresponding to the user query or command. A processor identifies a response of a set of responses in the local database corresponding to the intention. A processor outputs the response audibly.
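The local phonetic lookup can be illustrated with the classic Soundex algorithm as a stand-in, since the abstract does not name a specific phonetic algorithm. The word-by-word matching rule below is likewise an invented stand-in for the pre-defined similarity threshold.

```python
# Hedged sketch of phonetic comparison for the local-database lookup,
# using classic Soundex in place of the unspecified phonetic algorithm.

def soundex(word):
    """Return the 4-character Soundex code of an ASCII word."""
    groups = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
              "l": "4", "mn": "5", "r": "6"}
    def code(ch):
        for letters, digit in groups.items():
            if ch in letters:
                return digit
        return ""  # vowels and h, w, y carry no code
    word = word.lower()
    result = word[0].upper()
    prev = code(word[0])
    for ch in word[1:]:
        digit = code(ch)
        if digit and digit != prev:
            result += digit
        if ch not in "hw":  # h and w do not separate equal codes
            prev = digit
    return (result + "000")[:4]

def phonetic_match(text, stored_query):
    """True if every word pair matches phonetically (a toy threshold)."""
    a, b = text.lower().split(), stored_query.lower().split()
    return len(a) == len(b) and all(soundex(x) == soundex(y)
                                    for x, y in zip(a, b))
```

A transcription error such as "wether report" would still phonetically match the stored query "weather report", letting the device answer from its local response table without a round trip to a server.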

Electronic device and method for providing conversational service

A method, performed by an electronic device, of providing a conversational service includes: receiving an utterance input; identifying a temporal expression representing a time in a text obtained from the utterance input; determining a time point related to the utterance input based on the temporal expression; selecting a database corresponding to the determined time point from among a plurality of databases storing information about a conversation history of a user using the conversational service; interpreting the text based on information about the conversation history of the user, the conversation history information being acquired from the selected database; generating a response message to the utterance input based on a result of the interpreting; and outputting the generated response message.
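The database-selection step can be sketched as mapping temporal phrases to history periods. The phrase table and the database names below are invented for illustration; the abstract only specifies that a database is chosen by the time point of the utterance.

```python
# Hypothetical sketch of routing an utterance to a conversation-history
# database by its temporal expression. Phrases and period names invented.

def select_history_db(text, databases):
    """Pick the history database whose period matches a temporal phrase."""
    temporal_map = {
        "yesterday": "recent",
        "last week": "recent",
        "last year": "archive",
        "years ago": "archive",
    }
    for phrase, period in temporal_map.items():
        if phrase in text.lower():
            return databases[period]
    return databases["recent"]  # default to the most recent history
```

The text would then be interpreted against the conversation history retrieved from the selected database before a response message is generated.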

SEMIAUTOMATED RELAY METHOD AND APPARATUS
20230005484 · 2023-01-05 ·

A relay for captioning a hearing user's (HU's) voice signal during a phone call between an HU and a hearing assisted user (AU), the HU using an HU device and the AU using an AU device where the HU voice signal is transmitted from the HU device to the AU device, the relay comprising a display screen, a processor linked to the display and programmed to perform the steps of receiving the HU voice signal from the AU device, transmitting the HU voice signal to a remote automatic speech recognition (ASR) server running ASR software that converts the HU voice signal to ASR generated text, the remote ASR server located at a remote location from the relay, receiving the ASR generated text from the ASR server, presenting the ASR generated text for viewing by a call assistant (CA) via the display and transmitting the ASR generated text to the AU device.
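The relay's step sequence can be sketched schematically. The object interfaces below (`recognize`, `show`, `send`) are invented stubs, not the patent's architecture.

```python
# Schematic sketch of the relay steps: receive the HU voice signal, have a
# remote ASR server convert it to text, present the text to the call
# assistant, and transmit it to the AU device. All interfaces are stubs.

def caption_call(hu_voice_signal, asr_server, ca_display, au_device):
    """Relay one HU voice segment as captions to the AU device."""
    text = asr_server.recognize(hu_voice_signal)  # remote ASR conversion
    ca_display.show(text)                         # CA views the ASR text
    au_device.send(text)                          # captions reach the AU
    return text
```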

INTELLIGENT SUNSHADING LOUVER CONTROL SYSTEM AND METHOD BASED ON GESTURE RECOGNITION
20230236669 · 2023-07-27 ·

A sunshading louver control system and method based on gesture recognition are provided. Indoor images are collected in real time. Based on the indoor images, an action region of gesture motion is located and feature extraction is performed on it to obtain hand motion parameters. The hand motion parameters are analyzed to obtain gesture information. The gesture information is compared with preset gesture information. When the preset gesture information contains the gesture information, a corresponding adjustment is performed on a sunshading louver according to a preset operation logic associated with the gesture information. When the preset gesture information does not contain the gesture information, the gesture information is determined to be invalid. The lifting and rotation angles of the louver can be automatically controlled according to various gestures of indoor personnel, thereby rapidly regulating indoor illumination and natural ventilation and adapting to changes in occupants' demands on the indoor environment.
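The gesture-to-adjustment logic above can be sketched as a lookup of preset gestures. The gesture names, axes, and angle steps below are invented; the abstract only specifies that recognized gestures map to a preset operation logic and unknown gestures are invalid.

```python
# Illustrative mapping from recognized gestures to louver commands. The
# gesture names and preset operation logic are invented for this sketch.

PRESET_GESTURES = {
    "palm_up": ("lift", +10),       # raise the louver by 10 degrees
    "palm_down": ("lift", -10),     # lower the louver by 10 degrees
    "rotate_cw": ("rotate", +15),   # rotate the slats clockwise
    "rotate_ccw": ("rotate", -15),  # rotate the slats counter-clockwise
}

def handle_gesture(gesture, state):
    """Apply the preset operation for a gesture; unknown gestures are invalid."""
    if gesture not in PRESET_GESTURES:
        return None  # gesture information determined to be invalid
    axis, delta = PRESET_GESTURES[gesture]
    state[axis] = state.get(axis, 0) + delta
    return state
```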