Patent classifications
G10L15/30
SYSTEM AND METHOD FOR CONTROLLING A PLURALITY OF DEVICES
Provided is a system and method for controlling a plurality of devices. The method includes generating a command script by processing a text string with at least one model, the text string including a natural language input by a user, modifying the command script based on contextual data, the command script including a configuration for at least one device, generating at least one command signal based on the command script, and controlling at least one device based on the at least one command signal.
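The claimed pipeline (text string → command script → context-based modification → command signals) can be sketched as below. All names and the toy "model" are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: a natural-language text string is turned into a command
# script, adjusted with contextual data, and emitted as device command signals.

def generate_command_script(text: str) -> dict:
    """Toy stand-in for the 'at least one model' that processes the text string."""
    script = {"device": "light", "config": {}}
    if "dim" in text:
        script["config"]["brightness"] = 30
    if "on" in text:
        script["config"]["power"] = True
    return script

def modify_with_context(script: dict, context: dict) -> dict:
    # Contextual modification, e.g. clamping brightness at night.
    if context.get("night") and "brightness" in script["config"]:
        script["config"]["brightness"] = min(script["config"]["brightness"], 20)
    return script

def to_command_signals(script: dict) -> list:
    # One command signal per configuration entry of the target device.
    return [(script["device"], key, value) for key, value in script["config"].items()]

script = modify_with_context(generate_command_script("dim the light on"), {"night": True})
signals = to_command_signals(script)
```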
INFORMATION COMMUNICATION SYSTEM AND METHOD FOR CONTROLLING TERMINAL
An in-flight announcement system (500) includes a voice input device (300) that inputs voice, a central control device (100) that translates an announcement content based on the input voice and outputs a modulation signal, a lighting device (400) that emits light based on the input modulation signal, and a receiving terminal (200) that specifies the announcement content based on an input light signal and outputs a translation result. The central control device (100) includes a recognition unit (101) that recognizes the voice information as utterance information, a determination unit (102) that determines whether the utterance information is a fixed sentence or not and outputs identification information of the fixed sentence, a text information group (103) including text information necessary for the determination unit (102) to determine and obtain the identification information, a translation unit (104) that translates the utterance information that is not determined as the fixed sentence and outputs reference information for the translation result, a storage (105) for storing the translation result, a generation unit (106) that generates a data set, a conversion unit (107) that generates the modulation signal based on the data set, and a moving body information management unit (108) that stores various information of an aircraft.
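The determination step above (units 102–104) reduces to a lookup against the text information group: a recognized utterance either matches a fixed sentence, yielding its identification information, or is handed to the translation unit. The sentences and identifiers below are invented for illustration.

```python
# Minimal sketch of the fixed-sentence determination vs. translation branch.

FIXED_SENTENCES = {  # stand-in for the text information group (103)
    "we will be landing shortly": "ANNOUNCE_LANDING",
    "please fasten your seat belts": "ANNOUNCE_SEATBELT",
}

def handle_utterance(utterance: str) -> tuple:
    ident = FIXED_SENTENCES.get(utterance.lower())
    if ident is not None:
        return ("fixed", ident)  # identification information of the fixed sentence
    # Stand-in for the translation unit (104) handling non-fixed utterances.
    return ("translated", f"[translated] {utterance}")
```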
VOICE RECOGNITION USING ACCELEROMETERS FOR SENSING BONE CONDUCTION
Voice command recognition and natural language recognition are carried out using an accelerometer that senses vibrations of one or more bones of a user and receives no audio input. Because word recognition relies solely on the accelerometer signal produced by the person's bone conduction as they speak, no acoustic microphone is needed, and none is used to collect data for word recognition. According to one embodiment, a housing contains both an accelerometer and a processor. The accelerometer is preferably a MEMS accelerometer capable of sensing the vibrations present in a bone of the user as the user speaks. A machine learning algorithm is applied to the collected data to correctly recognize words spoken by a person with significant difficulties in creating audible language.
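The abstract does not name a particular machine-learning algorithm, so the sketch below uses a simple nearest-centroid classifier over synthetic "bone-conduction" feature vectors purely as a stand-in for the learning step applied to accelerometer data.

```python
# Illustrative nearest-centroid word classifier on accelerometer feature vectors.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    # samples: {word: list of feature vectors recorded while the word is spoken}
    return {word: centroid(vecs) for word, vecs in samples.items()}

def classify(model, vec):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda word: dist(model[word], vec))

model = train({"yes": [[1.0, 0.1], [0.9, 0.2]], "no": [[0.1, 1.0], [0.2, 0.9]]})
word = classify(model, [0.95, 0.15])  # vibration features from the accelerometer
```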
Voice trigger for a digital assistant
A method for operating a voice trigger is provided. In some implementations, the method is performed at an electronic device including one or more processors and memory storing instructions for execution by the one or more processors. The method includes receiving a sound input. The sound input may correspond to a spoken word or phrase, or a portion thereof. The method includes determining whether at least a portion of the sound input corresponds to a predetermined type of sound, such as a human voice. The method includes, upon a determination that at least a portion of the sound input corresponds to the predetermined type, determining whether the sound input includes predetermined content, such as a predetermined trigger word or phrase. The method also includes, upon a determination that the sound input includes the predetermined content, initiating a speech-based service, such as a voice-based digital assistant.
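The two-stage gating described above can be sketched as: first decide whether the sound input is the predetermined type (a human voice), then check for the predetermined trigger content, and only then start the speech-based service. The energy threshold and the `is_voice` heuristic are illustrative assumptions.

```python
# Sketch of the two-stage voice trigger: type check, then content check.

def is_voice(sound: dict) -> bool:
    # Stand-in for a voice-activity decision, e.g. on energy/pitch features.
    return sound.get("voiced_energy", 0.0) > 0.5

def has_trigger(sound: dict, trigger: str = "hey assistant") -> bool:
    return trigger in sound.get("decoded_text", "").lower()

def voice_trigger(sound: dict) -> bool:
    # The service is initiated only when both determinations succeed.
    return is_voice(sound) and has_trigger(sound)

fired = voice_trigger({"voiced_energy": 0.9, "decoded_text": "Hey assistant, set a timer"})
```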
Communication transfer between devices
A method may include obtaining an indicator that a first device is in a location of a second device and, in response to obtaining the indicator, sending a redirect request to a communication service provider of the first device to direct, to the second device, incoming communication requests handled by the communication service provider that are directed to the first device. The method may further include, after sending the redirect request and after a communication request for a communication session is directed to the first device, obtaining, at the second device, a communication indication to participate in the communication session. The method may further include directing audio of the communication session to a transcription system and obtaining, at the second device, the transcription of the audio from the transcription system. The method may also include presenting, by the second device, the audio and the transcription.
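The redirect flow can be sketched as: a proximity indicator prompts a redirect request to the provider, after which incoming requests aimed at the first device are delivered to the second device. The `Provider` class and method names are hypothetical.

```python
# Rough sketch of the proximity-triggered call redirect.

class Provider:
    """Stand-in for the communication service provider of the first device."""
    def __init__(self):
        self.redirects = {}

    def redirect(self, from_dev: str, to_dev: str) -> None:
        # Honor a redirect request: route from_dev's incoming requests to to_dev.
        self.redirects[from_dev] = to_dev

    def route(self, target: str) -> str:
        # Deliver to the redirect target if one is set, else to the original device.
        return self.redirects.get(target, target)

def on_proximity(provider: Provider, first: str, second: str) -> None:
    # Indicator obtained: first device is in the location of the second device.
    provider.redirect(first, second)

provider = Provider()
on_proximity(provider, "phone", "console")
delivered_to = provider.route("phone")  # incoming request directed to the first device
```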
Virtual personal agent leveraging natural language processing and machine learning
Inter-virtual agent communication between communication devices owned by different users is provided. A first communication channel and a second communication channel are established with a remote data processing system. A virtual agent-to-virtual agent handshake is performed during establishment of the first communication channel. Virtual agent commands are exchanged with a remote virtual agent located on the remote data processing system via the first communication channel. An action corresponding to a virtual agent command received from the remote virtual agent located on the remote data processing system is performed while a human conversation is conducted via the second communication channel.
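The two-channel idea can be sketched as: agent commands travel on one channel, marked agent-to-agent by a handshake, while the human conversation occupies the other. The protocol details below are invented for illustration.

```python
# Sketch of the dual-channel, handshake-gated agent command exchange.

def handshake(channel: dict) -> bool:
    # Performed during establishment of the first channel.
    channel["role"] = "agent-to-agent"
    return True

def exchange_command(channel: dict, command: str) -> str:
    if channel.get("role") != "agent-to-agent":
        raise RuntimeError("handshake required before exchanging agent commands")
    channel.setdefault("log", []).append(command)
    return f"ack:{command}"

agent_channel = {}
human_channel = {"role": "human-conversation"}  # carries the human conversation
handshake(agent_channel)
ack = exchange_command(agent_channel, "share_calendar")
```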
Dental Device With Speech Recognition
A dental device with a speech recognition module is provided, which is connected to a control device that controls at least part of the functions of the dental device. Based on the recognition result, the speech recognition module triggers a selected function of the dental device via the control device and has at least one microphone. An output module outputs information about the triggered function. The speech recognition module listens continuously via the microphone and includes a code word module that, when a code word is recognized, activates (or keeps active) speech recognition for the temporally successive words and attempts to recognize them as predetermined control words, each assigned to a function.
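The code-word gating can be sketched as a tiny state machine over a word stream: recognition stays dormant until the code word is heard, after which the temporally successive words are matched against predetermined control words. The code word and control vocabulary below are illustrative.

```python
# Sketch of code-word gated recognition of control words.

CONTROL_WORDS = {"rinse": "start_rinse", "light": "toggle_lamp"}  # word -> function

def process_stream(words, code_word="dentro"):
    active = False
    triggered = []
    for word in words:
        if word == code_word:
            active = True  # activate (or keep active) speech recognition
        elif active and word in CONTROL_WORDS:
            triggered.append(CONTROL_WORDS[word])  # function assigned to the control word
    return triggered

actions = process_stream(["hello", "dentro", "rinse", "light"])
```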