G10L15/30

Dental Device With Speech Recognition

A dental device with a speech recognition module is provided. The module is connected to a control device that controls at least some of the functions of the dental device, and has at least one microphone. Based on the recognition result, the speech recognition module triggers a selected function of the dental device via the control device, and an output module outputs information about the triggered function. The speech recognition module listens continuously via the microphone and has a code word module that, when a code word is recognized, activates (or leaves active) speech recognition for the temporally successive words and attempts to recognize them as predetermined control words, each assigned to a function.
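The code-word gating described above can be sketched as follows. The recognizer listens continuously, but words are only matched against device functions once the code word has been heard. All names here (the code word and the control-word map) are illustrative assumptions, not taken from the patent.

```python
CODE_WORD = "dentassist"          # assumed wake/code word
CONTROL_WORDS = {                 # assumed control-word -> function map
    "rinse": "start_rinse",
    "light": "toggle_light",
    "stop": "stop_all",
}

def process_words(words):
    """Return the functions triggered by a stream of recognized words."""
    active = False
    triggered = []
    for word in words:
        if word == CODE_WORD:
            active = True          # activate (or leave active) recognition
        elif active and word in CONTROL_WORDS:
            triggered.append(CONTROL_WORDS[word])
    return triggered
```

A control word spoken before the code word is ignored, which matches the abstract's requirement that recognition is only activated for the words temporally following the code word.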

State detection and responses for electronic devices

This disclosure describes, in part, techniques for utilizing global models to generate local models for electronic devices in an environment, and techniques for utilizing the global models and/or the local models to provide notifications that are based on anomalies detected within the environment. For instance, a remote system may receive an identifier associated with an electronic device and identify a global model using the identifier. The remote system may then receive data indicating state changes of the electronic device and use the data and the global model to generate a local model associated with the electronic device. Using the global model and/or local model, the remote system can identify anomalies associated with the electronic device and, in response to identifying an anomaly, notify the user. The remote system can further cause the electronic device to change states after receiving a request from the user.
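One way to picture the global/local-model flow above is the toy sketch below: the "global model" is a prior for how often a device class changes state per day, the "local model" blends that prior with this device's observed history, and an anomaly is flagged when a day's count deviates strongly from the local mean. The model forms and thresholds are invented for illustration; the patent does not specify them.

```python
GLOBAL_MODELS = {"smart_plug": 10.0}   # assumed prior: mean state changes/day

def local_model(device_type, observed_daily_counts, prior_weight=5.0):
    """Blend the global prior with observed counts into a local mean."""
    prior = GLOBAL_MODELS[device_type]
    n = len(observed_daily_counts)
    return (prior * prior_weight + sum(observed_daily_counts)) / (prior_weight + n)

def is_anomaly(device_type, history, today_count, factor=3.0):
    """Flag a daily count far from the local mean as anomalous."""
    mean = local_model(device_type, history)
    return today_count > factor * mean or today_count < mean / factor
```

With no observed history the local model falls back to the global prior, which mirrors how the identifier-selected global model bootstraps a device before enough local data exists.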

Spoken language understanding models

Techniques for using a federated learning framework to update machine learning models for a spoken language understanding (SLU) system are described. The system determines which labeled data is needed to update the models based on the models generating an undesired response to an input. The system identifies users to solicit labeled data from, and sends a request to a user device to speak an input. The device generates labeled data using the spoken input, and updates the on-device models using the spoken input and the labeled data. The updated model data is provided to the system to enable the system to update the system-level (global) models.
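The device-side update and server-side aggregation described above can be sketched with a minimal federated-averaging loop. The one-parameter linear model and squared-error update rule are illustrative assumptions; they stand in for whatever on-device SLU models the system actually uses.

```python
def local_update(global_weight, examples, lr=0.1):
    """On-device step: nudge the model toward (x, y) pairs for y ~ w * x."""
    w = global_weight
    for x, y in examples:
        w = w - lr * 2 * (w * x - y) * x   # squared-error gradient step
    return w

def federated_average(device_weights):
    """Server-side aggregation: average the updated device models."""
    return sum(device_weights) / len(device_weights)
```

Only the updated weights, not the raw spoken inputs, travel back to the server, which is the privacy point of the federated framing in the abstract.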

Determining topics and action items from conversations

Embodiments are directed to organizing conversation information. Two or more machine learning (ML) models and a plurality of sentences provided from a conversation may be employed to generate insight scores for each sentence, such that each insight score correlates to a probability that its sentence includes one or more of an action or a question. In response to one or more sentences having insight scores that exceed a threshold value, an information score and a definiteness score may be determined for those sentences, and one or more insights associated with the conversation may be generated based on them. A report may be generated that associates the one or more insights with the portions of the conversation that include the sentences associated with the insights.
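A toy rendering of the scoring pipeline above: two stand-in "models" (simple keyword heuristics here, not real ML models) score each sentence for action-ness and question-ness, and only sentences whose best score clears the threshold survive as insight candidates. All scoring rules and values are invented for illustration.

```python
def action_score(sentence):
    """Stand-in for an ML model scoring whether a sentence is an action item."""
    return 0.9 if "will" in sentence or "should" in sentence else 0.1

def question_score(sentence):
    """Stand-in for an ML model scoring whether a sentence is a question."""
    return 0.9 if sentence.rstrip().endswith("?") else 0.1

def extract_insights(sentences, threshold=0.5):
    """Keep sentences whose best insight score exceeds the threshold."""
    insights = []
    for s in sentences:
        score = max(action_score(s), question_score(s))
        if score > threshold:
            insights.append((s, score))
    return insights
```

In the embodiments, the surviving sentences would then receive the information and definiteness scores before insights are reported; that second stage is omitted here.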

Transcription generation from multiple speech recognition systems

A method may include obtaining first audio data originating at a first device during a communication session between the first device and a second device. The method may also include obtaining a first text string that is a transcription of the first audio data, where the first text string may be generated by automatic speech recognition technology from the first audio data. The method may also include obtaining a second text string that is a transcription of second audio data, where the second audio data may include a revoicing of the first audio data by a captioning assistant and the second text string may be generated by the automatic speech recognition technology from the second audio data. The method may further include generating an output text string from the first text string and the second text string and using the output text string as a transcription of the speech.
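One simple way to fuse the two text strings, sketched below: align the word sequences, keep words where both transcriptions agree, and let the revoiced transcription fill words missing from the first. Real systems would weigh per-word confidence scores; this alignment heuristic is an assumption, not the patent's actual method.

```python
import difflib

def merge_transcripts(first, second):
    """Fuse two transcriptions: keep agreed words, let the second
    transcription fill words missing from the first."""
    a, b = first.split(), second.split()
    out = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if tag == "insert":            # present only in the second string
            out.extend(b[j1:j2])
        else:                          # equal / replace / delete: keep first
            out.extend(a[i1:i2])
    return " ".join(out)
```

The asymmetry (preferring the first string on conflicts, the second on gaps) reflects the intuition that the original-audio transcription is primary and the revoicing recovers dropped words.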

Voice controlled remote thermometer

A wireless or remote thermometer connected with an artificial intelligence (AI) system. The thermometer may be in communication with a voice-activated AI system to implement operation thereof, such as a cloud-based AI system implemented on a smart audio interface. User-accessible controls for the thermometer may be activated using the voice-activated AI system on the audio interface. The thermometer may include a wireless transceiver for communication with a user. The thermometer collects temperature measurement data to remotely monitor the temperature of food or other materials. The thermometer connects and communicates wirelessly with a receiver unit, such as a user's smartphone, tablet, or other computerized device. The thermometer unit sends data, alerts, or notifications to the user via the designated receiver, smart device, and/or audio interface. Communication between the thermometer and receiver unit may be through one or more communication pathways, which may be selected to provide delivery to the user device.
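The alerting flow above can be sketched as a small check-and-dispatch routine: the thermometer unit compares a reading to a user-set target band and sends a notification over the available pathways. The pathway names, the tolerance band, and the message format are assumptions made for illustration only.

```python
def check_reading(temp_c, target_c, tolerance=2.0):
    """Return an alert message when the reading leaves the target band."""
    if temp_c >= target_c + tolerance:
        return f"Alert: {temp_c:.1f} C is above target {target_c:.1f} C"
    if temp_c <= target_c - tolerance:
        return f"Alert: {temp_c:.1f} C is below target {target_c:.1f} C"
    return None

def notify(message, pathways=("smartphone", "audio_interface")):
    """Dispatch the alert over each configured pathway (stubbed as pairs)."""
    return [(p, message) for p in pathways] if message else []
```

A real receiver unit would pick among pathways based on reachability; here the stub simply fans the message out to every configured pathway.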