Patent classifications
G10L25/48
Context aggregation for data communications between client-specific servers and data-center communications providers
Certain aspects of the disclosure are directed to context aggregation in a data communications network. According to a specific example, user-data communications between a client-specific endpoint device and another participating endpoint device during a first time period can be retrieved from a plurality of interconnected data communications systems. The client entity associated with the client-specific endpoint device can be configured and arranged to interface with a data communications server providing data communications services on a subscription basis. A context can be determined for each respective user-data communication between the endpoint devices during the first time period. A plurality of user-data communications between the client-specific endpoint device and the other participating endpoint device can be aggregated during a second time period, and a context can be determined for the aggregated user-data communications during the second time period based on a comparison of the aggregated user-data communications with the user-data communications from the first time period.
Enhanced graphical user interface for voice communications
Enhanced graphical user interfaces for transcription of audio and video messages are disclosed. Audio data may be transcribed, and the transcription may include emphasized words and/or punctuation corresponding to emphasis in the user's speech. Additionally, the transcription may be translated into a second language. A message spoken by a user depicted in one or more images of video data may also be transcribed and provided to one or more devices.
Methods and systems for generating domain-specific text summarizations
Embodiments provide methods and systems for generating a domain-specific text summary. A method performed by a processor includes receiving, from a user device, a request to generate a text summary of textual content and applying a pre-trained language generation model to the textual content to encode it into word embedding vectors. The method predicts the current word of the text summary by iteratively performing: generating a first probability distribution over a first set of words using a first decoder based on the word embedding vectors, generating a second probability distribution over a second set of words using a second decoder based on the word embedding vectors, and ensembling the first and second probability distributions using a configurable weight parameter to determine the current word. The first probability distribution indicates, for each word, the probability of that word being selected as the current word. The method further includes providing a custom reward score as feedback to the second decoder based on a custom reward model and modifying the second probability distribution for the text summary based on that feedback.
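The ensembling step described above can be sketched in a few lines: mix two per-word probability distributions with a configurable weight, renormalize, and pick the current word. This is a minimal illustration only; the function name, greedy word selection, and toy vocabulary are assumptions, not details from the disclosure.

```python
def ensemble_step(p_first, p_second, weight=0.5):
    """Blend two decoders' per-word probability distributions with a
    configurable weight and pick the current word (greedy, for clarity)."""
    mixed = [weight * a + (1.0 - weight) * b for a, b in zip(p_first, p_second)]
    total = sum(mixed)
    mixed = [m / total for m in mixed]            # keep it a valid distribution
    current_word = max(range(len(mixed)), key=mixed.__getitem__)
    return current_word, mixed

# Toy 4-word vocabulary; the two decoders disagree on the best next word.
# A weight of 0.3 leans toward the second (reward-shaped) decoder.
word_id, mixed = ensemble_step([0.6, 0.2, 0.1, 0.1],
                               [0.1, 0.7, 0.1, 0.1],
                               weight=0.3)
# mixed == [0.25, 0.55, 0.10, 0.10], so word 1 is selected
```

In the disclosed method this step would run once per output word, with the reward-model feedback reshaping `p_second` between iterations.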
SEMIAUTOMATED RELAY METHOD AND APPARATUS
A relay for captioning a hearing user's (HU's) voice signal during a phone call between an HU and a hearing assisted user (AU), the HU using an HU device and the AU using an AU device, where the HU voice signal is transmitted from the HU device to the AU device, the relay comprising a display screen and a processor linked to the display and programmed to perform the steps of receiving the HU voice signal from the AU device, transmitting the HU voice signal to a remote automatic speech recognition (ASR) server running ASR software that converts the HU voice signal to ASR generated text, the remote ASR server being located remotely from the relay, receiving the ASR generated text from the ASR server, presenting the ASR generated text for viewing by a call assistant (CA) via the display, and transmitting the ASR generated text to the AU device.
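The relay's data path (AU device → relay → remote ASR server → CA display and AU device) can be sketched as a simple loop. The `recognize` stub stands in for the remote ASR server and is purely an assumption here; a real relay would stream audio over a network connection and handle CA corrections.

```python
def recognize(audio_chunk: bytes) -> str:
    """Placeholder for the remote ASR server: audio in, text out."""
    return f"<text for {len(audio_chunk)} bytes of HU audio>"

def relay_call(audio_chunks, show_to_ca, send_to_au):
    """Receive HU audio from the AU device, convert it to text via the
    remote ASR server, present the text to the call assistant (CA),
    and transmit it back to the AU device as captions."""
    for chunk in audio_chunks:
        text = recognize(chunk)   # HU voice signal -> ASR generated text
        show_to_ca(text)          # CA monitors the captions on the display
        send_to_au(text)          # captions travel back to the AU device

# Two 20 ms chunks of 8 kHz, 16-bit audio, with stubbed output sinks.
ca_screen = []
relay_call([b"\x00" * 320, b"\x00" * 640], ca_screen.append, lambda t: None)
```

The point of the sketch is only the routing: the relay never originates audio, it brokers it between the AU device and the ASR server.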
ORAL FUNCTION VISUALIZATION SYSTEM, ORAL FUNCTION VISUALIZATION METHOD, AND RECORDING MEDIUM
An oral function visualization system includes: an outputter that outputs information for prompting a user to utter a predetermined voice; an obtainer that obtains an uttered voice of the user uttered in accordance with the output; an analyzer that analyzes the uttered voice obtained by the obtainer; and an estimator that estimates a state of oral organs of the user from a result of analysis of the uttered voice by the analyzer. The outputter outputs, based on the state of the oral organs of the user estimated by the estimator, information for the user to achieve a state of the oral organs suitable for utterance of the predetermined voice.
Personal audio assistant device and method
A system includes a first microphone that captures audio, a communication module communicatively coupled to the first microphone, a logic circuit communicatively coupled to the first microphone and communication module, a speaker operatively coupled to the logic circuit, and an interaction element. The interaction element and logic circuit are configured to initiate control of audio content for output from the speaker in response to at least one voice command detected in captured audio. Other embodiments are disclosed.
Augmented reality calorie counter
A method includes detecting a chewing noise from a user during a chewing session, triggering operation of a camera, obtaining image data capturing a food product, identifying the food product based on the image data, determining a measurement of the chewing session, determining a volume of the food product based on that measurement, and determining a calorie intake based on the food product, the volume of the food product, and the measurement of the chewing session.
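The final computation step can be sketched as straightforward arithmetic: volume estimated from the chewing measurement, then calories from volume and the identified food. The per-chew volume factor and the calories-per-millilitre table below are illustrative assumptions, not values from the disclosure.

```python
# Assumed lookup table: calorie density per millilitre of each food product.
CALORIES_PER_ML = {"apple": 0.52, "cheddar": 4.0}
# Assumed conversion: estimated food volume consumed per detected chew.
ML_PER_CHEW = 1.5

def estimate_intake(food: str, chew_count: int) -> float:
    """Calorie intake from the identified food and the chewing-session
    measurement (here reduced to a chew count)."""
    volume_ml = chew_count * ML_PER_CHEW        # volume from chewing session
    return volume_ml * CALORIES_PER_ML[food]    # calories for that volume

kcal = estimate_intake("apple", 40)   # 40 chews -> 60 ml -> 31.2 kcal
```

In practice the volume estimate would also draw on the camera's image data (food size in frame), which this sketch omits.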
Remote care system
The innovation disclosed and claimed herein, in one aspect thereof, comprises systems and methods for remotely scheduling calendar-based and other cues and reminders for a user of a presentation device. The user of the presentation device may be suffering from a progressive cognitive disorder, and the cues and other care provided may be delivered remotely by a loved one or other administrator.