Patent classifications
G10L2021/065
Systems and methods for remotely tuning hearing devices
A method of tuning a hearing device includes sending a test signal to a model of a hearing device that may be remote from the actual hearing device being tuned. The test signal is encoded by the model and sent to the hearing device being tuned. The user of that hearing device sends a response signal based at least in part on the encoded test signal. This response is received and compared to the original test signal. Thereafter, an operational parameter is sent to the hearing device based on the comparison.
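The comparison step described above can be sketched as follows. This is a minimal illustration only: the band-energy comparison, the function name, and all parameters are assumptions, not the patented tuning method.

```python
import numpy as np

def derive_gain_adjustments(test_signal, response_signal, n_bands=4):
    """Illustrative sketch: compare per-band spectral energy of the user's
    response against the original test signal and derive per-band gain
    corrections (in dB) that could be sent to the hearing device as an
    operational parameter. All names and the band-energy approach are
    assumptions, not the patented method."""
    # Magnitude spectra of both signals
    test_spec = np.abs(np.fft.rfft(test_signal))
    resp_spec = np.abs(np.fft.rfft(response_signal))
    # Split the spectrum into equal-width bands and compare energies
    bands_test = np.array_split(test_spec, n_bands)
    bands_resp = np.array_split(resp_spec, n_bands)
    adjustments = []
    for t, r in zip(bands_test, bands_resp):
        # Positive value => response was weaker in this band, so boost it
        ratio = (np.sum(t ** 2) + 1e-12) / (np.sum(r ** 2) + 1e-12)
        adjustments.append(10 * np.log10(ratio))
    return adjustments
```

An identical response yields adjustments near zero; an attenuated response yields positive (boost) values in the affected bands.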
METHOD AND DEVICE FOR HELPING TO UNDERSTAND AN AUDITORY SENSORY MESSAGE BY TRANSFORMING IT INTO A VISUAL MESSAGE
A method and a device for helping a hearing-impaired person to understand an auditory sensory message by transforming the auditory message into a visual message and projecting this message on a support in the field of vision of the hearing-impaired person. The device comprises a support (11) carrying a sensor in the form of microphones (16, 17) to pick up the message, and a recording and memorizing module (19) that records and stores the auditory message in real time and transforms it into a visual message. A screen (18) is placed in the field of vision of the user, and at least one projector (20, 21) projects the visual message onto the screen. A sensor detects the pupil movement of the user, and a mechanism converts that pupil movement into a display command for the visual message and carries out the command by projecting the visual message.
OBSERVER-BASED CANCELLATION SYSTEM FOR IMPLANTABLE HEARING INSTRUMENTS
A method including receiving input indicative of a parameter related to an operating environment of an implantable portion of a prosthesis and adjusting an adjustable system of the prosthesis based on the received input.
Audio improvement using closed caption data
Methods and systems are described herein for improving audio for hearing impaired content consumers. An example method may comprise determining a content asset. Closed caption data associated with the content asset may be determined. At least a portion of the closed caption data may be determined based on a user setting associated with a hearing impairment. Compensating audio comprising a frequency translation associated with at least the portion of the closed caption data may be generated. The content asset may be caused to be output with audio content comprising the compensating audio and the original audio.
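A frequency translation such as the one described above might be sketched as an FFT bin shift that moves high-frequency speech content (e.g., consonant cues flagged by the closed-caption text) down into a range the listener can hear. The cutoff, shift amount, and bin-shift approach here are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def frequency_translate(audio, sample_rate, shift_hz=-1000.0, cutoff_hz=4000.0):
    """Illustrative sketch of generating compensating audio: copy spectral
    content above `cutoff_hz` downward by `shift_hz` and mix it with the
    original spectrum. Parameters and method are assumptions for
    demonstration only."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    bin_width = freqs[1] - freqs[0]
    shift_bins = int(round(shift_hz / bin_width))
    out = np.array(spectrum)  # keep the original audio content
    src = np.nonzero(freqs >= cutoff_hz)[0]  # high-frequency bins
    dst = src + shift_bins                   # their shifted destinations
    valid = (dst >= 0) & (dst < len(out))
    out[dst[valid]] += spectrum[src[valid]]  # add the translated copy
    return np.fft.irfft(out, n=len(audio))
```

Mixing rather than replacing matches the abstract's note that the output comprises both the compensating audio and the original audio.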
PERFORMING ARTIFICIAL INTELLIGENCE SIGN LANGUAGE TRANSLATION SERVICES IN A VIDEO RELAY SERVICE ENVIRONMENT
Video relay services, communication systems, non-transitory machine-readable storage media, and methods are disclosed herein. A video relay service may include at least one server configured to receive a video stream including sign language content from a video communication device during a real-time communication session. The server may also be configured to automatically translate the sign language content into a verbal language translation during the real-time communication session without assistance of a human sign language interpreter. Further, the server may be configured to transmit the verbal language translation during the real-time communication session.
Sentiment-based interactive avatar system for sign language
Systems and methods for presenting an avatar that speaks sign language based on the sentiment of a speaker are disclosed herein. A translation application running on a device receives a content item comprising video and audio, wherein the audio comprises a first plurality of spoken words in a first language. The video comprises a character speaking the first plurality of spoken words in the first language. The translation application translates the first plurality of spoken words of the first language into a first sign of a first sign language. The translation application determines an emotional state expressed by the character based on sentiment analysis. The translation application generates an avatar that speaks the first sign of the first sign language, where the avatar exhibits the determined emotional state. The content item and the avatar are presented for display on the device.
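The translate-then-emote pipeline described above can be sketched with toy components. The word-level sign lexicon, the keyword-based sentiment scorer, and all names below are simplified stand-ins for the translation application's actual components.

```python
# Illustrative lexicon and sentiment keyword sets (assumptions, not real data)
SIGN_LEXICON = {"hello": "SIGN_HELLO", "thank": "SIGN_THANK", "you": "SIGN_YOU"}
POSITIVE = {"thank", "great", "happy"}
NEGATIVE = {"sad", "angry", "sorry"}

def translate_to_signs(words):
    # Map each spoken word to a sign gloss, skipping out-of-lexicon words
    return [SIGN_LEXICON[w] for w in words if w in SIGN_LEXICON]

def estimate_sentiment(words):
    # Crude keyword count standing in for real sentiment analysis
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def render_avatar(words):
    # Combine the sign sequence with the emotional state the avatar exhibits
    return {"signs": translate_to_signs(words),
            "emotion": estimate_sentiment(words)}
```

A production system would replace the lexicon lookup with a trained translation model and the keyword scorer with a sentiment classifier, but the data flow (words in, signs plus emotional state out) is the same.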
Augmented in-vehicle communication
A communication augmentation system includes a camera, a transceiver, and a computer. The camera is operational to record an image of users. The transceiver is operational to receive inbound messages from wireless devices, the inbound messages including an input content. The computer is operational to store registrations of the users, determine user locations where the users are located based on facial profiles relative to the image, determine device locations where the wireless devices are located based on the inbound messages, associate the wireless devices with the users based on the user locations and the device locations, determine destinations of the inbound messages based on a comparison of the input content to identifiers in the registrations, and transfer the input content and the destinations to the transceiver. The transceiver is further operational to transmit the input content in a plurality of outbound messages to the wireless devices based on the destinations.
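The device-to-user association step could be sketched as a nearest-neighbour match between the two sets of locations. Greedy matching over 2-D cabin coordinates, and every name below, are illustrative assumptions rather than the patented association logic.

```python
import math

def associate_devices(user_locations, device_locations):
    """Illustrative sketch: pair each wireless device with the nearest
    not-yet-taken user, given 2-D positions for both. A greedy
    nearest-neighbour match is an assumption for demonstration only."""
    associations = {}
    taken = set()
    for device_id, d_pos in device_locations.items():
        # Nearest user whose seat has not already been claimed
        nearest = min(
            (u for u in user_locations if u not in taken),
            key=lambda u: math.dist(d_pos, user_locations[u]),
        )
        taken.add(nearest)
        associations[device_id] = nearest
    return associations
```

For example, a phone detected near the driver's seat would be associated with the user whose facial profile was located there.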
Method, computing device, and non-transitory computer-readable recording medium to translate audio of video into sign language through avatar
A sign language translation method performed by at least one processor includes setting a sign language translation avatar in a video call, translating speech of at least one speaker into sign language during the video call, and displaying the sign language through the sign language translation avatar during the video call.