G09B21/009

Machine-learning conversation listening, capturing, and analyzing system and process for determining classroom instructional effectiveness
11335349 · 2022-05-17 ·

A machine-learning conversation listening, capturing, and analyzing system that determines instructional effectiveness in a classroom setting, and a machine-learning conversation listening, capturing, and analyzing process for determining classroom instructional effectiveness, are disclosed. The system and process rely on predetermined objective criteria and use big data, deep learning, and redundancy to validate results.

AUDIO SIGNAL PROCESSING FOR AUTOMATIC TRANSCRIPTION USING EAR-WEARABLE DEVICE

A system and method of automatic transcription using a visual display device and an ear-wearable device. The system is configured to process an input audio signal at the display device to identify a first voice signal and a second voice signal from the input audio signal. A representation of the first voice signal and the second voice signal can be displayed on the display device, and input can be received in which the user selects one of the first voice signal and the second voice signal as a selected voice signal. The system is configured to convert the selected voice signal to text data and display a transcript on the display device. The system can further generate a sound output signal at a first transducer of the ear-wearable device based on the input audio signal.
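The abstract describes a three-step flow: separate the input into per-speaker voice signals, let the user select one, then transcribe the selection. A minimal sketch of that flow, in which `identify_voices` and `transcribe` are placeholder assumptions standing in for a speaker-separation model and a speech-to-text engine (the patent does not specify either):

```python
def identify_voices(input_audio):
    """Stand-in for speaker separation: split the input into two
    labeled segments so each can be shown to the user."""
    half = len(input_audio) // 2
    return {"voice_1": input_audio[:half], "voice_2": input_audio[half:]}

def transcribe(samples):
    """Stand-in for a speech-to-text engine."""
    return f"<transcript of {len(samples)} samples>"

def run_session(input_audio, user_choice):
    voices = identify_voices(input_audio)   # both voices shown on the display
    selected = voices[user_choice]          # user picks one as the selected voice
    return transcribe(selected)             # transcript shown on the display device
```

A call like `run_session(audio, "voice_1")` models the user tapping the first displayed voice and receiving its transcript.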

SMART GLASS INTERFACE FOR IMPAIRED USERS OR USERS WITH DISABILITIES

A headset designed for inclusion of users with impairments is provided. The headset includes a frame, two eyepieces mounted on the frame, and at least one microphone and a speaker mounted on the frame. The headset also includes a camera, a memory configured to store multiple instructions, and a processor configured to execute the instructions, wherein the instructions cause the processor to provide to a user an environmental context from a signal provided by the microphone and the camera. A method for using the above headset and a system for performing the method are also provided.

PERFORMING ARTIFICIAL INTELLIGENCE SIGN LANGUAGE TRANSLATION SERVICES IN A VIDEO RELAY SERVICE ENVIRONMENT
20220139417 · 2022-05-05 ·

Video relay services, communication systems, non-transitory machine-readable storage media, and methods are disclosed herein. A video relay service may include at least one server configured to receive a video stream including sign language content from a video communication device during a real-time communication session. The server may also be configured to automatically translate the sign language content into a verbal language translation during the real-time communication session without assistance of a human sign language interpreter. Further, the server may be configured to transmit the verbal language translation during the real-time communication session.

APPARATUS AND A SYSTEM FOR SPEECH AND/OR HEARING THERAPY AND/OR STIMULATION
20230253004 · 2023-08-10 ·

The present disclosure refers to solutions within the field of apparatuses or devices for speech and hearing exercises, for instance improving the awareness of persons with hearing and speech impairments of their own voice and surrounding sounds, allowing them to experiment creatively with their own senses and to visualise the sound of their voice and/or additional elements, thereby accelerating the learning process and improving the interaction between patients and therapists.

SIGN LANGUAGE DETECTION FOR SMART GLASSES

A smart glass for incorporating speech recognition in an immersive reality environment is provided. The smart glass includes an eyepiece mounted on a frame including a transparent optical component to provide a user a view of a scene in the real world. The smart glass also includes a first camera configured to capture an image of a hand gesture from an interlocutor in the real world, and a processor configured to recognize, in the image of the hand gesture, a textual meaning. A system including memories storing instructions and processors to execute the instructions to perform methods for use of the above smart glass, and the methods, are also provided.

Communication system for processing audio input with visual display
11315588 · 2022-04-26 ·

A reference acoustic input is processed into a quantization representation such that the quantization representation comprises acoustic components determined from the reference acoustic input, wherein the acoustic components comprise amplitude, rhythm, and pitch frequency of the reference acoustic input. A visual representation is generated that simultaneously depicts the acoustic components comprising amplitude, rhythm, and pitch frequency of the reference acoustic input. A user spoken input may be received and similarly processed and displayed.
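The three acoustic components the abstract names can each be estimated with simple signal statistics. A rough sketch, assuming the input is a list of float samples; the RMS, zero-crossing, and energy-jump estimators are illustrative assumptions, not the patented quantization:

```python
import math

def quantize_acoustic_components(samples, sample_rate, frame_len=800):
    """Reduce an acoustic input to the three components named in the
    abstract: amplitude, pitch frequency, and rhythm (crude estimates)."""
    n = len(samples)
    # Amplitude: root-mean-square level of the whole signal.
    rms = math.sqrt(sum(s * s for s in samples) / n)
    # Pitch frequency: zero-crossing count (one cycle = two crossings).
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    pitch_hz = crossings * sample_rate / (2.0 * n)
    # Rhythm: frames whose energy jumps well above the previous frame,
    # a rough stand-in for onset detection.
    frames = [samples[i:i + frame_len] for i in range(0, n, frame_len)]
    energies = [sum(s * s for s in f) / len(f) for f in frames]
    onsets = sum(1 for a, b in zip(energies, energies[1:]) if b > 2.0 * a)
    return {"amplitude": rms, "pitch_hz": pitch_hz, "onsets": onsets}
```

A steady 440 Hz tone yields an RMS near 0.71, a pitch estimate near 440 Hz, and zero onsets, which is the kind of compact representation a visual display could then depict.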

Systems and methods for assisting the hearing-impaired using machine learning for ambient sound analysis and alerts
11189265 · 2021-11-30 ·

Systems and methods for assisting the hearing-impaired are described. The methods rely on obtaining audio signals from the ambient environment of a hearing-impaired person. The audio signals are analyzed by a machine learning model that can classify them into audio categories (e.g., Emergency, Animal Sounds) and audio types (e.g., Ambulance Siren, Dog Barking) and notify the user via a mobile or wearable device. The user can configure notification preferences and view historical logs. The machine learning classifier is periodically trained externally on labelled audio samples. Additional system features include an audio amplification option and a speech-to-text option for transcribing human speech to text output.
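The two-level scheme above (audio type nested under audio category, with user-configurable notifications) can be sketched with a toy nearest-centroid classifier. The feature vectors, label set, and training-by-averaging are illustrative assumptions; the patent does not specify the model:

```python
# Each fine-grained audio type belongs to a broader category.
TYPE_TO_CATEGORY = {
    "ambulance_siren": "Emergency",
    "dog_barking": "Animal Sounds",
}

def _centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

class AmbientSoundClassifier:
    def __init__(self, labelled_examples):
        # labelled_examples: {audio_type: [feature_vector, ...]}.
        # "Periodically trained externally" is modeled as recomputing
        # one centroid per audio type from labelled samples.
        self.centroids = {t: _centroid(vs) for t, vs in labelled_examples.items()}

    def classify(self, features):
        # Nearest centroid by squared Euclidean distance.
        dist = lambda c: sum((a - b) ** 2 for a, b in zip(features, c))
        audio_type = min(self.centroids, key=lambda t: dist(self.centroids[t]))
        return audio_type, TYPE_TO_CATEGORY[audio_type]

def should_notify(enabled_categories, category):
    """Honor the user's notification preferences."""
    return category in enabled_categories
```

A feature vector near the siren examples classifies as (`ambulance_siren`, `Emergency`), and the alert fires only if the user enabled the Emergency category.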

VIBROTACTILE CONTROL SYSTEMS AND METHODS

Methods and systems are disclosed to facilitate creating the sensation of vibrotactile movement on the body of a user. Vibratory motors are used to generate a haptic language for music or other stimuli that is integrated into wearable technology. The disclosed system in certain embodiments enables the creation of a family of devices that allow people such as those with hearing impairments to experience sounds such as music or other input to the system. For example, a “sound vest” or other wearable array transforms musical input to haptic signals so that users can experience their favorite music in a unique way, and can also recognize auditory or other cues in the user's real or virtual reality environment and convey this information to the user using haptic signals.
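One plausible core of such a "sound vest" is mapping spectral band energies of the music onto motor drive levels. A minimal sketch, assuming at least two motors and a 0-255 intensity scale (both assumptions; the band analysis itself is taken as given):

```python
def band_energies_to_motors(band_energies, num_motors, max_intensity=255):
    """Map frequency-band energies onto a row of vibration motors,
    normalized so the loudest band drives a motor at full intensity.
    Assumes num_motors >= 2; an all-zero frame yields all-zero output."""
    peak = max(band_energies) or 1.0
    step = (len(band_energies) - 1) / (num_motors - 1)
    return [round(band_energies[round(i * step)] / peak * max_intensity)
            for i in range(num_motors)]
```

For example, band energies `[0.0, 1.0, 2.0, 3.0]` across four motors produce intensities `[0, 85, 170, 255]`, so a bass-heavy passage would be felt strongest at the motors assigned to the low bands.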

NUANCE-BASED AUGMENTATION OF SIGN LANGUAGE COMMUNICATION

In certain embodiments, nuance-based augmentation of gesture may be facilitated. In some embodiments, a video stream depicting sign language gestures of an individual may be obtained via a wearable device associated with a user. A textual translation of the sign language gestures in the video stream may be determined. Emphasis information related to the sign language gestures may be identified based on an intensity of the sign language gestures. One or more display characteristics may be determined based on the emphasis information. The textual translation may be caused to be displayed to the user via the wearable device according to the one or more display characteristics. In some embodiments, a unique voice profile for the individual may be determined. A spoken translation of the sign language gestures may be generated according to the textual translation, the unique voice profile, and the emphasis information.
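The step of turning emphasis information into display characteristics can be sketched as a threshold mapping from a normalized gesture-intensity score to text attributes. The thresholds and attribute names below are illustrative assumptions, not the patented mapping:

```python
def display_characteristics(intensity):
    """Map a normalized gesture intensity in [0, 1] to text display
    attributes for the wearable's translation overlay."""
    if intensity >= 0.8:   # very emphatic signing
        return {"weight": "bold", "size_pt": 24, "color": "red"}
    if intensity >= 0.5:   # moderate emphasis
        return {"weight": "bold", "size_pt": 18, "color": "black"}
    return {"weight": "normal", "size_pt": 14, "color": "black"}
```

The same intensity score could feed the spoken-translation path, e.g. by scaling loudness or pitch range in the synthesized voice profile.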