Patent classifications
G10L2021/065
System for and Method of Converting Spoken Words and Audio Cues into Spatially Accurate Caption Text for Augmented Reality Glasses
A wearable device with augmented reality glasses that allows a user to see text information, and a microphone array that localizes the source of a sound in three-dimensional space around the user. When the wearable device is worn by a hearing-impaired user, the user can see captioned dialogue spoken by those around them, along with positional information about each speaker.
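The localization step this abstract relies on can be sketched with a classic two-microphone time-difference-of-arrival estimate. This is an illustrative assumption about how such a microphone array might work, not the patent's actual method; the function name, microphone spacing, and sampling rate below are invented for the example.

```python
import numpy as np

def estimate_azimuth(left, right, fs, mic_spacing, c=343.0):
    """Estimate source azimuth (radians from broadside) for a two-mic
    pair via cross-correlation time-delay estimation. A positive angle
    means the source is closer to the left microphone. Hypothetical
    sketch, not the patent's algorithm."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)   # samples the right mic lags
    tdoa = lag / fs                           # seconds
    # Clamp to the physically possible range for this spacing.
    sin_theta = np.clip(tdoa * c / mic_spacing, -1.0, 1.0)
    return np.arcsin(sin_theta)

# Synthetic check: the right channel hears the source 3 samples late,
# so the source should localize toward the left microphone (angle > 0).
rng = np.random.default_rng(0)
fs, spacing, delay = 16000, 0.1, 3
sig = rng.standard_normal(2048)
left = sig
right = np.concatenate([np.zeros(delay), sig[:-delay]])
angle = estimate_azimuth(left, right, fs, spacing)
```

With more than two microphones, pairwise delay estimates like this one can be intersected to place the source in three-dimensional space, which is what positioning captions around the wearer would require.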
Method and system for generation of customised sensory stimulus
A tinnitus treatment system is provided. The system includes a sound processing unit, a haptic stimulus unit and an audio delivery unit. The sound processing unit includes a processor input for receiving an audio signal, and a digital signal processor to analyze the audio signal and generate a plurality of actuation signals therefrom which are representative of the audio signal. The digital signal processor may spectrally modify the audio signal in accordance with a predetermined modification profile to generate a modified audio signal. The haptic stimulus unit includes an array of stimulators, each of which independently applies a tactile stimulus to a subject, and a stimulus unit input that receives the plurality of actuation signals generated by the digital signal processor and directs individual actuation signals to individual stimulators. The audio delivery unit includes an audio delivery unit input for receiving the modified audio signal generated by the digital signal processor.
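The "predetermined modification profile" step can be illustrated as a simple per-band gain applied in the frequency domain, for example notching out a band around a patient's tinnitus frequency. The profile format and band choice below are assumptions for illustration, not details from the patent.

```python
import numpy as np

def apply_modification_profile(audio, fs, profile):
    """Spectrally modify a mono signal with a per-band linear gain.
    `profile` maps (low_hz, high_hz) -> gain; bands not listed pass
    through unchanged. A hypothetical stand-in for the patent's
    'predetermined modification profile'."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs)
    gains = np.ones_like(freqs)
    for (lo, hi), g in profile.items():
        gains[(freqs >= lo) & (freqs < hi)] = g
    return np.fft.irfft(spectrum * gains, n=len(audio))

# One second of a 1 kHz tone plus a 6 kHz tone; zero out a band
# around 6 kHz (e.g. a notch at an assumed tinnitus frequency).
fs, n = 16000, 16000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 6000 * t)
y = apply_modification_profile(x, fs, {(5500.0, 6500.0): 0.0})
```

After the notch, only the 1 kHz component remains; the same band energies could also be routed as actuation signals to individual stimulators in the haptic array.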
Performing artificial intelligence sign language translation services in a video relay service environment
Video relay services, communication systems, non-transitory machine-readable storage media, and methods are disclosed herein. A video relay service may include at least one server configured to receive a video stream including sign language content from a video communication device during a real-time communication session. The server may also be configured to automatically translate the sign language content into a verbal language translation during the real-time communication session without assistance of a human sign language interpreter. Further, the server may be configured to transmit the verbal language translation during the real-time communication session.
Calibration of haptic device using sensor harness
A haptic calibration device comprises a signal generator configured to receive a subjective force value and a force location from a subjective magnitude input device. The signal generator also receives a sensor voltage value from at least one of a plurality of haptic sensors, the at least one haptic sensor corresponding to the force location. The signal generator stores the subjective force value and the corresponding sensor voltage value in a data store. Using the data from the data store, the signal generator generates a calibration curve indicating the correspondence between subjective force values and sensor voltage values at the location where the subjective force was experienced, and the calibration curve is used to calibrate a haptic feedback device.
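The store-then-fit flow described here can be sketched as follows. The class name and the use of a least-squares line for the calibration curve are illustrative assumptions; the patent does not specify the curve's functional form.

```python
import numpy as np

class HapticCalibrator:
    """Minimal sketch of the described calibration flow: store
    (sensor_voltage, subjective_force) pairs per body location, then
    fit a calibration curve. A least-squares line stands in for
    whatever curve shape the real device uses (an assumption)."""

    def __init__(self):
        self.samples = {}  # location -> list of (voltage, force) pairs

    def record(self, location, voltage, force):
        """Store one subjective force reading and its sensor voltage."""
        self.samples.setdefault(location, []).append((voltage, force))

    def calibration_curve(self, location):
        """Fit force = slope * voltage + intercept for one location
        and return it as a callable mapping voltage -> force."""
        voltages, forces = zip(*self.samples[location])
        slope, intercept = np.polyfit(voltages, forces, 1)
        return lambda v: slope * v + intercept

# Usage: three readings at one location, then predict an unseen voltage.
cal = HapticCalibrator()
for v, f in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
    cal.record("left_wrist", v, f)
curve = cal.calibration_curve("left_wrist")
predicted = curve(2.5)
```

The resulting per-location curves would then drive the haptic feedback device so that a commanded force matches what the subject actually perceives.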
Audio improvement using closed caption data
Methods and systems are described herein for improving audio for hearing-impaired content consumers. An example method may comprise determining a content asset and closed caption data associated with the content asset. At least a portion of the closed caption data may be selected based on a user setting associated with a hearing impairment. Compensating audio comprising a frequency translation associated with at least that portion of the closed caption data may be generated. The content asset may then be output with audio content comprising both the compensating audio and the original audio.
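One plausible reading of "frequency translation" is frequency lowering: moving energy from a band the listener cannot hear into one they can. The sketch below shifts a spectral band down by a fixed offset; the function name, band edges, and approach are assumptions for illustration and may differ from the patent's actual technique.

```python
import numpy as np

def translate_band_down(audio, fs, src_lo, src_hi, shift_hz):
    """Crude frequency-lowering sketch: move spectral energy in
    [src_lo, src_hi) down by shift_hz so it lands in an audible
    range. Illustrative only, not the patent's algorithm."""
    n = len(audio)
    spec = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    k = int(round(shift_hz / (fs / n)))        # shift in whole bins
    idx = np.nonzero((freqs >= src_lo) & (freqs < src_hi))[0]
    out = spec.copy()
    out[idx - k] += spec[idx]                  # add shifted copy of band
    out[idx] = 0                               # remove the original band
    return np.fft.irfft(out, n=n)

# A 7 kHz tone (often inaudible with high-frequency hearing loss)
# translated down by 4 kHz becomes a 3 kHz tone.
fs, n = 16000, 16000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 7000 * t)
y = translate_band_down(x, fs, 6500.0, 7500.0, 4000.0)
```

In the described system, the closed caption data would indicate which portions of the audio (e.g. dialogue) warrant this compensating treatment.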
Generating visual closed caption for sign language
Embodiments describe an approach for generating a sign language translation of an audio portion of a video. Embodiments receive a request for a sign language translation for a selected video and extract audio from the selected video. Additionally, embodiments convert the extracted audio into text, identify contextual sounds in the audio, and convert the text and the contextual sounds into sign language content. Furthermore, embodiments generate a sign language video based on the sign language content and display the sign language video in a separate display window over the selected video.
Telephone system for the hearing impaired
A telephone system is described herein, wherein the telephone system is configured to assist a hearing-impaired person with telephone communications as well as face-to-face conversations. In telephone communication sessions, the telephone system is configured to audibly emit spoken utterances while simultaneously depicting a transcription of the spoken utterances on a display. When the telephone system is not employed in a telephone communication session, the telephone system is configured to display transcriptions of spoken utterances of people who are in proximity to the telephone system.
WEARABLE VIBROTACTILE SPEECH AID
A method for training vibrotactile speech perception in the absence of auditory speech can comprise selecting a first word, generating a first control signal configured to cause at least one vibrotactile transducer to vibrate against a person's body with a first vibration pattern based on the first word, sampling a second word spoken by the person, generating a second control signal configured to cause at least one vibrotactile transducer to vibrate against the person's body with a second vibration pattern based on the sampled second word, and presenting a comparison between the first word and the second word to the person. An array of vibrotactile transducers can be in contact with the person's body. A method for improving auditory and/or visual speech perception in adverse listening conditions, or for hearing-impaired individuals, can also comprise sampling a speech signal, extracting a speech envelope, and generating a control signal configured to cause a vibrotactile transducer to vibrate against a person's body with an intensity that varies over time based on the speech envelope.
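The envelope-to-intensity mapping in the last sentence can be sketched as rectification plus smoothing, then scaling to a transducer drive level. The 20 ms smoothing window and the 8-bit drive range are assumptions for illustration, not parameters from the patent.

```python
import numpy as np

def speech_envelope(signal, fs, window_ms=20.0):
    """Extract a coarse amplitude envelope by full-wave rectification
    followed by a moving-average smoother (a simple stand-in for the
    low-pass filtering a real device would use)."""
    rectified = np.abs(signal)
    win = max(1, int(fs * window_ms / 1000.0))
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

def envelope_to_drive(envelope, max_drive=255):
    """Map the envelope to integer transducer drive levels in
    0..max_drive (the 8-bit range is an assumption)."""
    peak = np.max(envelope)
    if peak == 0:
        return np.zeros_like(envelope, dtype=int)
    return np.round(envelope / peak * max_drive).astype(int)

# A quiet second followed by a loud second: the drive signal should
# track loudness, so the second half gets stronger vibration.
fs = 8000
t = np.arange(fs) / fs
quiet = 0.2 * np.sin(2 * np.pi * 200 * t)
loud = 1.0 * np.sin(2 * np.pi * 200 * t)
drive = envelope_to_drive(speech_envelope(np.concatenate([quiet, loud]), fs))
```

Driving the transducer with this time-varying intensity is what lets the skin carry the slow amplitude fluctuations that cue syllable rhythm and stress.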
Envelope encoding of speech signals for transmission to cutaneous actuators
A haptic communication device includes a speech signal generator configured to receive speech sounds or a textual message and generate speech signals corresponding to the speech sounds or the textual message. An envelope encoder is operably coupled to the speech signal generator to extract a temporal envelope from the speech signals. The temporal envelope represents changes in amplitude of the speech signals. Carrier signals having a periodic waveform are generated. Actuator signals are generated by encoding the changes in the amplitude of the speech signals from the temporal envelope into the carrier signals. One or more cutaneous actuators are operably coupled to the envelope encoder to generate haptic vibrations representing the speech sounds or the textual message using the actuator signals.
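The encoding step this abstract describes, impressing the temporal envelope onto a periodic carrier, is essentially amplitude modulation. The sketch below uses a 250 Hz sine carrier (near the skin's peak vibrotactile sensitivity) and a 20 ms smoother; both values are illustrative assumptions, not figures from the patent.

```python
import numpy as np

def encode_envelope_on_carrier(speech, fs, carrier_hz=250.0):
    """Amplitude-modulate a periodic carrier with the speech temporal
    envelope, producing an actuator signal for a cutaneous actuator.
    Carrier frequency and smoothing window are assumed values."""
    # Temporal envelope: rectify, then smooth over ~20 ms.
    envelope = np.abs(speech)
    win = max(1, int(fs * 0.02))
    envelope = np.convolve(envelope, np.ones(win) / win, mode="same")
    # Periodic carrier waveform at a frequency the skin senses well.
    t = np.arange(len(speech)) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    return envelope * carrier

# One second of a slowly modulated 200 Hz tone standing in for speech.
fs = 8000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 200 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
out = encode_envelope_on_carrier(speech, fs)
```

The actuator then vibrates at the carrier frequency while its strength follows the speech envelope, which is the haptic representation the abstract describes.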
METHODS AND SYSTEMS FOR PROVIDING IMAGES FOR FACILITATING COMMUNICATION
Aspects of the disclosure include computer-implemented methods and systems for providing generative adversarial network (GAN) digital image data. GAN digital image data corresponding to a suggested transaction for an identified customer can be determined.