Patent classifications
G09B21/009
AUTOMATIC TRANSLATION BETWEEN SIGN LANGUAGE AND SPOKEN LANGUAGE
Methods, apparatus, systems, and articles of manufacture to translate between sign language and spoken language are disclosed. An example apparatus includes processor circuitry to at least one of instantiate or execute machine readable instructions to identify a plurality of candidate signs across different frames in video; associate a respective gloss to respective ones of the candidate signs; associate a respective confidence score with the respective glosses; identify overlapping frames of the candidate signs; select one or more of the candidate signs as performed signs based on the respective confidence scores and overlapping frames; and convert the performed signs to audio data.
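The selection step this abstract describes — keeping the best non-overlapping candidate signs by confidence — can be illustrated with a short greedy sketch. This is a hypothetical illustration only; the patent does not disclose a specific selection algorithm, and the candidate tuple shape is assumed here.

```python
def select_performed_signs(candidates):
    """Greedily keep the highest-confidence candidate signs whose
    frame ranges do not overlap an already-kept sign.

    Each candidate is (start_frame, end_frame, gloss, confidence).
    This data shape and the greedy strategy are assumptions, not
    taken from the patent.
    """
    chosen = []
    # Visit candidates from most to least confident.
    for start, end, gloss, conf in sorted(candidates, key=lambda c: -c[3]):
        if all(end < s or start > e for s, e, _, _ in chosen):
            chosen.append((start, end, gloss, conf))
    return sorted(chosen)  # return in chronological (frame) order


signs = select_performed_signs([
    (0, 10, "HELLO", 0.92),
    (8, 18, "HELP", 0.40),       # overlaps HELLO, lower confidence
    (20, 30, "THANK-YOU", 0.88),
])
glosses = [g for _, _, g, _ in signs]
```

The surviving glosses would then feed a text-to-speech stage to produce the audio data the claim mentions.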
Method and device for reading, writing, and communication by deafblind users
A method and device for reading, writing, and communication by deafblind users is provided to enable such exemplary functions as word processing, text messaging, Internet access, and telephonic communication. By combining a chordic keyboard for user input with a self-scrolling Braille pad for reading Braille, embodiments of the invention enable the user's hands to stay in place on a user console rather than having to constantly switch back and forth between typing messages and reading or checking for messages. This in turn enables duplex communication because the user can read or acknowledge incoming messages even while typing. It also reduces the dynamic complexity experienced in reading Braille because a body part used for reading Braille can remain constantly available for receiving messages simply by resting in place on the self-scrolling Braille pad without any swiping, thereby eliminating swiping gestures and the problem of timing them with the receipt of messages.
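A chordic keyboard maps simultaneous key presses (a chord) to a single character, much as a Braille cell maps a combination of raised dots to a character. A minimal sketch of that mapping, using real Braille dot patterns for a, b, and c (the function and table names are illustrative, not from the patent):

```python
# Braille dot patterns encoded as 6-bit masks, dot 1 = lowest bit.
# "a" = dot 1, "b" = dots 1+2, "c" = dots 1+4 (standard Braille).
BRAILLE = {
    0b000001: "a",
    0b000011: "b",
    0b001001: "c",
}

def chord_to_char(pressed_dots):
    """Translate a set of simultaneously pressed dot keys (1-6)
    into a character; returns '?' for unmapped chords."""
    mask = sum(1 << (d - 1) for d in pressed_dots)
    return BRAILLE.get(mask, "?")
```

A full device would extend the table to the complete Braille alphabet plus editing chords.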
CAPTIONING COMMUNICATION SYSTEMS
A method to generate a contact list may include receiving an identifier of a first communication device at a captioning system. The first communication device may be configured to provide first audio data to a second communication device. The second communication device may be configured to receive first text data of the first audio data from the captioning system. The method may further include receiving and storing contact data from each of multiple communication devices at the captioning system. The method may further include selecting the contact data from the multiple communication devices that include the identifier of the first communication device as selected contact data and generating a contact list based on the selected contact data. The method may also include sending the contact list to the first communication device to provide the contact list as contacts for presentation on an electronic display of the first communication device.
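The selection step of this method — picking stored contact data that includes the first device's identifier — can be sketched as follows. The data shapes (a mapping from owner device to uploaded entries of identifier/name pairs) are assumptions for illustration, not the patent's actual schema.

```python
def build_contact_list(first_device_id, stored_contacts):
    """Select contact data entries that reference first_device_id and
    assemble a contact list to send back to that device.

    stored_contacts: {owner_device_id: [(contact_device_id, name), ...]}
    (a hypothetical storage layout).
    """
    selected = []
    for owner, entries in stored_contacts.items():
        for contact_id, name in entries:
            if contact_id == first_device_id:
                # This device's contact data includes the first device.
                selected.append((owner, name))
    return sorted(selected)


contacts = build_contact_list("dev-1", {
    "dev-2": [("dev-1", "Alice's phone")],
    "dev-3": [("dev-9", "Bob's office")],
})
```

The resulting list would then be sent to the first communication device for presentation on its electronic display.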
CAPTION MODIFICATION AND AUGMENTATION SYSTEMS AND METHODS FOR USE BY HEARING ASSISTED USER
A system and method for facilitating communication between an assisted user (AU) and a hearing user (HU) includes receiving an HU voice signal as the AU and HU participate in a call using AU and HU communication devices, transcribing HU voice signal segments into verbatim caption segments, processing each verbatim caption segment to identify an intended communication (IC) intended by the HU upon uttering an associated one of the HU voice signal segments, and, for at least a portion of the HU voice signal segments, (i) using an associated IC to generate an enhanced caption different from the associated verbatim caption, (ii) for each of a first subset of the HU voice signal segments, presenting the verbatim captions via the AU communication device display for consumption, and (iii) for each of a second subset of the HU voice signal segments, presenting enhanced captions via the AU communication device display for consumption.
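As one concrete, and entirely hypothetical, example of how an enhanced caption could differ from its verbatim counterpart, an IC-driven enhancer might strip filler words that carry no communicative intent. The filler list and function name below are illustrative stand-ins, not the patent's method:

```python
FILLERS = {"um", "uh", "er"}  # single-word fillers only, for simplicity

def enhance_caption(verbatim):
    """Produce an enhanced caption by dropping filler words from the
    verbatim transcription (one possible enhancement among many)."""
    kept = [w for w in verbatim.split()
            if w.lower().strip(",.") not in FILLERS]
    return " ".join(kept)


enhanced = enhance_caption("Um, I will, uh, call you soon")
```

A real system would choose per segment whether to display the verbatim or the enhanced caption, as in subsets (ii) and (iii) above.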
SYSTEMS AND METHODS FOR COMMUNICATING WITH VISION AND HEARING IMPAIRED VEHICLE OCCUPANTS
Systems and methods associated with a vehicle are provided. The systems and methods include an occupant output system including an output device, a camera or other perception device, and a processor in operable communication with the occupant output system and the camera or other perception device. The processor is configured to execute program instructions to cause the processor to: receive image or other perception data from the camera or other perception device, the image or other perception data including at least part of a head and/or body of an occupant of the vehicle; analyze the image or other perception data to determine whether the occupant is hearing and vision impaired; when the occupant is determined to be vision and hearing impaired, decide on an output modality to assist the occupant; and generate an output for the occupant on the output device, in the output modality.
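The modality-decision step can be sketched as a simple dispatch over the detected impairments. The specific modality choices below (e.g. haptic output such as seat vibration for a deafblind occupant) are plausible assumptions for illustration; the patent does not fix them.

```python
def choose_output_modality(vision_impaired, hearing_impaired):
    """Pick an output channel for the occupant based on detected
    impairments (a hypothetical decision policy)."""
    if vision_impaired and hearing_impaired:
        return "haptic"   # e.g. seat vibration patterns
    if hearing_impaired:
        return "visual"   # on-screen text or icons
    if vision_impaired:
        return "audio"    # spoken announcements
    return "visual"       # default channel
```

The returned modality would then select which output device renders the generated message.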
METHODS, SYSTEMS, AND MACHINE-READABLE MEDIA FOR TRANSLATING SIGN LANGUAGE CONTENT INTO WORD CONTENT AND VICE VERSA
Communication systems, methods, and non-transitory machine-readable storage media are disclosed herein. A communication system may include a communication device configured to receive a video stream including sign language content and any content indicators associated with the video stream during a real-time communication session within a single communication device or between a plurality of communication devices. The communication system may also include a translation engine configured to automatically translate the sign language content into word content during the real-time communication session without assistance of a human sign language interpreter. Further, the communication system may be configured to output the word content translation to a communication device during the real-time communication session.
System and Method for Virtual Tour Accessibility
A computer system for electronically creating Americans with Disabilities Act (ADA) compliant virtual tours, comprising a non-transitory computer readable memory storing instructions and one or more processors that, when executing the instructions, are configured to receive, from a user of a virtual tour, input comprising a point-of-interest and a name and description associated with the point-of-interest; embed ADA compliant code in one or more files associated with the virtual tour related to the point-of-interest, name, and description; and execute the virtual tour including the embedded ADA compliant code.
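The embedding step might produce accessibility-oriented markup such as WAI-ARIA attributes for each point-of-interest. The snippet below is an illustrative guess at what such embedded code could look like, not the patent's actual output; the escaping step prevents user-supplied names from breaking the markup.

```python
import html

def ada_snippet(poi_name, poi_description):
    """Return an HTML fragment with accessibility markup for one
    virtual-tour point-of-interest (attribute choices are
    illustrative, not taken from the patent)."""
    name = html.escape(poi_name, quote=True)
    desc = html.escape(poi_description)
    return (f'<section role="region" aria-label="{name}">'
            f'<p>{desc}</p></section>')


snippet = ada_snippet("Lobby", "Main entrance & front desk")
```

A screen reader would announce the `aria-label` when the user navigates to the region.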
ACCOMMODATION MEASURE EVALUATION DEVICE, ACCOMMODATION MEASURE EVALUATION PROGRAM, AND ACCOMMODATION MEASURE EVALUATION METHOD
A device and a method are provided that store accommodation measure evaluation data associating an accommodation measure for a person in a minority group with an evaluation of that accommodation measure. When an accommodation measure to be provided for the person in the minority group is input, an evaluation of the measure is acquired based on the input measure and the accommodation measure evaluation data, and the evaluation is displayed on a display monitor or the like. Furthermore, a higher evaluation is given to a specific accommodation measure (a first accommodation measure) than to another accommodation measure (a second accommodation measure), which enables a more detailed evaluation to be performed.
Sound Detection Alerts
Custom alerts may be generated based on sound type indicators determined using a machine learning classification model trained on user-provided sound recordings and user-defined sound type indicators. A device may provide a sound recording and a type indicator identifying an entity that made a sound in the recording for storage in a database that includes a plurality of sound recordings associated with a plurality of type indicators. A machine learning classification model may be trained based on the stored recordings, including the user-defined recordings. The model may be used to classify sounds recorded by other devices and generate alerts identifying the type of sound. Thus, multiple users may contribute data to customize machine learning models that recognize sounds and generate alerts based on user-defined identifiers.
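The train-then-alert loop can be sketched with a deliberately simple stand-in for the classification model: a nearest-centroid classifier over feature vectors, labeled with the user-defined type indicators. The feature representation and model choice are assumptions; the patent does not specify either.

```python
import math

def train_centroids(recordings):
    """recordings: list of (feature_vector, type_indicator) pairs.
    Returns one mean feature vector (centroid) per type indicator."""
    sums, counts = {}, {}
    for vec, label in recordings:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc]
            for lab, acc in sums.items()}

def classify_sound(model, features):
    """Assign the type indicator whose centroid is nearest."""
    return min(model, key=lambda lab: math.dist(model[lab], features))


model = train_centroids([
    ([1.0, 0.0], "doorbell"),
    ([0.9, 0.1], "doorbell"),
    ([0.0, 1.0], "dog bark"),
])
alert = f"Alert: {classify_sound(model, [0.8, 0.2])} detected"
```

Because users supply both the recordings and the labels, the same pipeline yields personalized models across many contributing devices.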
Vibrotactile control systems and methods
Methods and systems are disclosed to facilitate creating the sensation of vibrotactile movement on the body of a user. Vibratory motors are used to generate a haptic language for music or other stimuli that is integrated into wearable technology. The disclosed system in certain embodiments enables the creation of a family of devices that allow people such as those with hearing impairments to experience sounds such as music or other input to the system. For example, a “sound vest” or other wearable array transforms musical input to haptic signals so that users can experience their favorite music in a unique way, and can also recognize auditory or other cues in the user's real or virtual reality environment and convey this information to the user using haptic signals.
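One common design for such wearables drives one vibration motor per audio frequency band, scaling each band's energy into a motor duty cycle. The sketch below assumes normalized band energies and a PWM range of 0-255; the threshold, range, and function name are illustrative choices, not details from the patent.

```python
def haptic_frame(band_energies, threshold=0.05, max_pwm=255):
    """Convert normalized band energies (0.0-1.0) into per-motor PWM
    levels, one motor per frequency band. Energies below the threshold
    are muted so the array stays quiet during near-silence."""
    return [0 if e < threshold else min(max_pwm, round(e * max_pwm))
            for e in band_energies]
```

Streaming successive frames of band energies through this mapping produces the moving vibrotactile patterns the disclosure describes, e.g. bass in one motor group and treble in another.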