Patent classifications
G09B21/00
METHOD, COMPUTING DEVICE, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM TO TRANSLATE AUDIO OF VIDEO INTO SIGN LANGUAGE THROUGH AVATAR
A sign language translation method performed by at least one processor includes setting a sign language translation avatar in a video call, translating speech of at least one speaker into sign language during the video call, and displaying the sign language through the sign language translation avatar during the video call.
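The claimed pipeline (speech recognized during the call, translated into sign language, displayed through an avatar) can be sketched roughly as follows. The gloss lexicon, function names, and rendering stand-in below are illustrative assumptions, not the patent's actual implementation.

```python
# Illustrative sketch: transcribed speech is mapped to a sign-language
# gloss sequence, and the glosses drive the avatar's animation plan.
# GLOSS_LEXICON is a toy dictionary; a real system would use a trained
# translation model.

GLOSS_LEXICON = {
    "hello": "HELLO",
    "how": "HOW",
    "are": None,   # function words often have no separate gloss
    "you": "YOU",
}

def transcript_to_glosses(transcript: str) -> list[str]:
    """Map a transcribed utterance to an ordered gloss sequence."""
    glosses = []
    for word in transcript.lower().split():
        gloss = GLOSS_LEXICON.get(word.strip("?!.,"))
        if gloss is not None:
            glosses.append(gloss)
    return glosses

def render_avatar(glosses: list[str]) -> str:
    """Stand-in for the avatar renderer: returns the animation plan."""
    return " -> ".join(glosses)

plan = render_avatar(transcript_to_glosses("Hello, how are you?"))
print(plan)  # HELLO -> HOW -> YOU
```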
SPATIALLY ACCURATE SIGN LANGUAGE CHOREOGRAPHY IN MULTIMEDIA TRANSLATION SYSTEMS
Systems, methods, and computer-readable media herein provide for real-time manipulation and animation of 3D rigged virtual models to generate sign language translation. Source video and audio data associated with content is provided to a neural network to determine choreographic actions that may be used to modify and animate the articulation control points of a 3D model within a 3D space. The animated 3D virtual model may be presented in relation to the source content to provide sign language translation of the source content.
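The choreographic actions described above can be thought of as keyframes written onto a rig's articulation control points. The joint names and the `ChoreoAction` structure below are a minimal sketch assumed for illustration, not the patent's data model.

```python
# Sketch: a choreographic action targets one articulation control point
# (joint) with a rotation; applying a sequence of actions animates the
# rigged 3D model toward the signed pose.
from dataclasses import dataclass, field

@dataclass
class Rig:
    # articulation control points: joint name -> rotation in degrees
    joints: dict = field(default_factory=lambda: {"wrist_r": 0.0, "elbow_r": 0.0})

@dataclass
class ChoreoAction:
    joint: str
    rotation: float  # target rotation for this keyframe, in degrees

def apply_actions(rig: Rig, actions: list[ChoreoAction]) -> Rig:
    """Write each action's rotation onto its target joint."""
    for action in actions:
        rig.joints[action.joint] = action.rotation
    return rig

rig = apply_actions(Rig(), [ChoreoAction("wrist_r", 45.0),
                            ChoreoAction("elbow_r", 90.0)])
```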
Visual feedback system
A visual feedback system can include a display panel, an interface unit, and at least one visual feedback device. The at least one visual feedback device can be configured to provide cues for audio generated within a virtual environment.
Alteration of accessibility settings of device based on characteristics of users
In one aspect, a device includes at least one processor and storage accessible to the at least one processor. The storage includes instructions executable by the at least one processor to receive input from at least one sensor, identify a characteristic of a user based on the input from the at least one sensor, and alter at least one setting of the device based on the identification of the characteristic. The at least one setting is related to presentation of content using the device.
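The claim's flow (sensor input, identified characteristic, altered presentation setting) can be sketched as two small functions. The characteristic labels, sensor fields, and setting presets are hypothetical examples, not the patent's.

```python
# Hedged sketch: a toy classifier infers a user characteristic from
# sensor readings, and a preset table maps the characteristic to
# presentation-setting overrides.

def identify_characteristic(sensor_input: dict) -> str:
    """Infer a characteristic of the user from sensor readings."""
    if sensor_input.get("gaze_detected") is False:
        return "low_vision"
    if sensor_input.get("ambient_noise_db", 0) > 80:
        return "noisy_environment"
    return "default"

def alter_settings(characteristic: str) -> dict:
    """Return presentation-setting overrides for the characteristic."""
    presets = {
        "low_vision": {"tts_enabled": True, "font_scale": 1.5},
        "noisy_environment": {"captions_enabled": True},
        "default": {},
    }
    return presets[characteristic]

settings = alter_settings(identify_characteristic({"gaze_detected": False}))
```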
SIGN LANGUAGE VIDEO SEGMENTATION METHOD BY GLOSS FOR SIGN LANGUAGE SENTENCE RECOGNITION, AND TRAINING METHOD THEREFOR
Provided are a method for segmenting a sign language video by gloss to recognize a sign language sentence, and a training method therefor. According to an embodiment, a sign language video segmentation method receives a sign language sentence video as input and segments it by gloss. Accordingly, the method segments a sign language sentence video by gloss, analyzes various gloss sequences from a linguistic perspective, understands meaning robustly despite variations in sentences, and translates sign language into appropriate Korean sentences.
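The segmentation step can be illustrated on the output side: given per-frame gloss labels, as a recognizer might emit, consecutive frames are grouped into gloss segments with boundaries. This is a sketch of segmentation by gloss, not the patent's trained model.

```python
# Sketch: group consecutive identical per-frame gloss labels into
# (gloss, start_frame, end_frame_exclusive) segments.
from itertools import groupby

def segment_by_gloss(frame_labels: list[str]) -> list[tuple[str, int, int]]:
    """Return (gloss, start, end) segments covering the frame sequence."""
    segments, start = [], 0
    for gloss, run in groupby(frame_labels):
        length = len(list(run))
        segments.append((gloss, start, start + length))
        start += length
    return segments

frames = ["HELLO"] * 3 + ["THANK"] * 4 + ["YOU"] * 2
print(segment_by_gloss(frames))
# [('HELLO', 0, 3), ('THANK', 3, 7), ('YOU', 7, 9)]
```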
VISUAL ASSISTANCE SYSTEM
A visual assistance system or distance and location detection system is provided. The system comprises a sensor detecting distance of objects in a user's path and alerting the user when an object is within a designated proximity to the user. The system further comprises a location detection system using GPS technology that tracks the user's location and issues an alert to the user or other individual when the user's location exceeds the boundaries of a designated safe zone.
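The two alert conditions described (an object within a designated proximity, and a GPS fix outside a designated safe zone) can be sketched as simple predicates. The rectangular safe zone and the 2-meter threshold are illustrative values assumed here.

```python
# Sketch of the two alert conditions: obstacle proximity and geofence.
# PROXIMITY_M and SAFE_ZONE are hypothetical configuration values.

PROXIMITY_M = 2.0
SAFE_ZONE = {"lat_min": 40.0, "lat_max": 40.01,
             "lon_min": -75.01, "lon_max": -75.0}

def obstacle_alert(distance_m: float) -> bool:
    """Alert when a detected object is within the designated proximity."""
    return distance_m <= PROXIMITY_M

def geofence_alert(lat: float, lon: float) -> bool:
    """Alert when the user's location exceeds the safe-zone boundary."""
    inside = (SAFE_ZONE["lat_min"] <= lat <= SAFE_ZONE["lat_max"]
              and SAFE_ZONE["lon_min"] <= lon <= SAFE_ZONE["lon_max"])
    return not inside
```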
RFID tag in display adapter
An adapter for a tactile display is disclosed. The adapter can be used with a tactile display, such as a display that presents braille characters, to change or modify what is presented to a user. In some embodiments, the braille dots which make up a braille character can be made smaller or larger; spaced closer together or further apart; have a different shape; and/or an image or non-braille characters can be presented to a user of the tactile display using the disclosed adapter.
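The adapter's effect on dot size and spacing can be sketched as a transform over a braille cell description. The field names and scale factors below are illustrative assumptions, not the adapter's actual interface.

```python
# Sketch: a braille cell's dot diameter and spacing are scaled before
# presentation, making dots larger/smaller or closer/further apart.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class BrailleCell:
    dot_diameter_mm: float
    dot_spacing_mm: float

def adapt(cell: BrailleCell, size_scale: float, spacing_scale: float) -> BrailleCell:
    """Return a modified cell with scaled dot size and spacing."""
    return replace(cell,
                   dot_diameter_mm=cell.dot_diameter_mm * size_scale,
                   dot_spacing_mm=cell.dot_spacing_mm * spacing_scale)

big = adapt(BrailleCell(1.5, 2.5), size_scale=1.2, spacing_scale=1.4)
```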
INTERFACE FOR VISUALLY IMPAIRED
A user interface for a visually impaired user of a mobile computer system. A touch interface is enabled on a touch screen, and an audio interface is enabled on a loudspeaker and a microphone. Remotely stored content is previously ordered in a list or tree. The previously ordered remotely stored content is accessible to the visually impaired user by playing the content on the loudspeaker using multiple inputs, including a touch gesture on the touch screen and a speech input into the microphone. In accordance with the inputs, the order of navigating the content within a list is changed for future use of the user interface.
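The reordering step can be sketched as a simple policy: items the user selects (by touch gesture or speech input) are promoted so future navigation reaches them sooner. The move-to-front policy is an illustrative assumption; the patent does not specify the reordering rule.

```python
# Sketch: promote a selected item so that future sessions encounter it
# earlier in the navigation order.

def record_selection(order: list[str], selected: str) -> list[str]:
    """Return a new navigation order with the selected item moved to front."""
    remaining = [item for item in order if item != selected]
    return [selected] + remaining

menu = ["news", "weather", "email"]
menu = record_selection(menu, "email")  # ["email", "news", "weather"]
```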
NAVIGATION SYSTEM FOR THE VISUALLY IMPAIRED
A method is disclosed as including sending, to a remote server, information identifying a first location within a building. In response to such sending, data may be received from the remote server. The data may comprise a plurality of unique identifiers, each unique identifier thereof corresponding to a unique radio tag located in a unique location within the building. The data may be used to deliver to a user a first series of audible commands guiding the user within the building toward the first location. For example, a first radio transmission may be received from a first radio tag located within the building. The first radio transmission may communicate a first identifier that is associated within the data to first navigation information. Thus, the first navigation information may be used to guide the user within the building toward the first location.
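The described flow, in which the server's data associates radio-tag identifiers with navigation information, can be sketched as a lookup triggered by each received transmission. The tag identifiers and commands below are hypothetical.

```python
# Sketch: a received radio transmission carries a tag identifier, which
# is resolved through the server-provided mapping to an audible command.

TAG_NAVIGATION = {
    "tag-17": "Turn left and walk 10 meters.",
    "tag-18": "The elevator is directly ahead.",
}

def on_radio_transmission(tag_id: str) -> str:
    """Resolve a received tag identifier to its navigation command."""
    return TAG_NAVIGATION.get(tag_id, "No navigation data for this tag.")
```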
Intersection blind-guiding system, blind-guiding method and blind-guiding terminal
An intersection blind-guiding system includes: a blind-guiding terminal, a plurality of blind road sensors, and a processor configured to be coupled to the blind road sensors. The blind-guiding terminal includes a blind-guiding terminal sensor configured to transmit a sensing signal. The blind road sensors include at least one first blind road sensor and at least one second blind road sensor. The blind road sensors are configured to separately receive the sensing signal transmitted by the blind-guiding terminal sensor, and upload corresponding sensing information. The processor is configured to receive the sensing information uploaded by the blind road sensors, locate a current position of a blind person carrying the blind-guiding terminal, determine a direction in which the blind person previously traveled, determine geographic distribution information of the current position of the blind person, and send a command carrying the geographical distribution information.
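The processor's localization and direction steps can be sketched under a simplifying assumption: each blind-road sensor uploads its fixed position along with the received signal strength, the strongest reading localizes the terminal, and the previous fix gives the travel direction. The sensor layout and signal values are hypothetical.

```python
# Sketch: localize the terminal at the sensor with the strongest signal,
# then derive the travel direction from the previous and current fixes.

def locate(readings: dict[str, float],
           positions: dict[str, tuple[float, float]]) -> tuple[float, float]:
    """Return the position of the sensor with the strongest reading (dBm)."""
    best = max(readings, key=readings.get)
    return positions[best]

def travel_direction(prev: tuple[float, float],
                     curr: tuple[float, float]) -> tuple[float, float]:
    """Direction vector from the previous fix to the current one."""
    return (curr[0] - prev[0], curr[1] - prev[1])

positions = {"s1": (0.0, 0.0), "s2": (1.0, 0.0)}
curr = locate({"s1": -70.0, "s2": -40.0}, positions)  # s2 is strongest
direction = travel_direction((0.0, 0.0), curr)
```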