Patent classifications
G09B21/009
Alteration of accessibility settings of device based on characteristics of users
In one aspect, a device includes at least one processor and storage accessible to the at least one processor. The storage includes instructions executable by the at least one processor to receive input from at least one sensor, identify a characteristic of a user based on the input from the at least one sensor, and alter at least one setting of the device based on the identification of the characteristic. The at least one setting is related to presentation of content using the device.
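As a rough sketch of the claimed flow (sensor input, identified user characteristic, altered presentation setting), the Python below is illustrative only; the sensor field, the stand-in classifier, and the specific settings are assumptions, not the patent's implementation.

```python
# Hypothetical sketch: sensor input -> user characteristic -> altered
# presentation setting. All names and rules here are assumptions.
from dataclasses import dataclass, field

@dataclass
class Device:
    settings: dict = field(default_factory=lambda: {"caption_size": 12, "volume": 5})

    def identify_characteristic(self, sensor_input: dict) -> str:
        # Stand-in classifier: infer a characteristic from sensor input.
        if sensor_input.get("camera_detected_hearing_aid"):
            return "hard_of_hearing"
        return "default"

    def alter_settings(self, characteristic: str) -> None:
        # Alter settings related to presentation of content.
        if characteristic == "hard_of_hearing":
            self.settings["captions_on"] = True
            self.settings["caption_size"] = 24

device = Device()
trait = device.identify_characteristic({"camera_detected_hearing_aid": True})
device.alter_settings(trait)
print(device.settings)  # presentation settings now reflect the characteristic
```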
SIGN LANGUAGE VIDEO SEGMENTATION METHOD BY GLOSS FOR SIGN LANGUAGE SENTENCE RECOGNITION, AND TRAINING METHOD THEREFOR
Provided are a method for segmenting a sign language video by gloss to recognize a sign language sentence, and a training method therefor. According to an embodiment, a sign language video segmentation method receives an input sign language sentence video and segments it by gloss. Accordingly, the disclosure suggests a method that segments a sign language sentence video by gloss, analyzes various gloss sequences from a linguistic perspective, understands meaning robustly despite variations in sentences, and translates sign language into appropriate Korean sentences.
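The abstract does not disclose the segmentation algorithm itself; as a toy stand-in, one could approximate gloss boundaries as near-still frames between signs. The motion proxy and threshold below are assumptions for illustration only.

```python
# Toy gloss segmentation: split a sign language clip at local minima of
# inter-frame motion energy. Not the patented method; purely illustrative.
import numpy as np

def segment_by_gloss(frames: np.ndarray, threshold: float = 0.1):
    """frames: (T, H, W) grayscale video. Returns (start, end) frame spans."""
    motion = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
    still = motion < threshold * motion.max()          # near-still frames
    cuts = [0] + [i + 1 for i in range(1, len(still)) if still[i] and not still[i - 1]]
    cuts.append(frames.shape[0])
    return list(zip(cuts[:-1], cuts[1:]))

clip = np.random.rand(120, 64, 64)                     # stand-in for real video
print(segment_by_gloss(clip))                          # per-gloss frame spans
```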
Caption modification and augmentation systems and methods for use by hearing assisted user
A system and method for facilitating communication between an assisted user (AU) and a hearing user (HU) includes receiving an HU voice signal as the AU and HU participate in a call using AU and HU communication devices, transcribing HU voice signal segments into verbatim caption segments, and processing each verbatim caption segment to identify an intended communication (IC) intended by the HU upon uttering the associated HU voice signal segment. For at least a portion of the HU voice signal segments, the method includes (i) using an associated IC to generate an enhanced caption different from the associated verbatim caption, (ii) for each of a first subset of the HU voice signal segments, presenting the verbatim captions via the AU communication device display for consumption, and (iii) for each of a second subset of the HU voice signal segments, presenting the enhanced captions via the AU communication device display for consumption.
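A minimal sketch of the verbatim-versus-enhanced caption split might look as follows; the ASR and intent models are stubs, and the policy deciding which subset receives enhanced captions is a placeholder, not the patent's criterion.

```python
# Hypothetical caption router: verbatim captions for some segments,
# IC-derived enhanced captions for others. All models are stubbed.
def transcribe(segment: bytes) -> str:
    return "uh, could you, um, send me that report today"   # stub ASR

def identify_intended_communication(verbatim: str) -> str:
    return "Please send me the report today."               # stub IC model

def caption_for(segment: bytes, enhance: bool) -> str:
    verbatim = transcribe(segment)
    return identify_intended_communication(verbatim) if enhance else verbatim

for i, segment in enumerate([b"seg0", b"seg1"]):
    # Placeholder policy: enhance every second segment.
    print(caption_for(segment, enhance=(i % 2 == 1)))
```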
AUDIO INFORMATION PROCESSING METHOD, APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
The present disclosure relates to an audio information processing method, an apparatus, an electronic device and a computer-readable storage medium. The audio information processing method includes: determining whether an audio recording start condition is satisfied; collecting audio information associated with an electronic device in response to determining that the audio recording start condition is satisfied; performing word segmentation on text information corresponding to the audio information to obtain word-segmented text information; and displaying the word-segmented text information on a user interface of the electronic device.
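The claimed pipeline (start condition, collection, word segmentation, display) can be sketched as below; the trigger, the recognizer, and the greedy lexicon-based segmenter are all stand-ins.

```python
# Hypothetical pipeline: condition check -> collect audio -> word-segment
# the transcript -> display. The vocabulary and greedy matcher are assumed.
def start_condition_met(event: dict) -> bool:
    return event.get("user_pressed_record", False)

def collect_audio_transcript() -> str:
    return "meetingstartsattenoclock"        # stub: unsegmented transcript

def word_segment(text: str) -> list[str]:
    vocab = ["meeting", "starts", "at", "ten", "oclock"]
    words, i = [], 0
    while i < len(text):
        match = next((w for w in vocab if text.startswith(w, i)), text[i])
        words.append(match)
        i += len(match)
    return words

if start_condition_met({"user_pressed_record": True}):
    print(" ".join(word_segment(collect_audio_transcript())))  # shown on the UI
```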
AUTO-GENERATION OF SUBTITLES FOR SIGN LANGUAGE VIDEOS
Embodiments are disclosed for a subtitle generator for sign language content in digital videos. In some embodiments, a method of subtitle generation includes receiving an input video comprising a representation of one or more sign language gestures, extracting landmark coordinates associated with a signer represented in the input video, determining derivative information from the landmark coordinates, and analyzing the landmark coordinates and the derivative information by at least one gesture detection model to identify a first sign language gesture.
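The landmark-plus-derivative pipeline named above might be sketched as follows; the landmark extractor and the gesture detection model are stubs (a real system might use a pose estimator such as MediaPipe, which is an assumption here).

```python
# Hypothetical pipeline: per-frame landmarks -> derivative (velocity)
# features -> gesture detection model. Both models are stubbed.
import numpy as np

def extract_landmarks(video: np.ndarray) -> np.ndarray:
    frames = video.shape[0]
    return np.random.rand(frames, 21, 2)      # stub: (frames, points, xy)

def detect_gesture(landmarks: np.ndarray, velocity: np.ndarray) -> str:
    return "HELLO"                            # stub gesture detection model

video = np.zeros((30, 64, 64))                # stand-in for the input video
landmarks = extract_landmarks(video)
velocity = np.diff(landmarks, axis=0)         # derivative information
print(detect_gesture(landmarks, velocity))    # first detected sign -> subtitle
```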
Nuance-based augmentation of sign language communication
In certain embodiments, nuance-based augmentation of gesture may be facilitated. In some embodiments, a video stream depicting sign language gestures of an individual may be obtained via a wearable device associated with a user. A textual translation of the sign language gestures in the video stream may be determined. Emphasis information related to the sign language gestures may be identified based on an intensity of the sign language gestures. One or more display characteristics may be determined based on the emphasis information. The textual translation may be caused to be displayed to the user via the wearable device according to the one or more display characteristics. In some embodiments, a unique voice profile for the individual may be determined. A spoken translation of the sign language gestures may be generated according to the textual translation, the unique voice profile, and the emphasis information.
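The emphasis step could be illustrated by a toy mapping from gesture intensity to display characteristics; the thresholds and styles below are assumptions.

```python
# Hypothetical mapping: sign intensity -> display characteristics.
def display_style(intensity: float) -> dict:
    if intensity > 0.8:
        return {"weight": "bold", "size": 28}   # strongly emphasized sign
    if intensity > 0.5:
        return {"weight": "bold", "size": 20}
    return {"weight": "normal", "size": 16}

translation = [("I", 0.3), ("REALLY", 0.9), ("mean it", 0.6)]
for text, intensity in translation:
    print(text, display_style(intensity))       # styled for the wearable display
```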
SKIN AUDIBLE WATCH FOR ORIENTATION IDENTIFICATION AND AN ORIENTATION IDENTIFICATION METHOD
A skin audible watch for orientation identification includes a dial (1) and a strap (2). A plurality of sound collection modules (3) are arranged along a circumference of the dial (1), and the sound collection modules (3) are sequentially connected with a digital filter (4), an analog-to-digital converter (5), a single-chip microcomputer (6), and a row and column drive module (7). The single-chip microcomputer (6) is also connected with vibration motors (8) and a gyroscope (9); the number of vibration motors (8) corresponds to the number of orientations. The digital filter (4), the analog-to-digital converter (5), the single-chip microcomputer (6), the row and column drive module (7), the vibration motors (8), and the gyroscope (9) are located inside the dial (1). The row and column drive module (7) is connected with a current contact pin (12), and a free end of the current contact pin (12) extends out of a surface of the vibration motors (8).
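The orientation logic implied by the microphone ring, gyroscope, and per-direction motors could be sketched as below; eight directions, the energy-argmax rule, and the heading correction are assumptions for illustration.

```python
# Hypothetical orientation cue: loudest microphone picks the direction,
# the gyroscope heading keeps the cue world-referenced, and the matching
# vibration motor is fired.
import numpy as np

N_DIRS = 8                                     # assumed: one motor per direction

def sound_direction(mic_energies: np.ndarray) -> int:
    return int(np.argmax(mic_energies))        # loudest microphone wins

def motor_index(direction: int, gyro_heading_deg: float) -> int:
    offset = int(round(gyro_heading_deg / (360 / N_DIRS)))
    return (direction - offset) % N_DIRS       # compensate wrist rotation

energies = np.array([0.1, 0.2, 0.9, 0.3, 0.1, 0.0, 0.1, 0.2])
print("fire motor", motor_index(sound_direction(energies), gyro_heading_deg=45.0))
```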
Captioning communication systems
A method to generate a contact list may include receiving an identifier of a first communication device at a captioning system. The first communication device may be configured to provide first audio data to a second communication device. The second communication device may be configured to receive first text data of the first audio data from the captioning system. The method may further include receiving and storing contact data from each of multiple communication devices at the captioning system, selecting, as selected contact data, the contact data from the multiple communication devices that includes the identifier of the first communication device, and generating a contact list based on the selected contact data. The method may also include sending the contact list to the first communication device to provide the contact list as contacts for presentation on an electronic display of the first communication device.
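A minimal sketch of the selection-and-generation step follows; the data shapes are assumed, and the matching rule is simply that an upload's contacts include the first device's identifier.

```python
# Hypothetical contact-list generation at the captioning system.
def generate_contact_list(identifier: str, uploads: list[dict]) -> list[dict]:
    # Select contact data that includes the first device's identifier.
    selected = [u for u in uploads if identifier in u["contact_numbers"]]
    # Build the list sent back to the first device for display.
    return [{"name": u["owner"], "number": u["owner_number"]} for u in selected]

uploads = [
    {"owner": "Alice", "owner_number": "555-0101",
     "contact_numbers": ["555-0100", "555-0199"]},
    {"owner": "Bob", "owner_number": "555-0102",
     "contact_numbers": ["555-0123"]},
]
print(generate_contact_list("555-0100", uploads))  # [{'name': 'Alice', ...}]
```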
Display control device, communication device, display control method, and recording medium
The disclosure includes: a moving image acquisition unit configured to acquire moving image data obtained through moving image capturing of at least a mouth part of an utterer; a lip detection unit configured to detect a lip part from the moving image data and detect motion of the lip part; a moving image processing unit configured to generate a moving image enhanced to increase the motion of the lip part detected by the lip detection unit; and a display control unit configured to control a display panel to display the moving image generated by the moving image processing unit.
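A toy version of the enhancement step might amplify frame-to-frame motion inside a lip region before display; the fixed bounding box and linear amplification below stand in for real lip detection and motion magnification.

```python
# Hypothetical lip-motion enhancement: exaggerate temporal differences
# inside an assumed lip bounding box.
import numpy as np

def enhance_lip_motion(frames: np.ndarray, box, gain: float = 2.0) -> np.ndarray:
    """frames: (T, H, W) grayscale clip; box: (y0, y1, x0, x1) lip region."""
    y0, y1, x0, x1 = box
    orig = frames.astype(float)
    out = orig.copy()
    for t in range(1, len(frames)):
        delta = orig[t, y0:y1, x0:x1] - orig[t - 1, y0:y1, x0:x1]
        out[t, y0:y1, x0:x1] = orig[t, y0:y1, x0:x1] + (gain - 1.0) * delta
    return np.clip(out, 0, 255).astype(np.uint8)

clip = (np.random.rand(10, 120, 160) * 255).astype(np.uint8)
enhanced = enhance_lip_motion(clip, box=(60, 90, 50, 110))  # ready for display
```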
Deaf-specific language learning system and method
Disclosed is a language learning technology for deaf people. A deaf-specific language learning system includes: a sound input device configured to receive a voice from an external source; a learning server configured to store learning data and correction information; a signal processor configured to output voice pattern information corresponding to a voice signal received from the sound input device; a learning processor configured to output learning pattern information regarding the learning data received from the learning server and also output a learning result through similarity analysis; and an actuator controller configured to vibrate a vibration actuator according to the voice pattern information and the learning pattern information.
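The similarity-analysis and actuation steps could be sketched as follows; cosine similarity and the vibration pattern shape are assumptions, not the disclosed method.

```python
# Hypothetical learning result: compare voice and learning patterns, then
# drive the vibration actuator with the reference pattern.
import numpy as np

def similarity(voice: np.ndarray, reference: np.ndarray) -> float:
    return float(np.dot(voice, reference) /
                 (np.linalg.norm(voice) * np.linalg.norm(reference)))

def drive_actuator(pattern: np.ndarray) -> None:
    for level in pattern:                      # stub: one pulse per time step
        print(f"vibrate at {level:.2f}")

voice_pattern = np.array([0.2, 0.8, 0.5, 0.1])     # from the sound input device
learning_pattern = np.array([0.3, 0.7, 0.6, 0.2])  # from the learning server
print("learning result:", similarity(voice_pattern, learning_pattern))
drive_actuator(learning_pattern)
```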