Patent classifications
G09B21/009
METHOD AND WEARABLE DEVICE FOR DETECTING AND VERBALIZING NONVERBAL COMMUNICATION
A triboelectric sensor device with a substantially cylindrical nonconductive core, and a conductive fiber substantially helically disposed around the nonconductive core in an axial direction thereof. Example implementations also include a method of extracting communication from body position by transforming one or more training body position inputs via principal component analysis, generating training input to a support vector machine (SVM) based on a target body position, and generating one or more SVM classification outputs associated with the target body position.
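The PCA-plus-SVM classification pipeline described in this abstract can be sketched as follows. This is a minimal illustration using scikit-learn, with synthetic sensor vectors and class labels standing in for real body-position data; all names and dimensions here are illustrative, not taken from the patent:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for training body-position inputs:
# two gesture classes, each a 16-dimensional sensor vector.
class_a = rng.normal(loc=0.0, scale=0.3, size=(40, 16))
class_b = rng.normal(loc=2.0, scale=0.3, size=(40, 16))
X = np.vstack([class_a, class_b])
y = np.array([0] * 40 + [1] * 40)

# Reduce dimensionality with PCA, then classify with an SVM,
# mirroring the transform-then-classify order in the abstract.
model = make_pipeline(PCA(n_components=4), SVC(kernel="rbf"))
model.fit(X, y)

# Classify a new target body position.
target = rng.normal(loc=2.0, scale=0.3, size=(1, 16))
print(model.predict(target))
```

In practice the SVM would be trained on labeled gesture recordings from the wearable sensor rather than on synthetic Gaussian clusters.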
Sign language information processing method and apparatus, electronic device and readable storage medium
The sign language information processing method and apparatus, electronic device, and readable storage medium provided by the present disclosure achieve real-time collection of language data in a user's current communication by obtaining voice information and video information collected by a user terminal in real time. The method then matches each speaker with his or her speech content by determining, in the video information, the speaking object corresponding to the voice information. Finally, it superimposes an augmented reality (AR) sign language animation corresponding to the voice information onto the gesture area corresponding to the speaking object to obtain a sign language video, so that the user can identify the corresponding speaker when viewing the AR sign language animation. An improved user experience can thus be provided.
INDIVIDUALIZED REHABILITATION TRAINING OF A HEARING PROSTHESIS RECIPIENT
Presented herein are techniques for improving the user experience of implantable hearing prostheses through improvements in post-implantation clinical care.
APPARATUS AND METHOD FOR EDUCATION AND LEARNING
Embodiments in accordance with the present disclosure provide an apparatus that facilitates education and learning through tactile sensory demonstration and action. The apparatus includes a first part and a second part attached to the first part. The first part may be worn by a guide, while the second part may be worn by a child or a student undergoing training. The apparatus allows the guide to control the movements of the child or the student.
Automated sign language translation and communication using multiple input and output modalities
Methods, apparatus and systems for recognizing sign language movements using multiple input and output modalities. One example method includes capturing a movement associated with the sign language using a set of visual sensing devices, the set of visual sensing devices comprising multiple apertures oriented with respect to the subject to receive optical signals corresponding to the movement from multiple angles, generating digital information corresponding to the movement based on the optical signals from the multiple angles, collecting depth information corresponding to the movement in one or more planes perpendicular to an image plane captured by the set of visual sensing devices, producing a reduced set of digital information by removing at least some of the digital information based on the depth information, generating a composite digital representation by aligning at least a portion of the reduced set of digital information, and recognizing the movement based on the composite digital representation.
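The depth-based reduction step described above (removing captured digital information whose depth falls outside the region of interest) can be illustrated with a toy NumPy sketch. The depth band limits and array sizes below are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)

# Digital information from one aperture: per-pixel intensities,
# plus a depth value (metres) for each pixel.
intensity = rng.uniform(0.0, 1.0, size=(8, 8))
depth = rng.uniform(0.5, 3.0, size=(8, 8))

# Keep only pixels whose depth lies in the band where the signer's
# hands are expected; everything else is treated as background
# and zeroed out, yielding the "reduced set of digital information".
near, far = 0.8, 1.5
mask = (depth >= near) & (depth <= far)
reduced = np.where(mask, intensity, 0.0)

print(mask.sum(), "of", mask.size, "pixels retained")
```

A real system would apply this per aperture and then align the reduced views into the composite representation used for recognition.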
Automated real time interpreter service
Aspects of the present disclosure relate to systems and methods that aid users with hearing and/or speech impairments in conversing with a remote party by phone without human assistance. In one aspect, an application residing on a user's device, such as a smartphone, tablet computer, laptop, etc., may be used to initiate a phone call to a recipient. Upon initiating the phone call locally, a service residing on a server may receive a request to initiate a connection to the recipient. Once the recipient answers, the user may converse with the recipient by providing text input to the local app. The text input may be transmitted to the service, which may use a text-to-speech converter to convert the received text to speech that can be delivered to the recipient.
Telephone system for the hearing impaired
A telephone system is described herein that is configured to assist a hearing-impaired person with telephone communications as well as face-to-face conversations. In telephone communication sessions, the telephone system is configured to audibly emit spoken utterances while simultaneously depicting a transcription of the spoken utterances on a display. When the telephone system is not employed in a telephone communication session, it is configured to display transcriptions of spoken utterances of people who are in proximity to the telephone system.
METHOD, COMPUTING DEVICE, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM TO TRANSLATE AUDIO OF VIDEO INTO SIGN LANGUAGE THROUGH AVATAR
A sign language translation method performed by at least one processor includes setting a sign language translation avatar in a video call, translating speech of at least one speaker into sign language during the video call, and displaying the sign language through the sign language translation avatar during the video call.
SPATIALLY ACCURATE SIGN LANGUAGE CHOREOGRAPHY IN MULTIMEDIA TRANSLATION SYSTEMS
Systems, methods, and computer-readable media herein provide for real-time manipulation and animation of 3D rigged virtual models to generate sign language translation. Source video and audio data associated with content is provided to a neural network to determine choreographic actions that may be used to modify and animate the articulation control points of a 3D model within a 3D space. The animated 3D virtual model may be presented in relation to the source content to provide sign language translation of the source content.
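The animation step above (driving a rigged model's articulation control points over time from choreographic actions) can be sketched as simple keyframe interpolation. The joint names, times, and angles below are illustrative stand-ins, not values from any particular system:

```python
import numpy as np

# Keyframed angles (degrees) for two illustrative articulation
# control points of a rigged 3D model, at 0.0 and 1.0 seconds.
keyframes = {
    "wrist": {0.0: 0.0, 1.0: 90.0},
    "elbow": {0.0: 10.0, 1.0: 45.0},
}

def pose_at(t, frames):
    """Linearly interpolate each control point's angle at time t."""
    pose = {}
    for joint, kf in frames.items():
        times = sorted(kf)
        angles = [kf[k] for k in times]
        pose[joint] = float(np.interp(t, times, angles))
    return pose

print(pose_at(0.5, keyframes))  # pose halfway between keyframes
```

In the system described, the keyframe targets would come from the neural network's choreographic actions rather than being authored by hand.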
Visual feedback system
A visual feedback system can include a display panel, an interface unit, and at least one visual feedback device. The at least one visual feedback device can be configured to provide cues for audio generated within a virtual environment.