Patent classifications
G09B21/00
SENTIMENT-BASED INTERACTIVE AVATAR SYSTEM FOR SIGN LANGUAGE
Systems and methods for presenting an avatar that speaks sign language based on the sentiment of a speaker are disclosed herein. A translation application running on a device receives a content item comprising a video and an audio, wherein the audio comprises a first plurality of spoken words in a first language. The video comprises a character speaking the first plurality of spoken words in the first language. The translation application translates the first plurality of spoken words of the first language into a first sign of a first sign language. The translation application determines an emotional state expressed by the character based on sentiment analysis. The translation application generates an avatar that speaks the first sign of the first sign language, where the avatar exhibits the determined emotional state. The content item and the avatar are presented for display on the device.
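A minimal Python sketch of the described pipeline, assuming hypothetical helpers translate_to_sign and classify_sentiment and a simple Avatar record; the abstract does not specify the translation or sentiment models, so the keyword-based sentiment scoring below is purely illustrative.

from dataclasses import dataclass

@dataclass
class Avatar:
    sign_gloss: str          # sign(s) the avatar performs
    emotional_state: str     # e.g. "happy", "sad", "neutral"

def translate_to_sign(spoken_words):
    # Placeholder: a real system would map spoken words to sign-language glosses.
    return " ".join(w.upper() for w in spoken_words)

def classify_sentiment(spoken_words):
    # Placeholder sentiment analysis over the transcript (illustrative word lists).
    positive = {"great", "happy", "love"}
    negative = {"bad", "sad", "hate"}
    score = sum((w in positive) - (w in negative) for w in spoken_words)
    return "happy" if score > 0 else "sad" if score < 0 else "neutral"

def build_avatar(spoken_words):
    # Combine the translated sign with the determined emotional state.
    return Avatar(sign_gloss=translate_to_sign(spoken_words),
                  emotional_state=classify_sentiment(spoken_words))

print(build_avatar(["i", "love", "this", "show"]))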
Devices and methods for providing tactile feedback
A device for providing a tactile feedback includes an imaging device configured to capture an image of a face of a subject, a tactile feedback device, and a controller communicatively coupled to the imaging device and the tactile feedback device. The controller comprises at least one processor and at least one memory storing computer-readable and executable instructions that, when executed by the processor, cause the controller to: process the image, determine a type of a facial expression based on the processed image, determine a level of the facial expression of the type based on the processed image, determine a tactile feedback intensity of the tactile feedback device based on the level of the facial expression, and control the tactile feedback device to provide a tactile feedback having the tactile feedback intensity.
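A minimal Python sketch of the controller logic, assuming illustrative expression labels and a linear mapping from expression level to vibration intensity; the classifier, the per-expression weights, and the vibrate() device call are all assumptions, not taken from the patent.

def classify_expression(image):
    # Placeholder: a real controller would run a facial-expression model here.
    # Returns (expression_type, level) with level in [0.0, 1.0].
    return "smile", 0.6

def feedback_intensity(expression_type, level, max_intensity=255):
    # Assumed mapping: scale intensity linearly with the expression level,
    # weighted per expression type.
    weights = {"smile": 1.0, "frown": 0.8, "surprise": 0.9}
    return int(max_intensity * level * weights.get(expression_type, 0.5))

def control_loop(image, tactile_device):
    expr, level = classify_expression(image)
    tactile_device.vibrate(feedback_intensity(expr, level))

class StubTactileDevice:
    def vibrate(self, intensity):
        print(f"vibrating at intensity {intensity}")

control_loop(image=None, tactile_device=StubTactileDevice())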
Presentation of communications
A method to present communications is provided. The method may include obtaining, at a device, a request from a user to play back a stored message that includes audio. In response to obtaining the request, the method may include directing the audio of the message to a transcription system from the device. In these and other embodiments, the transcription system may be configured to generate text that is a transcription of the audio in real-time. The method may further include obtaining, at the device, the text from the transcription system and presenting, by the device, the text generated by the transcription system in real-time. In response to obtaining the text from the transcription system, the method may also include presenting, by the device, the audio such that the text as presented is substantially aligned with the audio.
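A minimal Python sketch of keeping presented text substantially aligned with audio playback, assuming the transcription system returns timestamped (start_seconds, text) segments; that segment format and the stubbed playback call are assumptions.

import time

def present_message(transcript_segments, play_audio_from):
    """Print each transcript segment at approximately its audio start time.

    transcript_segments: list of (start_seconds, text) -- an assumed format for
    what the transcription system returns; play_audio_from is a playback stub.
    """
    start = time.monotonic()
    for seg_start, text in sorted(transcript_segments):
        delay = seg_start - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)           # keep the text substantially aligned with the audio
        play_audio_from(seg_start)
        print(text)

present_message(
    [(0.0, "Hello,"), (0.5, "please call me back.")],
    play_audio_from=lambda t: None,     # stand-in for actual audio playback
)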
Method, apparatus, and terminal for providing sign language video reflecting appearance of conversation partner
Disclosed is a method of providing a sign language video reflecting an appearance of a conversation partner. The method includes recognizing a speech language sentence from speech information, and recognizing an appearance image and a background image from video information. The method further comprises acquiring multiple pieces of word-joint information corresponding to the speech language sentence from a joint-information database, sequentially inputting the word-joint information to a deep learning neural network to generate sentence-joint information, generating a motion model on the basis of the sentence-joint information, and generating a sign language video in which the background image and the appearance image are synthesized with the motion model. The method provides a natural communication environment between a sign language user and a speech language user.
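A minimal Python sketch of the sequential step that turns word-joint information into sentence-joint information, assuming each word's joint information is a fixed-length vector; the toy blending update below stands in for the trained deep learning neural network described in the abstract.

def sentence_joints_from_words(word_joint_seq, hidden_size=4):
    """Sequentially fold per-word joint vectors into sentence-joint information.

    word_joint_seq: list of equal-length lists of floats (assumed format).
    A real system would use a trained recurrent or transformer model; this
    toy update simply blends each word's joints into a running state.
    """
    state = [0.0] * hidden_size
    sentence_joints = []
    for word_joints in word_joint_seq:
        # Toy recurrent update: exponential blend of previous state and input.
        state = [0.7 * s + 0.3 * x for s, x in zip(state, word_joints)]
        sentence_joints.append(list(state))
    return sentence_joints

print(sentence_joints_from_words([[0.1, 0.2, 0.3, 0.4], [0.5, 0.5, 0.5, 0.5]]))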
Automatically modifying display presentations to programmatically accommodate for visual impairments
Methods, apparatus, systems, computing devices, computing entities, and/or the like for identifying one or more visual impairments of a user, mapping the visual impairments to one or more accessibility solutions (e.g., program code entries), and dynamically modifying a display presentation based at least in part on the identified accessibility solutions.
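A minimal Python sketch of mapping identified impairments to accessibility solutions and merging them into display settings; the impairment names and setting keys are illustrative assumptions.

# Assumed mapping from identified impairments to accessibility solutions.
ACCESSIBILITY_SOLUTIONS = {
    "low_vision": {"font_scale": 1.5, "high_contrast": True},
    "color_blindness": {"palette": "deuteranopia_safe"},
    "photosensitivity": {"reduce_motion": True, "brightness": 0.7},
}

def modify_display(user_impairments, base_settings=None):
    """Merge the accessibility solutions for each impairment into display settings."""
    settings = dict(base_settings or {"font_scale": 1.0, "high_contrast": False})
    for impairment in user_impairments:
        settings.update(ACCESSIBILITY_SOLUTIONS.get(impairment, {}))
    return settings

print(modify_display(["low_vision", "color_blindness"]))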
Transportation system used by individuals having a visual impairment utilizing 5G communications
A computer-implemented system and method for a transportation system comprise, using a processor associated with a service, initializing information collectors and response monitors by loading configurations, user settings, and data structures to capture device information from a plurality of devices that each run a virtual agent (VA). Fifth-generation (5G) communication links are established between 5G server orchestration service instances (SOSIs) and respective VAs on the devices. Live status information captured by the SOSIs is received from the various devices. A 5G dynamic ad-hoc network (DAHN) connects a user device of a user having a visual impairment and a vehicle stop device, the DAHN creation being triggered by the user device entering a stop device boundary. An SOSI receives user-vehicle stop information after connecting to the DAHN. Control information related to the user is transmitted to the vehicle device's VA when the user device is located within the stop device boundary.
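A minimal Python sketch of the boundary-triggered step, assuming the stop device boundary is a simple radius around the stop's coordinates; the DAHN creation and control-information callbacks are stubs, since the 5G orchestration interfaces are not specified in the abstract.

import math

def inside_stop_boundary(user_pos, stop_pos, radius_m=50.0):
    """Flat-earth distance check; adequate only for small boundaries (assumption)."""
    dx = (user_pos[0] - stop_pos[0]) * 111_000                                   # degrees latitude -> meters
    dy = (user_pos[1] - stop_pos[1]) * 111_000 * math.cos(math.radians(stop_pos[0]))  # longitude at this latitude
    return math.hypot(dx, dy) <= radius_m

def on_user_position(user_pos, stop_pos, create_dahn, send_control_info):
    """Trigger DAHN creation and control-info transfer when the user enters the boundary."""
    if inside_stop_boundary(user_pos, stop_pos):
        dahn = create_dahn()          # stubbed: ad-hoc network between user and stop devices
        send_control_info(dahn)       # stubbed: SOSI -> vehicle-device VA

# Example with stubbed network callbacks.
on_user_position(
    user_pos=(40.7128, -74.0060),
    stop_pos=(40.7129, -74.0061),
    create_dahn=lambda: "dahn-1",
    send_control_info=lambda dahn: print(f"control info sent over {dahn}"),
)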
A VOTING APPARATUS
A voting apparatus for receiving a ballot paper comprises a voting aid for overlying the ballot paper, the voting aid comprising a respective ballot marking aid for each voting option of the ballot paper. The apparatus includes an audio system and an activation button for each voting option. The audio system audibly renders the relevant ballot information in response to operation of any one of the buttons.
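A minimal Python sketch of the button-to-audio behaviour, assuming one activation button per voting option and a stubbed audio system; the option list and spoken prompt are illustrative.

BALLOT_OPTIONS = ["Candidate A", "Candidate B", "Candidate C"]   # illustrative ballot

def speak(text):
    # Stub for the audio system; a real device would use text-to-speech or recordings.
    print(f"[audio] {text}")

def on_button_pressed(button_index):
    """Audibly render the ballot information for the option whose button was pressed."""
    speak(f"Option {button_index + 1}: {BALLOT_OPTIONS[button_index]}")

on_button_pressed(1)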
System and Method of Managing a Lottery Service for Visually-Impaired Users
A system and a method of managing a lottery service for visually-impaired users allow users to read a lottery ticket with a plurality of braille-inscribed ticket numbers. The system includes a PC device, at least one remote server, at least one physical lottery ticket, and at least one external server. The method begins by scanning the braille-inscribed ticket numbers off the physical lottery ticket with the PC device. The braille-inscribed numbers are converted into a plurality of digital ticket numbers with the PC device. The digital ticket numbers are relayed from the PC device to the remote server. A plurality of winning numbers is then received for the lottery service from the external server with the remote server. If the digital ticket numbers match the plurality of winning numbers, a lottery winning notification is generated with the remote server and is then outputted with the PC device.
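A minimal Python sketch of the ticket-checking flow, using a toy braille-digit table and in-memory stand-ins for the PC device and servers named in the abstract (the braille number sign that normally precedes digits is omitted for brevity).

# Toy subset of braille digit patterns as 6-dot tuples (dots 1-6, 1 = raised).
BRAILLE_DIGITS = {
    (1, 0, 0, 0, 0, 0): "1",
    (1, 1, 0, 0, 0, 0): "2",
    (1, 0, 0, 1, 0, 0): "3",
    (1, 0, 0, 1, 1, 0): "4",
    (1, 0, 0, 0, 1, 0): "5",
}

def convert_braille_cells(cells):
    """PC-device step: convert scanned braille cells into digital ticket numbers."""
    return "".join(BRAILLE_DIGITS[cell] for cell in cells)

def check_ticket(digital_numbers, winning_numbers):
    """Remote-server step: compare the digital ticket numbers with the winning numbers."""
    return "Winning ticket!" if digital_numbers == winning_numbers else "Not a winner."

scanned = [(1, 0, 0, 1, 0, 0), (1, 0, 0, 0, 1, 0)]   # scans to "35"
print(check_ticket(convert_braille_cells(scanned), winning_numbers="35"))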
NONVERBAL MULTI-INPUT AND FEEDBACK DEVICES FOR USER INTENDED COMPUTER CONTROL AND COMMUNICATION OF TEXT, GRAPHICS AND AUDIO
Disclosed are devices, systems, and methods for nonverbal multi-input and feedback devices for user-intended computer control and communication of text, graphics, and audio. The system comprises sensory devices comprising sensors to detect a user inputting gestures on sensor interfaces, and a cloud system comprising a processor for retrieving the inputted gestures detected by the sensors on the sensory devices, comparing the inputted gestures to gestures stored in databases on the cloud system, identifying at least a text, graphics, and/or speech command comprising a word that corresponds to the inputted gesture, showing the command to the user, and transmitting the command to another device.
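A minimal Python sketch of the cloud-side gesture lookup, assuming gestures are reduced to small feature tuples and matched by nearest neighbour; the feature encoding, the database contents, and the distance threshold are assumptions.

import math

# Assumed gesture database: feature vector -> command (text/speech word).
GESTURE_DB = {
    (0.9, 0.1): "yes",
    (0.1, 0.9): "no",
    (0.5, 0.5): "help",
}

def match_gesture(features, max_distance=0.3):
    """Return the stored command closest to the inputted gesture features, if close enough."""
    best_cmd, best_dist = None, float("inf")
    for stored, command in GESTURE_DB.items():
        dist = math.dist(features, stored)
        if dist < best_dist:
            best_cmd, best_dist = command, dist
    return best_cmd if best_dist <= max_distance else None

print(match_gesture((0.85, 0.15)))   # -> "yes"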
PASSWORD INPUT METHOD
A password input method is disclosed. The password input method is conducted by a microprocessor of a touch-sensitive password input device, wherein the touch-sensitive password input device is integrated in an electronic device such as a point-of-sale payment terminal, a smartphone, a tablet computer, an all-in-one computer, a door station, or a keyless electronic door lock. When the password input method according to the present invention is conducted, the touch-sensitive password input device is controlled to guide a visually impaired person to successfully complete a password input operation with high security.
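A minimal Python sketch of one way a touch-sensitive keypad could guide a visually impaired user, assuming an audio prompt announces the digit under the finger and a double tap confirms it; the abstract does not describe the patent's actual guidance scheme.

def announce(text):
    # Stub for the device's audio or haptic prompt (e.g. delivered privately via earphone).
    print(f"[prompt] {text}")

def enter_password(events, expected_length=4):
    """Accumulate digits from (event_type, digit) tuples: 'touch' announces the digit,
    'double_tap' confirms the most recently announced digit (assumed scheme)."""
    entered, current = [], None
    for event_type, digit in events:
        if event_type == "touch":
            current = digit
            announce(f"digit {digit}")
        elif event_type == "double_tap" and current is not None:
            entered.append(current)
            announce(f"{len(entered)} of {expected_length} entered")
        if len(entered) == expected_length:
            break
    return "".join(entered)

print(enter_password([("touch", "3"), ("double_tap", "3"),
                      ("touch", "7"), ("double_tap", "7"),
                      ("touch", "1"), ("double_tap", "1"),
                      ("touch", "9"), ("double_tap", "9")]))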