Patent classifications
G09B21/009
Apparatus and method for education and learning
Embodiments in accordance with the present disclosure provide an apparatus that facilitates education and learning through tactile sensory demonstration and action. The apparatus includes a first part and a second part attached to the first part. The first part may be worn by a guide, while the second part may be worn by a child or a student undergoing training. The apparatus allows the guide to control the movements of the child or the student.
CONVERTING SIGN LANGUAGE
Methods and devices related to converting sign language are described. In an example, a method can include: receiving, at a processing resource of a computing device via a radio of the computing device, first signaling including at least one of text data, audio data, or video data, or any combination thereof; converting, at the processing resource, at least one of the text data, the audio data, or the video data to data representing a sign language; generating, at the processing resource, different video data based at least in part on the data representing the sign language, wherein the different video data comprises instructions for display of a performance of the sign language; transmitting second signaling representing the different video data from the processing resource to a user interface; and displaying the performance of the sign language on the user interface in response to the user interface receiving the second signaling.
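The claimed pipeline (receive text/audio/video, convert it to data representing a sign language, generate display instructions) can be sketched as a simple text-to-gloss stage. The gloss dictionary, clip identifiers, and function names below are hypothetical placeholders, not the patent's actual data or API.

```python
# Hypothetical sketch of the text -> sign-language data -> display-instruction
# pipeline described above. GLOSS_CLIPS and its clip IDs are assumptions.

GLOSS_CLIPS = {
    "hello": "clip_hello.mp4",
    "thank": "clip_thank.mp4",
    "you": "clip_you.mp4",
}

def text_to_sign_data(text: str) -> list[str]:
    """Convert text data to data representing a sign language (gloss tokens)."""
    return [w for w in text.lower().split() if w in GLOSS_CLIPS]

def generate_display_instructions(glosses: list[str]) -> list[str]:
    """Generate the 'different video data': an ordered playlist of sign clips."""
    return [GLOSS_CLIPS[g] for g in glosses]

# The resulting playlist stands in for the second signaling sent to the
# user interface for display of the sign-language performance.
playlist = generate_display_instructions(text_to_sign_data("Hello thank you"))
```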
Smart Wearable Sensor-Based Bi-Directional Assistive Device
Wearable systems and methods allow two-way communication, conveying ASL signing to a deaf person and translating the ASL to sound for communication with hearing people. The system also displays signs visually to a deaf-mute person, generated from the corresponding sound or text input. The input can be captured on the device or streamed to the device over short-range radio, and the on-board computer translates the different lingual inputs to a final output via a wireless device speaker or display. The LIDAR sensor on the device senses the depth of the signs and operates in low-light settings.
Sound detection alerts
Custom alerts may be generated based on sound type indicators determined using a machine learning classification model trained on user-provided sound recordings and user-defined sound type indicators. A device may provide a sound recording and a type indicator identifying an entity that made a sound in the recording for storage in a database that includes a plurality of sound recordings associated with a plurality of type indicators. A machine learning classification model may be trained based on the stored recordings, including the user-defined recordings. The model may be used to classify sounds recorded by other devices and generate alerts identifying the type of sound. Thus, multiple users may contribute data to customize machine learning models that recognize sounds and generate alerts based on user-defined identifiers.
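The train-on-user-recordings/classify/alert flow above can be sketched with a nearest-centroid classifier standing in for the patent's unspecified machine-learning model; the feature vectors, labels, and alert wording are illustrative assumptions.

```python
import math

# Sketch: users contribute (feature_vector, type_indicator) pairs; a
# nearest-centroid model is a stand-in for the unspecified ML classifier.

def train(recordings):
    """recordings: list of (features, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for feats, label in recordings:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify_and_alert(model, feats):
    """Classify a new recording and generate a user-facing alert."""
    label = min(model, key=lambda lbl: math.dist(model[lbl], feats))
    return f"Alert: {label} sound detected"

# Hypothetical shared database of user-labeled recordings (as features).
db = [([0.9, 0.1], "doorbell"), ([0.8, 0.2], "doorbell"),
      ([0.1, 0.9], "smoke alarm"), ([0.2, 0.8], "smoke alarm")]
model = train(db)
alert = classify_and_alert(model, [0.85, 0.15])  # -> "Alert: doorbell sound detected"
```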
Alarm monitoring system
The alarm monitoring system provides various alerts of events that are not readily discernible to those with hearing difficulties. The system generates visual, vibratory, and high-decibel alerts, separately or in combination, to alert users with multisensory impairments to a wide variety of events.
Wearable vibrotactile speech aid
A method for training vibrotactile speech perception in the absence of auditory speech includes selecting a first word, generating a first control signal configured to cause at least one vibrotactile transducer to vibrate against a person's body with a first vibration pattern based on the first word, sampling a second word spoken by the person, generating a second control signal configured to cause at least one vibrotactile transducer to vibrate against the person's body with a second vibration pattern based on the sampled second word, and presenting a comparison between the first word and the second word to the person. An array of vibrotactile transducers can be in contact with the person's body. A method for improving auditory and/or visual speech perception in adverse listening conditions or for hearing-impaired individuals can also include sampling a speech signal, extracting a speech envelope, and generating a control signal configured to cause a vibrotactile transducer to vibrate against a person's body with an intensity that varies over time based on the speech envelope.
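The envelope-driven vibration in the last sentence can be sketched as full-wave rectification followed by a moving-average low-pass filter, with the envelope scaled to a transducer drive level; the window length and drive range are assumptions, not values from the patent.

```python
def speech_envelope(samples, window=64):
    """Extract a speech envelope: full-wave rectify, then moving average."""
    rectified = [abs(s) for s in samples]
    env, acc = [], 0.0
    for i, v in enumerate(rectified):
        acc += v
        if i >= window:
            acc -= rectified[i - window]  # drop the sample leaving the window
        env.append(acc / min(i + 1, window))
    return env

def to_vibration_intensity(env, max_drive=255):
    """Map envelope amplitude to a transducer drive level in 0..max_drive."""
    peak = max(env) or 1.0  # avoid division by zero on silence
    return [int(max_drive * v / peak) for v in env]
```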
INTERACTION SYSTEM, METHOD AND DEVICE
Embodiments of the disclosure provide an interaction system, method and device. The interaction system includes an electroencephalogram electrode array covering a surface of a head of a user and including at least one electroencephalogram electrode; a micro microphone array comprising at least one micro microphone; a signal processing circuit electrically connected with the electroencephalogram electrode array and the micro microphone array respectively; a memory configured to store data and electrically connected with the signal processing circuit; and at least one program stored in the memory and configured to implement, when executed by the signal processing circuit, determining a position of a sound source based on audio signals of the sound source output by the micro microphone array, determining an electroencephalogram electrode of the electroencephalogram electrode array corresponding to the position of the sound source, and invoking the corresponding electrode to output a touch signal, enabling the user to perceive the sound.
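The step of determining which electrode corresponds to the sound-source position could be a nearest-angle lookup over the electrode layout; the electrode azimuths and indices below are assumptions for illustration only.

```python
# Hypothetical sketch: pick the scalp electrode nearest the estimated
# sound-source azimuth so the touch signal indicates direction.
# Electrode azimuths (degrees, 0 = front, clockwise) are assumed values.

ELECTRODE_AZIMUTHS = {0: 0.0, 1: 90.0, 2: 180.0, 3: 270.0}

def electrode_for_azimuth(azimuth_deg: float) -> int:
    """Return the index of the electrode closest to the source direction."""
    def angular_dist(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)  # wrap-around distance on the circle
    return min(ELECTRODE_AZIMUTHS,
               key=lambda idx: angular_dist(ELECTRODE_AZIMUTHS[idx], azimuth_deg))
```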
Vibrotactile music perception wearable
A wearable includes an article of clothing, an intelligent control and a plurality of sets of vibration motors operatively connected to the intelligent control and positioned on the article of clothing to provide tactile feedback wherein each of the sets of vibration motors is associated with a different frequency range of audio which may be communicated to the wearable via Bluetooth or otherwise.
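The association of each motor set with a different frequency range can be sketched by summing spectral magnitude per band to produce one drive value per motor set; the band edges and the naive DFT are illustrative assumptions, not the patent's design.

```python
import cmath

# Sketch of routing audio to motor sets by frequency range. BANDS and the
# naive DFT below are assumptions standing in for the wearable's design.

BANDS = [(20, 250), (250, 2000), (2000, 8000)]  # Hz, one motor set per band

def band_energies(samples, sample_rate):
    """Naive DFT; sum magnitude per band -> one drive value per motor set."""
    n = len(samples)
    energies = [0.0] * len(BANDS)
    for k in range(1, n // 2):
        freq = k * sample_rate / n
        mag = abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                      for t in range(n))) / n
        for b, (lo, hi) in enumerate(BANDS):
            if lo <= freq < hi:
                energies[b] += mag
    return energies
```

A 1 kHz tone, for instance, should concentrate its energy in the middle band, driving only that set of motors.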
Systems for augmented reality visual aids and tools
An Adaptive Control Driven System (ACDS) 99 supports visual enhancement and mitigation of visual challenges. Using basic image modification algorithms and any known hardware, from contact lenses to IOLs to AR hardware glasses, it enables users to enhance vision through a user interface based on a series of adjustments that move, modify, or reshape image sets and components to take full advantage of the remaining useful retinal area, thus addressing aspects of visual challenges heretofore inaccessible to devices which learn needed adjustments.
REALITY-AUGMENTED MORPHOLOGICAL PROCEDURE
Data representative of a physical feature of a morphologic subject is received in connection with a procedure to be carried out with respect to the morphologic subject. A view of the morphologic subject overlaid by a virtual image of the physical feature is rendered for a practitioner of the procedure, including generating the virtual image of the physical feature based on the representative data, and rendering the virtual image of the physical feature within the view in accordance with one or more reference points on the morphologic subject such that the virtual image enables in-situ visualization of the physical feature with respect to the morphologic subject.