Patent classifications
G09B21/00
METHOD AND WEARABLE DEVICE FOR DETECTING AND VERBALIZING NONVERBAL COMMUNICATION
A triboelectric sensor device with a substantially cylindrical nonconductive core, and a conductive fiber substantially helically disposed around the nonconductive core and in an axial direction thereof. Example implementations also include a method of extracting communication from body position, by transforming one or more training body position inputs by a principal component analysis, generating training input to a support vector machine (SVM) based on a target body position, and generating one or more SVM classification outputs associated with the target body position.
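The PCA-then-SVM pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the patent's implementation: the sensor dimensionality, component count, kernel choice, and the synthetic training data are all assumptions, with scikit-learn's `PCA` and `SVC` standing in for the claimed transforms.

```python
# Illustrative sketch of the claimed pipeline: PCA-transform training
# body-position inputs, train an SVM, then classify a target position.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in data: 100 samples of 12 sensor readings, each
# labelled with one of three hypothetical body-position classes.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 12))
y_train = rng.integers(0, 3, size=100)

# Transform inputs by principal component analysis, then classify
# with a support vector machine.
clf = make_pipeline(PCA(n_components=4), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

# Generate an SVM classification output for a target body position.
target = rng.normal(size=(1, 12))
label = clf.predict(target)
```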
Hands-Free Crowd Sourced Indoor Navigation System and Method for Guiding Blind and Visually Impaired Persons
The present invention discloses an indoor Electronic Traveling Aid (ETA) system for blind and visually impaired (BVI) people. The system comprises a headband, an intuitive tactile display with electromyographic (EMG) feedback, a controller, and server-based methods corresponding to three operation modalities. In the first modality, sighted users mark routes, map navigational directions, and create semantic comments for BVI users; this route information is continuously collected and estimated on the ETA servers. In the second modality, BVI users choose routes from the servers and are thereby supplied with real-time navigational guidance. An EMG interface is also used, enabling the user's facial muscles to send commands to the ETA system. In the third modality, BVI users receive real-time audio guidance in complex or unforeseen situations: the ETA provides a crowd-assisted interface and real-time sensory (e.g., video) data, where crowd assistants analyze the situation and help the BVI user to navigate.
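The first two modalities imply a server-side route store: sighted users contribute routes with semantic comments, and BVI users later fetch them for guidance. A minimal sketch of that store follows; the `Route` structure, function names, and sample data are all hypothetical, chosen only to show the contribute/fetch split.

```python
# Hypothetical sketch of the server-side route store implied by the
# first two modalities.
from dataclasses import dataclass, field

@dataclass
class Route:
    name: str
    waypoints: list                  # ordered positions marked by sighted users
    comments: dict = field(default_factory=dict)  # waypoint index -> semantic comment

routes = {}

def mark_route(name, waypoints, comments):
    """Modality 1: a sighted user marks a route with semantic comments."""
    routes[name] = Route(name, waypoints, comments)

def fetch_route(name):
    """Modality 2: a BVI user selects a stored route from the server."""
    return routes[name]

mark_route("lobby-to-lab", [(0, 0), (5, 0), (5, 8)],
           {1: "turn left at the elevators"})
r = fetch_route("lobby-to-lab")
```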
Method of providing contents regarding image capturing to image capture apparatus
A method includes providing a mission regarding image capturing to a user terminal having an image capture function, evaluating an image transmitted from the user terminal in response to the mission, and additionally providing a new mission regarding image capturing to the user terminal in a case where a level of achievement of the mission is determined to satisfy a criterion based on the evaluation.
Systems, methods, and apparatuses for implementing a GPS directional swimming watch for the eyesight impaired
In accordance with embodiments disclosed herein, there are provided systems, methods, and apparatuses for implementing a GPS directional swimming watch for the eyesight impaired. For example, according to one embodiment there is a wearable navigational apparatus including: a mechanical input to receive coordinates for a first location located at an end of a first fixed segment originating from a starting point in a first single cardinal direction; a mechanical input to receive coordinates for a second location located at the end of a second fixed segment originating from the first location in a second single cardinal direction perpendicular to the first cardinal direction, in which the first and second fixed segments form a selected route; a haptic feedback motor having a magnetized compass integrated therein to signal a wearer directional information relative to the first and second locations set, in which the haptic feedback motor signals the wearer to change direction upon any of: (i) reaching the first location, (ii) reaching the second location, and (iii) deviating from any point along the selected route during bidirectional navigation; and a return function to signal to the wearer, via the haptic feedback motor, directional information relative to the starting point from any point along the selected course. Other related embodiments are described.
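The route geometry and signalling conditions above reduce to simple planar logic: two perpendicular fixed segments, a turn signal on reaching either endpoint, and an off-route signal on deviating from the segments. The sketch below is an illustrative assumption, not the patent's firmware; the reach radius and deviation threshold are made-up parameters.

```python
# Illustrative sketch of the two-segment route and the conditions
# under which the haptic motor would signal the wearer.
import math

def route_waypoints(start, d1, dir1, d2, dir2):
    """Build the selected route from two perpendicular fixed segments.
    dir1/dir2 are unit vectors for single cardinal directions,
    e.g. north = (0, 1), east = (1, 0)."""
    p1 = (start[0] + dir1[0] * d1, start[1] + dir1[1] * d1)
    p2 = (p1[0] + dir2[0] * d2, p1[1] + dir2[1] * d2)
    return [start, p1, p2]

def point_to_segment(p, a, b):
    """Shortest distance from point p to the segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.dist(p, a)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.dist(p, (ax + t * dx, ay + t * dy))

def haptic_signal(position, waypoints, reach_radius=5.0, max_deviation=10.0):
    """Return a signal when the wearer should change direction: on
    reaching either location, or on deviating from the selected route."""
    for i, wp in enumerate(waypoints[1:], start=1):
        if math.dist(position, wp) <= reach_radius:
            return f"reached waypoint {i}: turn"
    if min(point_to_segment(position, a, b)
           for a, b in zip(waypoints, waypoints[1:])) > max_deviation:
        return "off route: correct course"
    return None

# Example route: 100 m north from the start, then 50 m east.
wps = route_waypoints((0, 0), 100, (0, 1), 50, (1, 0))
```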
Sign language information processing method and apparatus, electronic device and readable storage medium
The sign language information processing method and apparatus, electronic device, and readable storage medium provided by the present disclosure achieve real-time collection of language data in a user's current communication by obtaining voice information and video information collected by a user terminal in real time. The method then matches each speaker with his or her speech content by determining, in the video information, the speaking object corresponding to the voice information. Finally, an augmented reality (AR) sign language animation corresponding to the voice information is superimposed and displayed on a gesture area corresponding to the speaking object to obtain a sign language video, so that the user can identify the corresponding speaker when viewing the AR sign language animation. This provides an improved user experience.
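The speaker-matching step above can be sketched as follows. This is a hypothetical simplification, not the disclosed method: it assumes each tracked person carries a time interval and a lip-activity flag, and places the animation in a fixed region beside the speaker's bounding box; all field names and sample data are invented.

```python
# Hypothetical sketch: match a voice segment to the on-screen speaker,
# then compute where the AR sign-language animation is superimposed.
def speaking_object(voice_start, voice_end, detections):
    """Pick the tracked person whose visible lip activity overlaps
    the voice segment the most."""
    best, best_overlap = None, 0.0
    for d in detections:
        overlap = min(voice_end, d["end"]) - max(voice_start, d["start"])
        if d["lips_moving"] and overlap > best_overlap:
            best, best_overlap = d, overlap
    return best

def gesture_area(box):
    """Region beside the speaker's bounding box where the AR
    sign-language animation would be displayed."""
    x, y, w, h = box
    return (x + w, y, w, h)

people = [
    {"id": "A", "start": 0.0, "end": 4.0, "lips_moving": False, "box": (10, 20, 40, 80)},
    {"id": "B", "start": 1.0, "end": 6.0, "lips_moving": True,  "box": (120, 20, 40, 80)},
]
speaker = speaking_object(2.0, 5.0, people)
```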
Audio mobility map
A system for spatial representation of regions of interest for a visually impaired or blind person, comprises the following elements: at least one tactile map comprising a top surface and a bottom surface, the tactile map having tactile reference marks on the top surface and corresponding contact areas on the bottom surface, each tactile reference mark corresponding to a region of interest; a keyboard with a matrix of contact points configured to come into contact with the corresponding contact areas in response to pressure exerted on the tactile map positioned on the keyboard; and an electronic audio box, which can be actuated by the keyboard, the electronic audio box being provided with a multitude of audio recordings, each audio recording being associated with each tactile reference mark on the top surface of the tactile map.
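The mapping at the heart of the audio box is a lookup from a pressed contact point in the keyboard matrix to the recording associated with that tactile reference mark. A minimal sketch, with invented coordinates and file names:

```python
# Hypothetical sketch of the audio box lookup: pressing a tactile
# reference mark closes a contact point in the keyboard matrix, and
# the box plays the recording associated with that mark.
recordings = {
    (0, 2): "entrance.wav",
    (1, 4): "elevator.wav",
    (3, 1): "restrooms.wav",
}

def on_contact(row, col):
    """Return the audio recording for the pressed contact point, if any."""
    return recordings.get((row, col))
```

Swapping the tactile map would amount to loading a different `recordings` table for the same keyboard matrix.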
ELECTROMECHANICAL ACTUATORS FOR HAPTIC FEEDBACK IN ELECTRONIC DEVICES
Electromechanical actuators may be constructed as cylindrical elements with electrodes position around the cylindrical element. The electrodes may receive an electrical signal that causes a core material in the electromechanical actuator to change shape, thus providing haptic feedback to a user, such as when the actuators are integrated with a display screen of a smart phone. A position of the electrodes around the core material may affect a mode of operation of the electromechanical actuators. In one configuration, two electrodes may be located at opposite ends of the cylindrical element along a long axis of the cylinder. In another configuration, two electrodes may be located opposite each other along a circumference of the cylinder. Signals may be applied to the electrodes to generate vibrational feedback or textures on the display screen.
APPARATUS AND METHOD FOR AUGMENTING SIGHT
A method of augmenting sight in an individual. The method comprises obtaining an image of a scene using a camera carried by the individual; transmitting the obtained image to a processor carried by the individual; selecting an image modification to be applied to the image by the processor; operating upon the image to create a modified image using either analog or digital imaging techniques; and displaying the modified image on a display device worn by the individual. The invention also relates to an apparatus for augmenting sight in an individual. The apparatus comprises a camera, carried by the individual, for obtaining an image of a scene viewed by the individual; a display carried by the individual; an image modification input device carried by the individual; and a processor, carried by the individual. The processor modifies the image and displays the modified image on the display carried by the individual.
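The capture-modify-display loop above leaves the modification open; as one illustrative assumption, contrast stretching is a common digital enhancement for low-vision aids. The sketch below shows only that step, with made-up percentile parameters:

```python
# Illustrative stand-in for the "image modification" step: stretch
# contrast so the chosen low/high percentiles map to the full 0..255
# display range.
import numpy as np

def modify_image(image, low_pct=2, high_pct=98):
    """Percentile-based contrast stretch of a grayscale frame."""
    lo, hi = np.percentile(image, [low_pct, high_pct])
    stretched = np.clip((image - lo) * 255.0 / max(hi - lo, 1e-6), 0, 255)
    return stretched.astype(np.uint8)

# A synthetic low-contrast frame standing in for a camera capture.
frame = np.linspace(0, 200, 64).reshape(8, 8)
out = modify_image(frame)
```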
VISION-ASSIST DEVICES AND METHODS OF CALIBRATING VISION-ASSIST DEVICES
Vision-assist devices and methods for calibrating a position of a vision-assist device worn by a user are disclosed. In one embodiment, a method of calibrating a vision-assist device includes capturing a calibration image using at least one capturing device of the vision-assist device, obtaining at least one attribute of the calibration image, and comparing the at least one attribute of the calibration image with a reference attribute. The method further includes determining an adjustment of the at least one image sensor based at least in part on the comparison of the at least one attribute of the calibration image with the reference attribute, and providing an output corresponding to the determined adjustment of the vision-assist device.
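The compare-and-adjust step can be sketched with a single attribute. Purely as an assumption, the attribute here is the measured tilt of a reference line in the calibration image; the tolerance and the sign convention for the correction are invented:

```python
# Hypothetical sketch of the calibration comparison: measure one
# attribute of the calibration image (here, a tilt angle in degrees),
# compare it with the reference attribute, and determine the sensor
# adjustment.
def determine_adjustment(measured_tilt_deg, reference_tilt_deg=0.0, tolerance=1.0):
    """Return the correction to apply to the image sensor, or None
    when the attribute is within tolerance of the reference."""
    error = measured_tilt_deg - reference_tilt_deg
    if abs(error) <= tolerance:
        return None          # within tolerance: no adjustment output
    return -error            # correct opposite to the measured error
```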