Patent classifications
G09B21/04
Method of interactive reading for users of self-scrolling Braille
Electronically displayed Braille dots are laterally propagated against a stationary finger resting on a stationary base for reading Braille. The lateral propagation takes the form of a transverse wave of pins that are raised and lowered in sequence. A method of interactive reading is provided whereby the reading of Braille from the display is computer-synchronized with other events and processes to help users learn Braille, to monitor physiological responses to reading, and to enhance the user's reading experience.
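The transverse-wave propagation described above can be sketched as pins raised and lowered in sequence so a dot appears to travel laterally under a stationary finger. This is a minimal illustration, not the patented implementation; the pin count, timing, and `set_pin` driver interface are all assumptions.

```python
import time

NUM_PINS = 16        # assumed width of the pin row
STEP_DELAY_S = 0.05  # assumed dwell time per raised pin

def propagate_dot(set_pin, num_pins=NUM_PINS, delay=STEP_DELAY_S):
    """Raise and lower pins in sequence so the dot appears to travel
    laterally under a stationary finger (a transverse wave of pins)."""
    for i in range(num_pins):
        set_pin(i, True)    # raise the current pin
        time.sleep(delay)
        set_pin(i, False)   # lower it before raising the next

# Example with a stand-in driver that just records pin state changes:
states = []
propagate_dot(lambda i, up: states.append((i, up)), num_pins=4, delay=0)
# states now holds the raise/lower sequence for pins 0..3
```

Synchronizing with other events, as the abstract describes, would amount to gating each step of this loop on a shared clock or event queue under computer control.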
Neural network model for generation of compressed haptic actuator signal from audio input
A method comprises inputting an audio signal into a machine learning circuit to compress the audio signal into a sequence of actuator signals. The machine learning circuit is trained by receiving a training set of acoustic signals and pre-processing the training set into pre-processed audio data, which includes at least a spectrogram. The training further includes training the machine learning circuit using the pre-processed audio data. The neural network has a cost function based on a reconstruction error and a plurality of constraints. The machine learning circuit generates a sequence of haptic cues corresponding to the audio input, and the sequence of haptic cues is transmitted to a plurality of cutaneous actuators to generate a sequence of haptic outputs.
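The cost function named above combines a reconstruction error with a plurality of constraints. A minimal sketch follows; the specific constraint terms shown (an L1 sparsity penalty on the compressed code and an actuator amplitude limit) are illustrative assumptions, since the abstract does not name them.

```python
def reconstruction_error(x, x_hat):
    """Mean squared error between the input spectrogram frame and its
    reconstruction from the compressed actuator code."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def cost(x, x_hat, code, sparsity_weight=0.1,
         amplitude_limit=1.0, amp_weight=0.1):
    """Reconstruction error plus assumed constraint penalties:
    L1 sparsity on the code, and a hinge penalty for actuator
    signals that exceed an amplitude limit."""
    sparsity = sum(abs(c) for c in code)
    over_amp = sum(max(0.0, abs(c) - amplitude_limit) for c in code)
    return (reconstruction_error(x, x_hat)
            + sparsity_weight * sparsity
            + amp_weight * over_amp)

# Perfect reconstruction with a small code: only the sparsity term remains.
print(cost([1.0, 2.0], [1.0, 2.0], [0.5, -0.5]))  # 0.1
```

In training, minimizing this cost pushes the network toward codes that both reconstruct the spectrogram and stay within what the cutaneous actuators can render.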
Systems and methods for generating tactile 3D maps
A method for generating three-dimensional indicators for the visually impaired. The method includes selecting one or more pre-designed symbols from a plurality of pre-designed symbols; the symbols represent standard building layout features and are sized to be readable by physical touch. The method further includes inserting the selected symbols into a two-dimensional digital layout and generating one or more of an orientation object and a legend object in the layout. The method further includes converting the two-dimensional digital layout into a three-dimensional digital model, and generating an output file including the three-dimensional model in a format compatible with a three-dimensional printing device.
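The pipeline above can be sketched as: place symbols in a 2D layout, then extrude each placement into a 3D solid for printing. The symbol names, touch-readable sizes, and extrusion height here are illustrative assumptions, not values from the patent.

```python
from dataclasses import dataclass

@dataclass
class Placed2D:
    symbol: str    # e.g. "door", "stairs" - assumed symbol-library names
    x: float       # position in the 2D layout (mm)
    y: float
    size_mm: float # sized to be readable by physical touch

def extrude(layout, height_mm=1.5):
    """Convert each 2D symbol placement into a 3D box:
    (x, y, width, depth, raised height). A real exporter would then
    write these solids to an STL or similar 3D-printable format."""
    return [(p.x, p.y, p.size_mm, p.size_mm, height_mm) for p in layout]

layout = [
    Placed2D("door", 10, 20, 8),
    Placed2D("north_arrow", 0, 0, 10),  # stands in for the orientation object
]
model = extrude(layout)
print(model[0])  # (10, 20, 8, 8, 1.5)
```

The final step in the abstract, generating the printer-compatible output file, would serialize these solids with a mesh library rather than the tuples used here.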
COMPUTER VISION METHODS AND SYSTEMS FOR SIGN LANGUAGE TO TEXT/SPEECH
A method for converting a digital image comprising a sign-language sign to text or computer-generated speech: obtaining a web camera stream of a sign-language sign; breaking down the one or more digital video images into a set of singular frames; for each singular frame, converting the digital image in the frame to an imaging library image; providing a machine-learned model; feeding the digital image into the machine-learned model; adding a sequential layer onto the machine-learned model, wherein the sequential layer comprises a first linear drop model to prevent loss from increasing throughout a training process, and a second linear model used to reduce the loss down to a specified number of output classes; for each digital image: resizing the digital image to 224 by 224 pixels, scaling down the digital image, removing each border of the digital image, and randomly rotating the digital image to create a modified digital image; inputting the modified digital image into a tensor; and using the tensor to train the machine-learned model to recognize the sign-language sign.
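The per-frame preprocessing steps listed above (resize, scale down, remove borders, random rotation, pack into a tensor) can be sketched as follows. The image is modeled as a nested list of pixel values; a real pipeline would use PIL/torchvision, and the border width and 90-degree rotation granularity are assumptions for illustration.

```python
import random

TARGET = 224  # resize target from the method

def resize_nearest(img, size):
    """Nearest-neighbour resize of a 2D pixel grid to size x size."""
    h, w = len(img), len(img[0])
    return [[img[r * h // size][c * w // size] for c in range(size)]
            for r in range(size)]

def preprocess(img, border=1, seed=0):
    img = resize_nearest(img, TARGET)                       # resize to 224x224
    img = [[p / 255.0 for p in row] for row in img]         # scale down
    img = [row[border:-border] for row in img[border:-border]]  # remove borders
    k = random.Random(seed).choice([0, 1, 2, 3])            # random rotation (90-degree steps assumed)
    for _ in range(k):
        img = [list(row) for row in zip(*img[::-1])]        # rotate 90 degrees
    return img  # nested list standing in for the training tensor

frame = [[0, 255], [255, 0]]
out = preprocess(frame)
print(len(out), len(out[0]))  # 222 222
```

Batches of such tensors would then be fed to the model with the added sequential layer for training.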
TESTING COMPUTER PROGRAM ACCESSIBILITY FOR USERS WITH DISABILITIES, SUCH AS FOR USE WITH MOBILE PHONES
Disclosed here is a system to enable interaction between a user with a disability and a computer program. The system can obtain a representation of a user interface to present to a user. The system can determine an element associated with the user interface, where the element is configured to provide information to the user but the user interface presentation of the element at least partially fails to provide that information. Based on the element, the system can determine an appropriate test to perform. The appropriate test indicates at least two of: a test to perform with a keyboard, a gesture test to perform with a mobile screen reader, and an audio test to perform with a screen reader. The system can generate an indication of the appropriate test and provide it prior to releasing the user interface to the user.
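The test-selection step above can be sketched as a rule that maps an element's properties to the tests to run before release. The element attributes (`focusable`, `accessible_label`) and the selection rules are assumptions for illustration; the patent does not specify them.

```python
def appropriate_tests(element):
    """Given a UI element that may fail to convey its information,
    indicate the accessibility tests to perform (at least two of:
    keyboard, mobile screen-reader gesture, screen-reader audio)."""
    tests = []
    if element.get("focusable"):
        tests.append("keyboard")  # tab/arrow-key navigation test
    if not element.get("accessible_label"):
        # element lacks a label a screen reader could announce
        tests.append("mobile_screen_reader_gesture")
        tests.append("screen_reader_audio")
    if len(tests) < 2:  # the system indicates at least two tests
        tests = ["keyboard", "screen_reader_audio"]
    return tests

print(appropriate_tests({"focusable": True, "accessible_label": None}))
# ['keyboard', 'mobile_screen_reader_gesture', 'screen_reader_audio']
```

The returned indication would then gate release of the user interface, per the abstract.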