Patent classifications
G09B21/008
Automatically modifying display presentations to programmatically accommodate for visual impairments
Methods, apparatus, systems, computing devices, computing entities, and/or the like for identifying one or more visual impairments of a user, mapping the visual impairments to one or more accessibility solutions (e.g., program code entries), and dynamically modifying a display presentation based at least in part on the identified accessibility solutions.
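The impairment-to-solution mapping described in this abstract can be sketched as a simple lookup. This is an illustrative assumption, not the patent's actual implementation; the table contents and function names are hypothetical.

```python
# Hypothetical mapping from identified visual impairments to accessibility
# solutions (the patent's "program code entries"); entries are illustrative.
IMPAIRMENT_SOLUTIONS = {
    "low_vision": ["increase_font_size", "high_contrast_theme"],
    "color_blindness": ["colorblind_safe_palette"],
    "photosensitivity": ["reduce_animation", "dim_brightness"],
}

def solutions_for(impairments):
    """Map each identified impairment to its accessibility solutions."""
    resolved = []
    for imp in impairments:
        resolved.extend(IMPAIRMENT_SOLUTIONS.get(imp, []))
    # Deduplicate while preserving order, so the display is modified once
    # per solution even when impairments share a remedy.
    return list(dict.fromkeys(resolved))
```

A display layer would then apply each returned solution when rendering the presentation.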
System and Method of Managing a Lottery Service for Visually-Impaired Users
A system and a method of managing a lottery service for visually-impaired users allow users to read a lottery ticket with a plurality of braille-inscribed ticket numbers. The system includes a PC device, at least one remote server, at least one physical lottery ticket, and at least one external server. The method begins by scanning the braille-inscribed ticket numbers off the physical lottery ticket with the PC device. The braille-inscribed numbers are converted into a plurality of digital ticket numbers with the PC device. The digital ticket numbers are relayed from the PC device to the remote server. A plurality of winning numbers is then received for the lottery service from the external server with the remote server. If the digital ticket numbers match the plurality of winning numbers, a lottery winning notification is generated with the remote server and is then outputted with the PC device.
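The server-side comparison step of this method (digital ticket numbers against the draw's winning numbers) can be sketched as follows. Function and message names are assumptions for illustration, not taken from the patent.

```python
def check_ticket(digital_ticket_numbers, winning_numbers):
    """Compare the scanned-and-converted ticket numbers against the
    winning numbers; return a notification string on a match, else None.
    Order-insensitive comparison is an assumption here."""
    if sorted(digital_ticket_numbers) == sorted(winning_numbers):
        return "Lottery winning notification: ticket matches the draw"
    return None
```

In the claimed flow, the remote server would run this check and the PC device would output the resulting notification (e.g., as speech or braille for a visually-impaired user).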
SELF-CENTERING USER INTERFACE FOR INPUTTING INFORMATION
Techniques described herein are directed to, among other things, utilizing a self-centering user interface to receive information associated with a transaction. For instance, a computing device may receive a first input at a first location of a display. The computing device may then determine a positioning for the user interface, where the user interface may be substantially centered about the first location. In some instances, the computing device may display the user interface using the positioning. The computing device may then receive a second input corresponding to a swipe from the first location of the display to a second location of the display. The computing device may then determine a symbol included in the user interface based at least in part on the second input. In some instances, the user interface includes a keypad for entering a personal identification number associated with a payment instrument.
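The two geometric steps in this abstract, centering the keypad on the first touch and resolving a swipe's end point to a symbol, can be sketched as below. The clamping behavior and the 3x4 grid layout are assumptions for illustration.

```python
def center_keypad(touch_x, touch_y, keypad_w, keypad_h, disp_w, disp_h):
    """Position the keypad substantially centered on the first touch,
    clamped so it remains fully on the display (assumed behavior)."""
    x = min(max(touch_x - keypad_w / 2, 0), disp_w - keypad_w)
    y = min(max(touch_y - keypad_h / 2, 0), disp_h - keypad_h)
    return x, y

def symbol_at(keypad_origin, cell_size, keys, end_x, end_y):
    """Resolve the swipe's end point to a key in a grid of square cells."""
    ox, oy = keypad_origin
    col = int((end_x - ox) // cell_size)
    row = int((end_y - oy) // cell_size)
    return keys[row][col]
```

For a PIN pad, `keys` would be a 3x4 grid such as `[["1","2","3"],["4","5","6"],["7","8","9"],["*","0","#"]]`; because the keypad is centered on the first touch, the same swipe gesture selects the same digit wherever the user first touches the screen.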
Visual aid device and visual aid method by which user uses visual aid device
A vision assistance apparatus may include an image acquisition unit configured to acquire an image by capturing the scene in front of the user, a sensor unit configured to acquire sensing information on objects located in front of the user, a control unit configured to analyze the image acquired by the image acquisition unit and generate a notification signal about the front scene based on the analysis result and the sensing information acquired by the sensor unit, and an output unit configured to provide the user with the notification signal generated by the control unit in the form of sound.
Digital display device comprising a complementary light field display or display portion, and vision correction system and method using same
Described are various embodiments of a digital display device to render an image for viewing by a viewer having reduced visual acuity, the device comprising: a digital display medium for rendering the image based on pixel data related thereto; a complementary light field display portion; and a hardware processor operable on said pixel data for a selected portion of the image to be rendered via said complementary light field display portion so as to produce vision-corrected pixel data corresponding thereto to at least partially address the viewer's reduced visual acuity when viewing said selected portion as rendered in accordance with said vision-corrected pixel data by said complementary light field display portion.
Smart eyeglasses for special needs children and adults
A system that detects whether a user interacting with a featured activity is wearing glasses is described. The system verifies that the user is wearing the glasses; when it determines that the user is not, it prompts the user and a caregiver and may blur, stop, or otherwise interrupt the user experience of a featured activity, such as a video game or film. A glasses module may be positioned at a frame of the glasses, on the head of the user, to detect that the user is wearing the glasses. Optical facial processing may detect a face and glasses on the face. Also disclosed is a hearing aid that may be integrated with such a system. A glasses module that aids in depth perception by reporting distance ahead, and a system that trains eye contact with another person wearing glasses, are also disclosed.
SYSTEMS AND METHODS FOR COMMUNICATING WITH VISION AND HEARING IMPAIRED VEHICLE OCCUPANTS
Systems and methods associated with a vehicle are provided. The systems and methods include an occupant output system including an output device, a camera or other perception device, and a processor in operable communication with the occupant output system and the camera or other perception device. The processor is configured to execute program instructions to cause the processor to: receive image or other perception data from the camera or other perception device, the image or other perception data including at least part of a head and/or body of an occupant of the vehicle; analyze the image or other perception data to determine whether the occupant is hearing and vision impaired; when the occupant is determined to be vision and hearing impaired, decide on an output modality to assist the occupant; and generate an output for the occupant on the output device in that output modality.
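The modality-selection decision this abstract describes can be sketched as a small dispatch on the detected impairments. The specific modality choices (haptic for combined impairment, audio for vision impairment, visual for hearing impairment) are assumptions for illustration, not claimed by the patent.

```python
def choose_output_modality(vision_impaired, hearing_impaired):
    """Pick an output modality for a vehicle occupant based on the
    impairments inferred from camera/perception data (assumed policy)."""
    if vision_impaired and hearing_impaired:
        return "haptic"  # e.g., seat or steering-wheel vibration patterns
    if vision_impaired:
        return "audio"
    if hearing_impaired:
        return "visual"
    return "default_display"
```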
ENABLING THE VISUALLY IMPAIRED WITH AR USING FORCE FEEDBACK
A system and method provide feedback to a user, such as a visually impaired user, to guide the user to an object in the field of view of a camera mounted on a frame worn on the head of the user. A processor identifies at least one object and a body part of the user in the field of view of the camera and tracks relative positions of the body part relative to the identified object. The processor also generates and communicates at least one control signal for guiding the body part of the user to the identified object to a user feedback device worn on or adjacent the body part of the user. The feedback device receives the control signal(s) and converts the control signal(s) into at least one of sounds or haptic feedback that guides the body part to the identified object.
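The core tracking loop here, converting the tracked relative position of the body part and object into a guidance cue, can be sketched in 2D image coordinates. The discretization into four directional cues and the tolerance value are illustrative assumptions; an actual system might emit continuous haptic intensity instead.

```python
def guidance_signal(hand_xy, target_xy, tolerance=0.05):
    """Convert relative hand/object positions (normalized camera
    coordinates) into a coarse guidance cue for audio/haptic output."""
    dx = target_xy[0] - hand_xy[0]
    dy = target_xy[1] - hand_xy[1]
    if abs(dx) <= tolerance and abs(dy) <= tolerance:
        return "on_target"
    # Guide along the dominant axis first (assumed strategy).
    if abs(dx) >= abs(dy):
        return "move_right" if dx > 0 else "move_left"
    return "move_up" if dy > 0 else "move_down"
```

The feedback device would map each cue to a distinct sound or vibration pattern, re-evaluating every frame as the camera tracks the hand.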
SYSTEMS AND METHODS FOR ACCESSIBLE COMPUTER-USER INTERACTIONS
Implementations described herein relate to methods, systems, and computer-readable media for accessible computer-user interactions. For example, a method can include displaying a graphical user interface on a display screen. The graphical user interface includes a virtual assessment, the virtual assessment is representative of an assessment examination, and the graphical user interface further comprises a graphical element that represents a portion of a logical problem of the assessment examination. The method can also include receiving a signal indicative of placement of a physical object onto the display screen above the graphical element, repositioning the graphical element responsive to physical movement of the physical object on the display screen, and generating a portion of an assessment score based at least in part on one or more of: physical movement of the physical object on the display screen, the signal, or the repositioning of the graphical element.
Providing enhanced images for navigation
Systems and methods relating to displaying images are disclosed. In one embodiment, sensor data is received via one or more sensors of a wearable head device comprising a display, the sensor data indicative of a surrounding environment of a user of the wearable head device. An image can be determined based on the sensor data, the image corresponding to the surrounding environment. A visibility of a first portion of the image corresponding to a first portion of the surrounding environment can be enhanced. Enhancing a visibility of a second portion of the image corresponding to a second portion of the surrounding environment can be forgone. The enhanced first portion of the image and a view of the second portion of the surrounding environment can be presented concurrently via the display of the wearable head device.
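The selective-enhancement step, enhancing one portion of the image while forgoing enhancement of the rest, can be sketched as a masked per-pixel operation on a grayscale image. The brightness-gain enhancement and list-of-lists image representation are assumptions for illustration.

```python
def enhance_region(image, mask, gain=1.5):
    """Brighten only the masked portion of a grayscale image (0-255);
    pixels outside the mask are forgone (passed through unchanged)."""
    return [
        [min(int(px * gain), 255) if m else px
         for px, m in zip(row, mask_row)]
        for row, mask_row in zip(image, mask)
    ]
```

Here `mask` would be derived from the sensor data (e.g., marking a low-visibility region of the surrounding environment), and the unenhanced pixels correspond to the second portion presented as a direct view through the display.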