G09B21/008

AUTOMATICALLY MODIFYING DISPLAY PRESENTATIONS TO PROGRAMMATICALLY ACCOMMODATE FOR VISUAL IMPAIRMENTS

Methods, apparatus, systems, computing devices, computing entities, and/or the like for identifying one or more visual impairments of a user, mapping the visual impairments to one or more accessibility solutions (e.g., program code entries), and dynamically modifying a display presentation based at least in part on the identified accessibility solutions.
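The impairment-to-solution mapping described above can be sketched as a simple lookup that merges display-style overrides. All names here (IMPAIRMENT_SOLUTIONS, apply_solutions, the style keys) are illustrative assumptions, not taken from the patent, which leaves the form of the "program code entries" unspecified.

```python
# Map identified visual impairments to accessibility "solutions"
# (simple display-style overrides standing in for program code entries).
IMPAIRMENT_SOLUTIONS = {
    "low_vision": {"font_scale": 1.5, "bold_text": True},
    "color_blindness": {"palette": "high_contrast"},
    "photosensitivity": {"brightness": 0.6, "disable_animation": True},
}

def apply_solutions(impairments, base_style):
    """Merge the solutions for each identified impairment into the display style."""
    style = dict(base_style)
    for impairment in impairments:
        style.update(IMPAIRMENT_SOLUTIONS.get(impairment, {}))
    return style

style = apply_solutions(["low_vision", "color_blindness"],
                        {"font_scale": 1.0, "palette": "default"})
```

A real system would apply these overrides to a live rendering pipeline rather than a dictionary, but the mapping step is the same.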

Systems for augmented reality visual aids and tools
11676352 · 2023-06-13 ·

An Adaptive Control Driven System (ACDS) 99 supports visual enhancement and mitigation of visual challenges. Using basic image-modification algorithms and any known hardware, from contact lenses to IOLs to AR glasses, it enables users to enhance vision through a user interface based on a series of adjustments that move, modify, or reshape image sets and components to take full advantage of the remaining useful retinal area, thereby addressing aspects of visual challenges heretofore inaccessible by devices, which learn the needed adjustments.

Viewing device

An apparatus to capture and display an image of an object includes a frame, at least one optical sensor for capturing the image of the object, means for moving the frame relative to a surface of the object in a first direction, and means for moving the optical sensor relative to the frame in a second direction.

Spatial weather map for the visually impaired

A method, computer system, and computer program product for providing spatial weather map data to the visually impaired are provided. The embodiments may include receiving a weather map image that contains weather information and a subject of interest, wherein the subject of interest comprises a user's current location, a business location, or an asset location. The embodiments may also include receiving a request from a user for current, historical, or forecast data depicted in the weather map and a request for a geo-location of the subject of interest. The embodiments may further include generating sounds corresponding with the requested weather map data, wherein the generated sounds appear to the user to originate in a particular direction and at a particular distance from the subject of interest, and wherein the apparent location of the sounds corresponds with a weather map feature, including the location of approaching or surrounding weather.
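Making a sound "appear to originate" at a bearing and distance can be sketched with constant-power stereo panning: bearing sets the left/right balance and distance sets the gain. The function name, the panning law, and the 100 km falloff are illustrative assumptions; the patent does not specify a spatialization method.

```python
import math

def spatialize(bearing_deg, distance_km, max_km=100.0):
    """Return (left, right) channel gains so a weather sound seems to come
    from the given bearing and distance relative to the subject of interest."""
    pan = math.sin(math.radians(bearing_deg))        # -1 = left, +1 = right
    gain = max(0.0, 1.0 - distance_km / max_km)      # nearer weather is louder
    left = gain * math.cos((pan + 1) * math.pi / 4)  # constant-power panning
    right = gain * math.sin((pan + 1) * math.pi / 4)
    return left, right

# A storm 25 km due east (bearing 90 degrees) sounds louder in the right ear:
left, right = spatialize(90.0, 25.0)
```

A production system would instead use full HRTF-based binaural rendering, but the bearing-to-pan and distance-to-gain mapping is the core idea.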

SMART EYEGLASSES FOR SPECIAL NEEDS CHILDREN AND ADULTS
20230169791 · 2023-06-01 ·

A system that detects whether a user interacting with a featured activity is wearing glasses is described. The system verifies that the user is wearing the glasses, prompts the user and a caregiver, and may blur, stop, or otherwise interrupt a user experience of a featured activity, such as a video game or film, when the system determines that the user is not wearing the glasses. A glasses module may be positioned at a frame of the glasses on a head of the user to detect that the user is wearing the glasses. Optical facial processing may detect a face and glasses on the face. Also disclosed is a hearing aid that may be integrated with such a system. A glasses module that aids in depth perception by reporting distance ahead, and a system that trains eye contact with another person wearing glasses, are also disclosed.

INTRAOCULAR LASER PROJECTION SYSTEM
20230168489 · 2023-06-01 ·

An implant that is to be implanted inside the eye of a person contains a laser projection scanning subsystem that is configured to “paint” an image of the scene that is before the person, on the retina. The image of the scene may be acquired by a digital camera that is attached to a head unit that may be worn by the person, and then transmitted to the implant. Other aspects are also described and claimed.

Intelligent glasses for the visually impaired

An approach for communicating navigation information on a physical environment to a user. The approach includes a computer receiving digital images of a physical environment of the user captured by digital video devices and converting the digital images into a three-dimensional image. The approach includes the computer analyzing the three-dimensional image using object analysis to generate output data, wherein the output data corresponds to the physical environment and determining at least one device associated with the user. The approach includes the computer formatting the output data for use with at least one device, wherein the at least one device is capable of providing to the user a spatial map of the physical environment created by an electrical stimulation pad, and receiving a touch from a user on a surface of the electrical stimulation pad corresponding to an object represented in the spatial map of the physical environment.
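The spatial-map step above can be sketched as projecting detected objects onto a grid of stimulation points, then resolving a touch on the pad back to the object it represents. The grid size, room dimensions, and function names are illustrative assumptions; the patent does not define the pad's coordinate mapping.

```python
def build_spatial_map(objects, pad_w=8, pad_h=8, room_w=4.0, room_d=4.0):
    """Project (name, x_m, z_m) objects in the room onto pad cells,
    returning {(col, row): name}."""
    spatial = {}
    for name, x, z in objects:
        col = min(pad_w - 1, int(x / room_w * pad_w))
        row = min(pad_h - 1, int(z / room_d * pad_h))
        spatial[(col, row)] = name
    return spatial

def object_at_touch(spatial, col, row):
    """Resolve a user's touch on the stimulation pad to the object shown
    there, or None if the cell is empty."""
    return spatial.get((col, row))

# A table 1 m right / 2 m ahead and a door 3.5 m right / 0.5 m ahead:
smap = build_spatial_map([("table", 1.0, 2.0), ("door", 3.5, 0.5)])
```

Touching cell (2, 4) would then identify the table, closing the loop between the electrical-stimulation map and the user's query.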

Saliency-based apparatus and methods for visual prostheses

The present invention relates to a saliency-based apparatus and methods for visual prostheses. A saliency-based component processes video data output by a digital signal processor before the video data are input to the retinal stimulator. In a saliency-based method, an intensity stream is extracted from an input image, feature maps based on the intensity stream are developed, a plurality of the most salient regions of the input image is detected, and one of the regions is selected as the highest-saliency region.
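The saliency pipeline named above can be illustrated with a toy center-surround pass: extract an intensity stream from the image, build a feature map from center-surround differences, and select the highest-saliency location. The 3x3 surround and the pure-Python grid are illustrative assumptions; a real prosthesis pipeline would use multi-scale feature maps on DSP output.

```python
def intensity(image_rgb):
    """Intensity stream: per-pixel mean of the R, G, B channels."""
    return [[sum(px) / 3.0 for px in row] for row in image_rgb]

def center_surround(intens):
    """Feature map: |center - mean of 3x3 surround| at each interior pixel."""
    h, w = len(intens), len(intens[0])
    fmap = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            surround = [intens[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0)]
            fmap[y][x] = abs(intens[y][x] - sum(surround) / len(surround))
    return fmap

def most_salient(fmap):
    """Select the (row, col) of the highest-saliency response."""
    h, w = len(fmap), len(fmap[0])
    return max(((y, x) for y in range(h) for x in range(w)),
               key=lambda p: fmap[p[0]][p[1]])

# A dark 5x6 image with one bright pixel at row 2, column 3:
img = [[(10, 10, 10)] * 6 for _ in range(5)]
img[2][3] = (250, 250, 250)
loc = most_salient(center_surround(intensity(img)))
```

The bright outlier dominates the center-surround response, so the selected region is the one a retinal stimulator would prioritize.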

APPARATUS AND METHOD FOR PRINTING STEGANOGRAPHY TO ASSIST VISUALLY IMPAIRED

An apparatus and method for printing steganography is disclosed. The apparatus comprises a wearable unit and a controlling unit that are programmatically controlled by a processor. The controlling unit converts content to be published into a phoneme transcription for a target language, processes the content, and further arranges the processed content as per a specified page layout. Further, the phoneme-transcribed content is embedded into a QR code, from which the wearable unit extracts the content, the content layout, and the reading sequence. The wearable unit also converts the phoneme-transcribed content into a voice output. Further, the wearable unit comprises motion sensors to sense a reader's body and neck movements to guide the reader to read content in a correct manner, such that voice output is programmatically paused, stopped, or repeated at intervals so determined.
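The QR payload described above, phoneme transcription plus layout and reading sequence, can be sketched as a serialized record. The JSON schema and field names here are illustrative assumptions; the patent does not specify an encoding.

```python
import json

def build_qr_payload(phonemes, layout, sequence):
    """Serialize phoneme-transcribed content, page layout, and reading
    order into the string a QR code would carry."""
    return json.dumps({"phonemes": phonemes, "layout": layout,
                       "sequence": sequence}, separators=(",", ":"))

payload = build_qr_payload(
    phonemes=["HH", "EH", "L", "OW"],   # "hello" in ARPAbet-style symbols
    layout={"columns": 2, "page": 1},
    sequence=[0, 1],                    # read column 0, then column 1
)

# The wearable unit's decode step is the inverse:
decoded = json.loads(payload)
```

In the described system the wearable unit would feed `decoded["phonemes"]` to its voice-output stage and use `sequence` to pace the reader.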

BLINDNESS ASSIST GLASSES
20220366690 · 2022-11-17 ·

An eyewear device with camera-based compensation that improves the user experience for users having partial blindness or complete blindness. The camera-based compensation determines objects, converts the determined objects to text, and then converts the text to audio that is indicative of the objects and that is perceptible to the eyewear user. The camera-based compensation may use a region-based convolutional neural network (RCNN) to generate a feature map including text that is indicative of objects in images captured by a camera. Relevant text of the feature map is then processed through a text-to-speech algorithm featuring a natural language processor to generate audio indicative of the objects in the processed images.
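The detections-to-text-to-audio flow can be sketched as a small pipeline. The `Detection` type, the confidence threshold, and the sentence format are illustrative assumptions standing in for the RCNN's output and the text-to-speech engine; a real device would invoke an actual RCNN model and TTS backend.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # object class predicted for a region of the image
    confidence: float

def feature_map_text(detections, min_conf=0.5):
    """Keep only confident detections and express them as text labels,
    mirroring the text entries of the RCNN feature map."""
    return [d.label for d in detections if d.confidence >= min_conf]

def describe(labels):
    """Compose the sentence a text-to-speech stage would speak to the user."""
    if not labels:
        return "Nothing recognized ahead."
    return "Ahead: " + ", ".join(labels) + "."

detections = [Detection("door", 0.92), Detection("chair", 0.81),
              Detection("cat", 0.30)]
sentence = describe(feature_map_text(detections))
```

Filtering before speech keeps low-confidence detections (the cat at 0.30) from cluttering the audio channel the user relies on.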