A61F9/08

Systems and methods for patient alignment and treatment
11259962 · 2022-03-01

A system for supporting and aligning a patient during a color alteration procedure includes a laser system that delivers a laser in a first direction. A control computer may be positioned adjacent to the laser system for controlling it and may include a user interface in a first plane substantially perpendicular to the first direction. The system may include a patient support structure having a patient support surface extending in a second direction substantially perpendicular to the first direction and configured to be adjustable to set the patient's position or alignment relative to the laser system. Coarse adjustment hardware may be configured to make automated and/or manual adjustments to the patient support surface in the first direction, and fine adjustment hardware may be configured to make automated fine adjustments to the patient support surface in the first direction based on instructions received from the control computer.
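
The coarse-then-fine adjustment described above can be pictured as a two-stage positioning loop. This is an illustrative sketch only; the `align` function, step sizes, and millimeter units are assumptions, not the patented implementation:

```python
def align(target_mm: float, position_mm: float,
          coarse_step_mm: float = 5.0, fine_step_mm: float = 0.1):
    """Two-stage positioning: coarse steps until within one coarse step
    of the target, then fine steps driven by the control computer."""
    moves = []
    while abs(target_mm - position_mm) > coarse_step_mm:
        step = coarse_step_mm if target_mm > position_mm else -coarse_step_mm
        position_mm += step
        moves.append(("coarse", step))
    while abs(target_mm - position_mm) > fine_step_mm / 2:
        step = fine_step_mm if target_mm > position_mm else -fine_step_mm
        position_mm += step
        moves.append(("fine", step))
    return position_mm, moves

# Example: move the support surface from 0 mm to 12.3 mm
final_mm, moves = align(12.3, 0.0)
```

The point of the split is that the coarse stage covers distance quickly (and may be manual), while the fine stage converges to within half a fine step of the commanded position.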

System and method for alerting visually impaired users of nearby objects
09801778 · 2017-10-31

A system and method for assisting a visually impaired user, including an imaging device, a processing unit that receives images from the imaging device and converts them into signals for use by one or more controllers, and one or more vibro-tactile devices, wherein the controllers activate one or more of the vibro-tactile devices in response to the signals received from the processing unit. The system preferably includes a lanyard worn around the user's neck such that a first vibro-tactile device is arranged on the right side of the user's neck, a second on the left side, and a third at the back. The vibro-tactile devices are activated depending on the determined position of an object in front of the user and the distance from the user to the object.
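
As an illustrative sketch only (the motor names, the 15° bearing threshold, and the linear distance scaling are assumptions, not the patented design), the activation logic might map an object's bearing to a motor and its distance to a vibration level:

```python
def select_motor(bearing_deg: float) -> str:
    """Map an object's bearing (degrees, 0 = straight ahead) to a motor."""
    if bearing_deg < -15:
        return "left"
    if bearing_deg > 15:
        return "right"
    return "back"  # the back-of-neck motor signals an object directly ahead

def vibration_level(distance_m: float, max_range_m: float = 5.0) -> float:
    """Closer objects produce stronger vibration, clamped to [0, 1]."""
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - distance_m / max_range_m

# Example: object 2 m away, 30 degrees to the user's right
motor = select_motor(30.0)    # "right"
level = vibration_level(2.0)  # ~0.6
```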

Saliency-based apparatus and methods for visual prostheses

The present invention relates to a saliency-based apparatus and methods for visual prostheses. A saliency-based component processes video data output by a digital signal processor before the video data are input to the retinal stimulator. In the saliency-based method, an intensity stream is extracted from an input image, feature maps are developed from the intensity stream, the most salient regions of the input image are detected, and one of those regions is selected as the highest-saliency region.
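
A minimal sketch of this pipeline, assuming centre-surround intensity differences in the spirit of classic saliency models (the blur scales and helper names are illustrative, not the patented method):

```python
import numpy as np

def blur(img: np.ndarray, k: int) -> np.ndarray:
    """Crude box blur: average over a (2k+1) x (2k+1) neighbourhood."""
    out = np.zeros_like(img, dtype=float)
    n = 0
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            n += 1
    return out / n

def saliency_map(rgb: np.ndarray) -> np.ndarray:
    """Intensity-stream saliency: feature maps from centre-surround
    differences at several scales, summed into one map."""
    intensity = rgb.mean(axis=2)  # the extracted intensity stream
    maps = [np.abs(blur(intensity, c) - blur(intensity, s))
            for c, s in [(1, 4), (1, 8), (2, 8)]]  # feature maps
    return sum(maps)

def top_regions(sal: np.ndarray, k: int = 3):
    """Return the k most salient pixel coordinates; the first is the
    highest-saliency region."""
    flat = np.argsort(sal, axis=None)[::-1][:k]
    return [np.unravel_index(i, sal.shape) for i in flat]
```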

Augmented imaging assistance for visual impairment

Systems, apparatuses, services, platforms, and methods are discussed herein that provide assistance for user interface devices. In one example, an assistance application is provided comprising an imaging system configured to capture an image of a scene, an interface system configured to provide data associated with the image to a distributed assistance service that responsively processes the data to recognize properties of the scene and establish feedback for a user based at least on the properties of the scene, and a user interface configured to provide the feedback to the user.

Machine vision with dimensional data reduction
11430263 · 2022-08-30

A method is described that includes receiving raw image data corresponding to a series of raw images, and processing the raw image data with an encoder of a processing device to generate encoded data. The encoder is characterized by an input/output transformation that substantially mimics the input/output transformation of at least one retinal cell of a vertebrate retina. The method also includes processing the encoded data to generate dimension reduced encoded data by applying a dimension reduction algorithm to the encoded data. The dimension reduction algorithm is configured to compress an amount of information contained in the encoded data. An apparatus and system usable with such a method is also described.
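
The two stages can be sketched as follows, using a linear-nonlinear (LN) model as a stand-in for the retina-mimicking encoder and PCA as one possible dimension-reduction algorithm; both choices, and all names and sizes, are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def ln_encode(frames: np.ndarray, kernels: np.ndarray) -> np.ndarray:
    """LN stand-in for the retina-mimicking encoder: each model cell
    applies a spatial filter, then a saturating nonlinearity."""
    linear = frames.reshape(frames.shape[0], -1) @ kernels.T
    return 1.0 / (1.0 + np.exp(-linear))  # firing-rate-like output

def reduce_dims(codes: np.ndarray, n_components: int) -> np.ndarray:
    """PCA via SVD: project the encoded data onto its top principal
    components, compressing the information it carries."""
    centred = codes - codes.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T

frames = rng.random((100, 16, 16))        # series of raw images
kernels = rng.standard_normal((32, 256))  # 32 model retinal cells
encoded = ln_encode(frames, kernels)      # (100, 32) encoded data
reduced = reduce_dims(encoded, 8)         # (100, 8) dimension-reduced data
```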

Determination of parameter values for sensory substitution devices

The present disclosure provides a computer-implemented method for representing intensity levels indicative of a first type of sense of a subject (150) by parameter values for a different second type of sense of the subject (150). The method comprises determining (210) a first parameter value for the second type of sense representing a first intensity level indicative of the first type of sense; and determining (220) a second parameter value for the second type of sense representing a second intensity level indicative of the first type of sense with reference to the first parameter value, wherein the first parameter value differs from the second parameter value by at least one Just-Noticeable-Difference (JND) of the second type of sense of the subject (150).
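
A minimal sketch of JND-spaced parameter values, assuming Weber's-law behaviour for the second sense (JND proportional to the current value); the 20% Weber fraction and the function name are illustrative assumptions:

```python
def jnd_levels(base: float, weber_fraction: float, n: int) -> list:
    """Generate n parameter values for the second type of sense, each
    one Just-Noticeable-Difference above the previous level."""
    levels = [base]
    for _ in range(n - 1):
        # Weber's law: the next distinguishable level is the current
        # level plus weber_fraction * level.
        levels.append(levels[-1] * (1.0 + weber_fraction))
    return levels

# Example: vibration amplitudes starting at 0.1 with a 20% Weber fraction
amps = jnd_levels(0.1, 0.2, 4)  # ~[0.1, 0.12, 0.144, 0.1728]
```

Spacing levels by at least one JND guarantees the subject can tell adjacent intensity representations apart.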

Augmented Reality Panorama Systems and Methods

A system and method are presented for providing real-time object recognition to a remote user. The system comprises a portable communication device including a camera, at least one client-server host device remote from and accessible by the portable communication device over a network, and a recognition database accessible by the host device or devices. A recognition application residing on the host device or devices can use the recognition database to provide real-time object recognition of visual imagery captured using the portable communication device to its remote user. In one embodiment, a sighted assistant shares an augmented reality panorama with a visually impaired user of the portable communication device, where the panorama is constructed from the device's sensor data.
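
One way to picture the device-to-host exchange (the message fields, names, and callback-style recognizer are illustrative assumptions, not the patented protocol):

```python
from dataclasses import dataclass

@dataclass
class DeviceUpdate:
    """Sent from the portable device to the client-server host: a
    captured frame plus the sensor pose used to place it in the panorama."""
    jpeg_bytes: bytes
    heading_deg: float
    pitch_deg: float

@dataclass
class RecognitionResult:
    """Returned to the device/assistant: a label plus the bearing at
    which the object sits in the shared panorama."""
    label: str
    bearing_deg: float

def handle_update(update: DeviceUpdate, recognizer) -> list:
    """Host-side handler: run recognition on the frame and tag each
    result with the panorama bearing it was observed at."""
    return [RecognitionResult(label, update.heading_deg)
            for label in recognizer(update.jpeg_bytes)]

# Example with a stub recognizer standing in for the recognition database
results = handle_update(DeviceUpdate(b"...", 90.0, 0.0),
                        lambda img: ["door", "sign"])
```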