G06V40/193

Determining features of a user's eye from depth mapping of the user's eye via indirect time of flight

An eye monitoring system is included in a headset of a virtual reality system or of an augmented reality system. The eye monitoring system determines distances between itself and portions of a user's eye enclosed by the headset. The eye monitoring system projects a temporally periodic pattern of light onto the user's eye and captures the reflected light with a sensor. From the phase shift of the periodic pattern of light captured by each pixel of the sensor, the eye monitoring system determines a distance between itself and the corresponding location on the user's eye. From the determined distances, the eye monitoring system determines features of the user's eye.
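
The phase-shift-to-distance step can be sketched with the standard four-bucket indirect time-of-flight demodulation; the function name and sample values below are illustrative, not from the patent, which recovers a phase per sensor pixel:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(q0, q90, q180, q270, f_mod):
    """Distance from four phase-stepped correlation samples of a
    periodic illumination pattern modulated at f_mod (Hz)."""
    # Recover the phase shift accumulated over the round trip.
    phase = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)
    # Round-trip delay phase = 4*pi*f_mod*d / c, solved for d.
    return C * phase / (4 * math.pi * f_mod)
```

At a 20 MHz modulation frequency the unambiguous range is c / (2 f_mod) = 7.5 m, far beyond the few centimetres between a headset and the eye, so no phase unwrapping is needed in this setting.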

METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR MAPPING A VISUAL FIELD

To measure quality of view over the visual field of an eye, deviations between gaze positions and the associated stimulus positions (where the stimulus to be followed was displayed when the gaze position was detected) are registered during a measuring period, and the magnitudes of the registered deviations are determined. For each field portion of a map of the visual field, quality of view is determined from quality-of-view estimates of those registered deviations whose associated stimulus positions, taken relative to the gaze position, fall in that field portion. For each of those registered deviations, the quality of view is estimated from the magnitude of that deviation and from the magnitudes of at least the preceding or succeeding registered deviations.
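
A minimal sketch of the two stages, with assumed details: the 1/(1 + mean) scoring rule, the one-sample neighbour window, and the 10-degree grid are illustrative choices, not taken from the patent:

```python
def estimate_quality(magnitudes, window=1):
    """Quality-of-view estimate per registered deviation, combining its
    magnitude with the magnitudes of neighbouring deviations
    (hypothetical 1/(1 + mean) score in (0, 1])."""
    scores = []
    for i in range(len(magnitudes)):
        lo, hi = max(0, i - window), min(len(magnitudes), i + window + 1)
        scores.append(1.0 / (1.0 + sum(magnitudes[lo:hi]) / (hi - lo)))
    return scores

def map_field(samples, scores, cell=10.0):
    """Average quality per field portion, indexing each portion by the
    stimulus position relative to the gaze position (degrees)."""
    portions = {}
    for (gx, gy, sx, sy), q in zip(samples, scores):
        key = (int((sx - gx) // cell), int((sy - gy) // cell))
        portions.setdefault(key, []).append(q)
    return {k: sum(v) / len(v) for k, v in portions.items()}
```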

METHOD FOR GENERATING A VIDEO COMPRISING BLINK DATA OF A USER VIEWING A SCENE
20230237846 · 2023-07-27

A method performed by a computer for generating a video comprising blink data of a user viewing a scene depicted as video data, where the blink data is overlaid on the video data. The method includes receiving sensor data. The sensor data includes at least the video data, comprising at least one video frame, and gaze tracking data at least indicative of viewed positions within the scene depicted by at least one video frame of the video data. The method includes processing the sensor data to generate blink data indicative of blink motion of at least one eye of the user. The method includes generating a video overlay by rendering the blink data, and generating an output video by mixing the video data and the video overlay.
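
The final mixing step can be sketched as a per-pixel alpha blend; the function name, the grey-level frame representation, and the fixed alpha are assumptions for illustration, as the patent does not specify the mixing operation:

```python
def mix_frames(video_frame, overlay_frame, alpha=0.5):
    """Mix a rendered blink-data overlay into a video frame by
    per-pixel alpha blending (frames as nested lists of grey values)."""
    return [[(1.0 - alpha) * v + alpha * o
             for v, o in zip(vrow, orow)]
            for vrow, orow in zip(video_frame, overlay_frame)]
```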

OPHTHALMIC IMAGE PROCESSING METHOD, OPHTHALMIC IMAGE PROCESSING DEVICE, AND OPHTHALMIC IMAGE PROCESSING PROGRAM
20230025493 · 2023-01-26

An ophthalmic image of an evaluation target is acquired, and a plurality of subsection images is extracted from it. The state of the subject's eye is predicted for each subsection image using a learned model, trained in advance by machine learning with correct-answer data on the state of each subsection image, to extract a plurality of subsection images from an ophthalmic image for learning and to predict the state of the subject's eye for each of them. Each subsection image is extracted from the ophthalmic image with an image size corresponding to the state of the subject's eye of the evaluation target.
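
The subsection-extraction step amounts to sliding a window over the image; this sketch assumes a square patch and a plain nested-list image, and the state-dependent choice of `size` (the patent's key point) is left to the caller:

```python
def extract_patches(image, size, stride):
    """Extract size x size subsection images from a 2-D image (list of
    rows). In the method above, size would be chosen according to the
    predicted state of the subject's eye."""
    h, w = len(image), len(image[0])
    return [[row[x:x + size] for row in image[y:y + size]]
            for y in range(0, h - size + 1, stride)
            for x in range(0, w - size + 1, stride)]
```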

QUANTITATIVE ANALYSIS METHOD AND SYSTEM FOR ATTENTION BASED ON LINE-OF-SIGHT ESTIMATION NEURAL NETWORK

Embodiments of the present disclosure provide a quantitative analysis method and system for attention based on a line-of-sight estimation neural network, improving the stability and training efficiency of the network. A few-shot learning method is applied to training of the line-of-sight estimation neural network, improving its generalization performance. A nonlinear division method for small intervals of line-of-sight angles is provided, reducing the estimation error of the network. Eye opening and closing detection is added to avoid line-of-sight estimation errors caused by a closed-eye state. A method for solving the landing point of the line of sight is provided, which has high environmental adaptability and can be quickly put to use in actual deployment.
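
One way to realize a nonlinear division of the angle range into small intervals is to warp uniformly spaced points; the cubic warp below (finer bins straight ahead, coarser toward the periphery) is a hypothetical choice, as the abstract does not give the actual partition:

```python
def nonlinear_angle_bins(max_deg, n_bins):
    """Hypothetical nonlinear division of [-max_deg, +max_deg]:
    a cubic warp of uniform points yields narrow intervals near the
    straight-ahead direction and wide ones at the periphery."""
    return [max_deg * (-1.0 + 2.0 * i / n_bins) ** 3
            for i in range(n_bins + 1)]
```

Classifying the gaze angle into such bins turns regression into ordinal classification, which is one common way such interval schemes reduce estimation error.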

SMARTPHONE-BASED DIGITAL PUPILLOMETER

In some embodiments, techniques for using machine learning to enable visible light pupillometry are provided. In some embodiments, a smartphone may be used to create a visible light video recording of a pupillary light reflex (PLR). A machine learning model may be used to detect the size of a pupil in the video recording over time, and the size over time may be presented to a clinician. In some embodiments, a system is provided that includes a smartphone and a box that holds the smartphone in a predetermined relationship to a subject's face. In some embodiments, a sequential convolutional neural network architecture is used. In some embodiments, a fully convolutional neural network architecture is used.

Multivariate and multi-resolution retinal image anomaly detection system

Machine learning technologies are used to identify and separate abnormal and normal subjects and to identify possible disease types from images (e.g., optical coherence tomography (OCT) images of the eye), where the machine learning technologies are trained with only normative data. In one example, a feature or a physiological structure of an image is extracted, and the image is classified based on the extracted feature. In another example, a region of the image is masked and then reconstructed, and a similarity is determined between the reconstructed region and the original region of the image. A label (indicating an abnormality) and a score (indicating a severity) can be determined based on the classification and/or the similarity.
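
The mask-reconstruct-compare example can be sketched in one dimension; the MSE score and the `reconstruct` callable (standing in for the network trained only on normative data) are illustrative assumptions:

```python
def mask_and_score(signal, x0, x1, reconstruct):
    """Mask samples [x0, x1) of a 1-D signal, reconstruct them from the
    remaining context with the supplied model, and return the mean
    squared error as an anomaly score (0 = perfect reconstruction).
    A model fit to normative data reconstructs normal regions well,
    so a high score flags an abnormality."""
    original = signal[x0:x1]
    context = signal[:x0] + signal[x1:]
    predicted = reconstruct(context, x1 - x0)
    return sum((a - b) ** 2 for a, b in zip(original, predicted)) / len(original)
```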

Modification profile generation for vision defects related to double vision or dynamic aberrations
11701000 · 2023-07-18

In certain embodiments, determinations of double-vision-related vision defects, or modifications for them, may be facilitated. In some embodiments, a stimulus may be presented at a first time at a position on a first display for a deviating eye of a user (e.g., without a stimulus being presented on a second display for a reference eye of the user) to cause the deviating eye to fixate on the position on the first display. A deviation measurement for the deviating eye may be determined based on the amount of movement of the deviating eye occurring upon the presentation on the first display at the first time. In some embodiments, a modification profile associated with the user may be determined based on the deviation measurement, where the modification profile includes one or more modification parameters to be applied to modify an image for the user.

TEST SUPPORT METHOD, TEST SUPPORT DEVICE, AND STORAGE MEDIUM

A test support method includes a step of obtaining a pre-change image and a post-change image to be displayed on a monitoring and control system, a step of extracting, from the post-change image, multiple symbols that have changed from the corresponding symbols in the pre-change image, a step of adding order information to the extracted symbols, and a step of outputting a test image in which the order information is added to those symbols.
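
A minimal sketch of the extract-and-order steps, with symbols modeled as name-to-state mappings; the sort-by-name ordering rule is an assumption, since the abstract does not specify how order information is assigned:

```python
def changed_symbols_with_order(pre, post):
    """Symbols whose displayed state differs between the pre-change and
    post-change images, annotated with 1-based order information
    (here ordered by symbol name, a hypothetical rule)."""
    changed = [s for s in post if pre.get(s) != post[s]]
    return {s: i + 1 for i, s in enumerate(sorted(changed))}
```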

METHOD AND SYSTEM FOR PERSONALIZED EYE BLINK DETECTION

Unlike state-of-the-art eye blink detection techniques, which are generalized for use across individuals and whose prediction accuracy therefore varies from subject to subject, embodiments of the present disclosure provide a method and system for personalized eye blink detection using a passive camera-based approach. The method first generates subject-specific annotation data, which is then further processed to derive subject-specific personalized blink threshold values. The disclosed method provides three unique approaches to compute the personalized blink threshold values in a one-time calibration process. The personalized blink threshold values are then used to generate a binary decision vector (D) while analyzing input test images (video sequences) of the subject of interest. Further, the values taken by elements of the decision vector (D) are analyzed over a predefined time period to predict possible eye blinks of the subject.
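
The decision-vector stage can be sketched as thresholding a per-frame eye-openness measure and counting sustained runs; the openness measure, the run-length rule, and all names are assumptions, and the personalized threshold would come from the calibration step described above:

```python
def blink_decision_vector(eye_openness, threshold):
    """Binary decision vector D: 1 where the per-frame eye-openness
    measure falls below the subject's personalized blink threshold."""
    return [1 if v < threshold else 0 for v in eye_openness]

def count_blinks(decisions, min_frames=2):
    """Predict blinks from D over the analyzed period: each run of at
    least min_frames consecutive 1s counts as one blink."""
    blinks, run = 0, 0
    for d in decisions + [0]:  # sentinel 0 flushes the final run
        if d:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks
```

Requiring a minimum run length is one simple way to reject single-frame detection noise without missing genuine blinks, which typically span several frames at ordinary camera rates.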