Patent classification: G06T2207/30041
METHOD AND SYSTEM FOR DETERMINING A FITTED POSITION OF AN OPHTHALMIC LENS WITH RESPECT TO A WEARER REFERENTIAL AND METHOD FOR DETERMINING A LENS DESIGN OF AN OPHTHALMIC LENS
A method for determining a fitted position of an ophthalmic lens to be mounted on a spectacle frame equipping a wearer, the fitted position being defined with respect to a wearer referential linked to the head of the wearer. The method includes defining at least one fitting criterion relating to the positioning of the ophthalmic lens with respect to the spectacle frame, determining frame 3D data at least partially representative of the geometry and position of the spectacle frame with respect to the wearer referential, determining lens 3D data at least partially representative of the geometry of at least a peripheral portion of the ophthalmic lens, and determining the fitted position of the ophthalmic lens with respect to the wearer referential using the frame 3D data and the lens 3D data to fit the ophthalmic lens within the spectacle frame while meeting the fitting criterion.
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM
An information processing apparatus according to an embodiment of the present technology includes a line-of-sight estimator, a correction amount calculator, and a registration determination section. The line-of-sight estimator calculates an estimation vector obtained by estimating a direction of a line of sight of a user. The correction amount calculator calculates a correction amount related to the estimation vector on the basis of at least one object that is within a specified angular range that is set using the estimation vector as a reference. The registration determination section determines whether to register, in a data store, calibration data in which the estimation vector and the correction amount are associated with each other, on the basis of a parameter related to the at least one object within the specified angular range.
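The "specified angular range" test described above can be sketched as a simple angle check between the estimated gaze vector and an object's direction. The function name, 3-vector representation, and degree threshold below are illustrative assumptions, not the patent's actual API:

```python
import math

def within_angular_range(gaze, obj_dir, max_deg):
    """Return True if obj_dir lies within max_deg degrees of the gaze estimate.

    gaze and obj_dir are 3-component direction vectors (need not be unit length).
    """
    dot = sum(g * o for g, o in zip(gaze, obj_dir))
    norm_g = math.sqrt(sum(g * g for g in gaze))
    norm_o = math.sqrt(sum(o * o for o in obj_dir))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    cos_angle = max(-1.0, min(1.0, dot / (norm_g * norm_o)))
    return math.degrees(math.acos(cos_angle)) <= max_deg
```

Objects passing this test would then be candidates for computing the correction amount associated with the estimation vector.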
Systems And Methods For Optical Evaluation Of Pupillary Psychosensory Responses
The present disclosure is directed to systems and methods for measuring and analyzing pupillary psychosensory responses. An electronic device is configured to receive video data with at least two frames. The electronic device then locates one or more eye objects in the video data and determines pupil and iris sizes of the one or more eye objects. The electronic device determines the pupillary psychosensory responses of the one or more eye objects by tracking the ratio of pupil diameter to iris diameter throughout the video. Several metrics for the pupillary psychosensory responses can be determined (e.g., velocity of change of the ratio, peak-to-peak amplitude of the change in ratio over time, etc.). These metrics can be used as measures of an individual's cognitive ability and mental health in a single session or tracked across multiple sessions.
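The ratio tracking and the two example metrics named above (velocity of change, peak-to-peak amplitude) can be sketched as follows; the function and argument names are hypothetical, and per-frame diameters are assumed to come from the eye-object detection step:

```python
import numpy as np

def pupil_iris_ratio_metrics(pupil_d, iris_d, fps):
    """Track the pupil/iris diameter ratio across frames and derive metrics.

    pupil_d, iris_d: per-frame diameters (same length); fps: video frame rate.
    """
    pupil_d = np.asarray(pupil_d, dtype=float)
    iris_d = np.asarray(iris_d, dtype=float)
    ratio = pupil_d / iris_d                 # iris-normalized pupil size
    velocity = np.gradient(ratio) * fps      # rate of change per second
    return {
        "ratio": ratio,
        "peak_to_peak": float(ratio.max() - ratio.min()),
        "max_velocity": float(np.abs(velocity).max()),
    }
```

Normalizing by iris diameter makes the measure robust to camera distance, since both diameters scale together in the image.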
Biomarker Prediction Using Optical Coherence Tomography
Deep learning methods and systems for detecting biomarkers within optical coherence tomography (OCT) volumes are provided. Embodiments predict the presence or absence of clinically useful biomarkers in OCT images using deep neural networks. The lack of training data available for canonical deep learning approaches is overcome in embodiments by leveraging a large external dataset of foveal scans via transfer learning. Embodiments represent the three-dimensional OCT volume by “tiling” each slice into a single two-dimensional image, and add an additional component to encourage the network to consider local spatial structure. Methods and systems according to embodiments are able to identify the presence or absence of AMD-related biomarkers on par with clinicians. Beyond identifying biomarkers, additional models could be trained, according to embodiments, to predict the progression of these biomarkers over time.
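The “tiling” step, which flattens a 3D OCT volume into a single 2D mosaic so a 2D network can consume it, can be sketched with NumPy. The function name, row-major slice layout, and zero-padding of the last row are assumptions:

```python
import numpy as np

def tile_oct_volume(volume, cols):
    """Tile a 3D OCT volume of shape (slices, H, W) into one 2D mosaic image.

    Slices are laid out row-major, `cols` per row; missing tiles are zero-padded.
    """
    s, h, w = volume.shape
    rows = -(-s // cols)                               # ceiling division
    padded = np.zeros((rows * cols, h, w), dtype=volume.dtype)
    padded[:s] = volume
    # (rows, cols, H, W) -> (rows, H, cols, W) -> (rows*H, cols*W)
    return padded.reshape(rows, cols, h, w).transpose(0, 2, 1, 3).reshape(rows * h, cols * w)
```

Keeping adjacent slices as adjacent tiles preserves some of the local spatial structure the abstract says the network is encouraged to exploit.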
TECHNIQUES FOR QUANTITATIVELY ASSESSING TEAR-FILM DYNAMICS
Aspects of the present disclosure provide techniques for quantitatively assessing tear-film dynamics associated with contact lenses. An example method includes projecting an image of one or more shapes on a tear film surface of the contact lens worn on the eye, capturing video data, comprising a plurality of image frames, of the one or more shapes projected on the tear film surface of the contact lens over a period of time, performing image segmentation on a plurality of reflection patterns included in the plurality of image frames, generating a plurality of maps of the tear film surface of the contact lens indicating changes to the tear film surface of the contact lens during the period of time, and outputting, based on the plurality of maps, one or more metrics quantifying the changes to the tear film surface of the contact lens over the period of time.
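The final step, turning the sequence of tear-film surface maps into metrics quantifying change over time, can be sketched as follows. Representing each map as a 2D array and the specific metric names are assumptions; the abstract does not specify the metrics:

```python
import numpy as np

def tear_film_metrics(maps):
    """Summarize frame-to-frame change across a sequence of tear-film surface maps.

    maps: list of equally shaped 2D arrays, one per analyzed frame.
    """
    stack = np.stack(maps).astype(float)       # (frames, H, W)
    diff = np.abs(np.diff(stack, axis=0))      # per-pixel change between frames
    return {
        "mean_change": float(diff.mean()),     # average surface change
        "max_change": float(diff.max()),       # largest local change
    }
```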
APPARATUS AND METHOD FOR PREDICTING BIOMETRICS BASED ON FUNDUS IMAGE
Provided are an apparatus and method for predicting biometrics using a fundus image. The method includes the steps of preparing a plurality of training fundus images, generating a learning model that predicts the corresponding biometrics based on at least one fundus characteristic reflected in the prepared training fundus images, receiving a prediction-target fundus image, and predicting the biometrics of the subject of that fundus image using the generated learning model.
Ultrasound diagnostic apparatus and method for controlling ultrasound diagnostic apparatus
An ultrasound diagnostic apparatus 1 includes an image acquisition unit 8 that transmits an ultrasound beam from an ultrasound probe 18 to a subject to acquire an ultrasound image, an optic nerve recognition unit 9 that performs image analysis on the ultrasound image acquired by the image acquisition unit 8 to recognize an optic nerve of the subject, an optic nerve evaluation unit 10 that evaluates a shape of the optic nerve of the subject recognized by the optic nerve recognition unit 9 on the basis of an anatomical structure, and an operation guide unit 12 that guides a user to operate the ultrasound probe 18 so as to acquire an ultrasound image for measurement of the optic nerve of the subject on the basis of an evaluation result obtained by the optic nerve evaluation unit 10.
Image Synthesis Method, Electronic Device, and Non-Transitory Computer-Readable Storage Medium
Disclosed are an image synthesis method and a related apparatus. The method includes: determining a target area on a preview image by means of eyeball-tracking technology; determining multiple exposure parameter sets on the basis of brightness parameters of the target area; setting a camera module with the multiple exposure parameter sets to acquire multiple reference images, each reference image corresponding to a different exposure parameter set; and synthesizing the multiple reference images to obtain a target image.
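The two computational steps above, deriving exposure brackets from the gaze-target brightness and fusing the resulting reference images, can be sketched as follows. Both the bracketing rule and the clipping-aware fusion weights are illustrative assumptions, not the disclosed method; 8-bit luminance is assumed:

```python
import numpy as np

def exposure_brackets(mean_luma, base_ev=0.0, step=1.0):
    """Derive three EV offsets from the gaze target's mean luminance (0..255).

    Darker target areas bias the bracket toward longer exposures, brighter
    areas toward shorter ones.
    """
    bias = (128.0 - mean_luma) / 128.0          # -1 (bright) .. +1 (dark)
    return [base_ev + bias * step + k * step for k in (-1, 0, 1)]

def fuse(images):
    """Naive exposure fusion: weight each pixel by its distance from clipping."""
    stack = np.stack([im.astype(float) for im in images])
    # Well-exposed mid-tones get weight near 1, clipped pixels near 0.
    w = 1.0 - np.abs(stack / 255.0 - 0.5) * 2.0 + 1e-6
    return (stack * w).sum(axis=0) / w.sum(axis=0)
```

Centering the bracket on the gaze target, rather than on global scene brightness, is what ties the synthesis to the eyeball-tracking step.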
A SYSTEM AND METHOD FOR CLASSIFYING IMAGES OF RETINA OF EYES OF SUBJECTS
The invention relates to a computing system and a computer-implemented method for classifying images of retina of eyes of subjects. A captured image of a retina is processed to obtain a plurality of different segmented images each having different selected portions of the captured image using different selection rules. The multiple segmented images are provided to respective dedicated machine learning models to output an image classification based on the respective segmented images provided as input. An ensemble classification is determined based on the multiple classifications obtained by means of the multiple trained machine learning models.
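The ensemble step, combining the per-model outputs into a single classification, can be sketched as probability averaging. This is one plausible ensembling rule; the abstract does not specify how the individual classifications are combined, and the names below are hypothetical:

```python
def ensemble_classify(models, segmented_images):
    """Combine dedicated per-segmentation models by averaging class probabilities.

    models: callables, one per segmentation rule, each mapping its segmented
    image to a list of per-class probabilities. Returns the argmax class index.
    """
    assert len(models) == len(segmented_images)
    probs = [model(img) for model, img in zip(models, segmented_images)]
    n_classes = len(probs[0])
    avg = [sum(p[c] for p in probs) / len(probs) for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)
```

Majority voting over the per-model argmax labels would be an equally simple alternative combination rule.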
IMAGE PROCESSING METHOD, IMAGE PROCESSING DEVICE, AND PROGRAM
An image processing method performed by a processor and including detecting positions of plural vortex veins in a fundus image of an examined eye, and computing a center of distribution of the plural detected vortex vein positions.
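Computing the center of distribution of the detected vortex vein positions can be sketched as the mean of their image coordinates, which is one natural reading of "center of distribution"; the function name is illustrative:

```python
def vv_center(positions):
    """Center of distribution of detected vortex vein positions.

    positions: list of (x, y) coordinates in the fundus image.
    Returns the mean point of the detections.
    """
    xs, ys = zip(*positions)
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```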