G06V40/197

DETECTION OF ARTIFACTS IN MEDICAL IMAGES

There is provided a method of re-classifying a clinically significant feature of a medical image as an artifact, comprising: feeding a target medical image, captured by a specific medical imaging sensor at a specific setup, into a machine learning model; obtaining a target feature map as an outcome of the machine learning model, wherein the target feature map includes target features classified as clinically significant; analyzing the target feature map with respect to one or more sample feature maps obtained as outcomes of the machine learning model fed a sample medical image captured by the same specific medical imaging sensor and/or at the same specific setup, wherein the sample feature maps include sample features classified as clinically significant; identifying one or more target features depicted in the target feature map having attributes matching sample features depicted in the sample feature maps; and re-classifying the identified target features as artifacts.
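A minimal sketch of the matching-and-relabeling step, assuming each feature carries only a normalized position and a size (the abstract leaves the attribute set open); the `Feature` class, the tolerance values, and the function names are illustrative, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    x: float       # normalized image coordinates
    y: float
    size: float
    label: str     # "clinically_significant" or "artifact"

def matches(a, b, pos_tol=0.02, size_tol=0.2):
    """Hypothetical attribute match: near-identical position and size."""
    return (abs(a.x - b.x) <= pos_tol and
            abs(a.y - b.y) <= pos_tol and
            abs(a.size - b.size) <= size_tol * max(a.size, b.size))

def reclassify_artifacts(target_features, sample_features):
    """Re-label target features that recur at the same place in a sample
    image from the same sensor/setup -- a recurring 'finding' at a fixed
    location is more plausibly a sensor artifact than pathology."""
    for tf in target_features:
        if tf.label != "clinically_significant":
            continue
        if any(matches(tf, sf) for sf in sample_features
               if sf.label == "clinically_significant"):
            tf.label = "artifact"
    return target_features
```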

Biometric pre-identification
11694204 · 2023-07-04

A station device in a biometric pre-identification system uses identity to perform one or more actions. Identities are determined (such as via a backend) using biometric information. A biometric pre-identification device obtains biometric information and/or a digital representation thereof from a person approaching the station device. The biometric pre-identification device transmits this information to the station device, enabling the station device to begin and/or perform various actions. The station device begins or performs the actions using the identity determined based on the biometric information before the person arrives at the station device.

EYE TRACKING SYSTEM

An eye tracking system comprises a controller configured to receive a reference image of an eye of a user and a current image of the eye of the user. The controller is also configured to determine a difference between the reference image and the current image to define a differential image. The differential image has a two-dimensional pixel array of pixel locations that are arranged in a plurality of rows and columns. Each pixel location has a differential intensity value. The controller is further configured to calculate a plurality of row values by combining the differential intensity values in corresponding rows of the differential image and to determine eyelid data based on the plurality of row values.
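The row-combination step can be sketched in a few lines, assuming "combining" means summing per-row differential intensities and taking the row of largest change as a crude eyelid estimate (the abstract does not fix either choice):

```python
import numpy as np

def eyelid_row_profile(reference, current):
    """Differential image and per-row combined intensities (here: sums)."""
    diff = np.abs(current.astype(np.int32) - reference.astype(np.int32))
    row_values = diff.sum(axis=1)          # one value per pixel row
    # Illustrative eyelid estimate: the row with the largest total change.
    eyelid_row = int(np.argmax(row_values))
    return row_values, eyelid_row
```

Summing whole rows makes the estimate robust to single-pixel noise, since an eyelid edge changes an entire row of pixels at once.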

EYE TRACKING SYSTEM
20230005180 · 2023-01-05

An eye tracking system provides a quality measure of a calculated gaze of a user. The eye tracking system receives gaze data including left eye gaze data associated with a left eye of the user and right eye gaze data associated with a right eye of the user. The eye tracking system compares the left eye gaze data and the right eye gaze data to determine a gaze difference value. The eye tracking system provides a gaze quality value of the gaze data based on the gaze difference value.
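One plausible realization, assuming gaze data is given as 3D direction vectors, the "difference value" is the angle between the two eyes' vectors, and quality decays linearly to zero at an assumed threshold (the abstract specifies none of these details):

```python
import numpy as np

def gaze_quality(left_dir, right_dir, max_angle_deg=5.0):
    """Angle between left- and right-eye gaze vectors, mapped to [0, 1].
    max_angle_deg is an assumed threshold at which quality reaches 0."""
    l = np.asarray(left_dir, dtype=float)
    r = np.asarray(right_dir, dtype=float)
    l /= np.linalg.norm(l)
    r /= np.linalg.norm(r)
    angle = np.degrees(np.arccos(np.clip(np.dot(l, r), -1.0, 1.0)))
    return max(0.0, 1.0 - angle / max_angle_deg)
```

The underlying assumption is that both eyes fixate the same point, so a large inter-eye disagreement signals an unreliable tracking result rather than genuine divergence.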

Unique patterns extracted from involuntary eye motions to identify individuals

A method for user authentication is disclosed including capturing involuntary eye movement of an eyeball of a user; generating a unique pattern to identify the user in response to the involuntary eye movement; storing the unique pattern into a secured non-volatile memory device; and authenticating the user with an electronic device in response to the stored unique pattern.

Image sensor with inside biometric authentication and storage

The present technology provides an image sensor capable of enhancing the security of biometric information and lowering the risk of information leakage. An image sensor 10 includes: a biometric information acquisition unit 102 that acquires biometric information; a storage unit 14 that stores reference information to be compared with the biometric information; and a biometric authentication unit 104 that performs biometric authentication by comparing the biometric information with the reference information. The image sensor 10 further includes an encryption processing unit 105 that encrypts the biometric authentication information used to authenticate a living body.

Directional impression analysis using deep learning

Systems and techniques are provided for detecting gaze direction of subjects in an area of real space. The system receives a plurality of sequences of frames of corresponding fields of view in the real space. The system uses sequences of frames in a plurality of sequences of frames to identify locations of an identified subject and gaze directions of the subject in the area of real space over time. The system includes logic having access to a database identifying locations of items in the area of real space. The system identifies items in the area of real space matching the identified gaze directions of the identified subject.
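The item-matching step can be sketched as a ray test, assuming each gaze direction is a 3D ray from the subject's location and an item "matches" when it lies near that ray; the distance threshold and function names are illustrative:

```python
import numpy as np

def items_in_gaze(subject_pos, gaze_dir, item_locations, max_dist=0.3):
    """Items whose location lies near the subject's gaze ray: in front of
    the subject and within max_dist (meters, assumed) of the ray."""
    p = np.asarray(subject_pos, dtype=float)
    d = np.asarray(gaze_dir, dtype=float)
    d /= np.linalg.norm(d)
    hits = []
    for item_id, loc in item_locations.items():
        v = np.asarray(loc, dtype=float) - p
        t = float(np.dot(v, d))             # distance along the ray
        if t <= 0:
            continue                        # behind the subject
        perp = np.linalg.norm(v - t * d)    # perpendicular distance off the ray
        if perp <= max_dist:
            hits.append(item_id)
    return hits
```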

System and method for alignment between real and virtual objects in a head-mounted optical see-through display

The present invention relates to a system for alignment between real and virtual objects in a head-mounted optical see-through display. In an embodiment, the system includes a tracking system with a processor and a headgear attached to the head-mounted optical see-through display. The head-mounted optical see-through display includes at least two cameras mounted on a rigid frame, and at least one object, which may be fixed or mobile, includes a plurality of marker points. The tracking system is operatively coupled to the headgear and the object. The cameras capture two perspective images of the substantially circular entrance pupil of at least one eye and relay the image data to the processor. A memory device coupled to the processor contains the geometric calibration data of the at least two cameras and the pre-calibrated transformation between the cameras. The processor extracts the boundary between the entrance pupil and the iris, calculates the projected center of the boundary in the individual images and, using the calibration data, estimates the center of the entrance pupil in three-dimensional space in relation to the cameras.
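Estimating the 3D pupil center from its projected centers in two calibrated cameras reduces to triangulating two back-projected rays. A standard closest-point (midpoint) triangulation, not taken from the patent but the textbook approach to this step:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation of two camera rays with origins o1, o2 and
    directions d1, d2: returns the 3D point halfway between the closest
    points on the two (generally skew) rays."""
    d1 = np.asarray(d1, dtype=float); d1 /= np.linalg.norm(d1)
    d2 = np.asarray(d2, dtype=float); d2 /= np.linalg.norm(d2)
    o1 = np.asarray(o1, dtype=float); o2 = np.asarray(o2, dtype=float)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                  # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return (o1 + s * d1 + o2 + t * d2) / 2
```

In practice the rays are built from each camera's projected pupil center using the geometric calibration data, and the pre-calibrated inter-camera transformation expresses both rays in a common frame.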

Image acquisition system for off-axis eye images

An image acquisition system determines first and second sets of points defining an iris-pupil boundary and an iris-sclera boundary in an acquired image; determines respective ellipses fitting the first and second sets of points; determines a transformation to transform one of the ellipses into a circle on a corresponding plane; using the determined transformation, transforms the selected ellipse into a circle on the plane; using the determined transformation, transforms the other ellipse into a transformed ellipse on the plane; determines a plurality of ellipses on the plane for defining an iris grid, by interpolating a plurality of ellipses between the circle and the transformed ellipse; moves the determined grid ellipses onto the iris in the image using an inverse transformation of the determined transformation; and extracts an iris texture by unwrapping the iris and interpolating image pixel values at each grid point defined along each of the grid ellipses.
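The core geometric step, transforming a fitted ellipse into a circle on a plane, can be sketched as a 2D affine map; this is a simplified stand-in for the patent's plane transformation, assuming the ellipse is given by center, semi-axes, and rotation:

```python
import numpy as np

def ellipse_to_circle_transform(cx, cy, a, b, theta):
    """2x3 affine matrix mapping an ellipse (center (cx, cy), semi-axes
    a >= b, rotation theta) onto an origin-centered circle of radius a:
    translate to the origin, undo the rotation, stretch the minor axis."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s], [-s, c]])        # rotation by -theta
    S = np.diag([1.0, a / b])              # stretch minor axis up to a
    A = S @ R
    t = -A @ np.array([cx, cy])
    return np.hstack([A, t[:, None]])      # shape (2, 3)
```

The inverse of this map is what moves the interpolated grid ellipses back onto the iris in the original image before the texture is unwrapped.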

Method and apparatus for turning on screen, mobile terminal and storage medium

Provided are a method and apparatus for turning on a screen, a mobile terminal and a storage medium. The method comprises: when a change in a moving state of a mobile terminal meets a preset unlocking condition, a structured light image sensor is activated for imaging; a depth map obtained by the imaging of the structured light image sensor is acquired; a face depth model is constructed according to the depth map; a position of pupils is identified from the face depth model; and when the position of the pupils is within a specified region of eyes, the screen of the mobile terminal is controlled to turn on.
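The final gating step reduces to a point-in-region test; a minimal sketch assuming 2D pupil coordinates and an axis-aligned eye region (the abstract does not specify the region's shape):

```python
def should_turn_on_screen(pupil_pos, eye_region):
    """pupil_pos: (x, y) pupil position from the face depth model;
    eye_region: (x_min, y_min, x_max, y_max) specified region of the eyes.
    The screen turns on only when the pupils fall inside the region."""
    x, y = pupil_pos
    x_min, y_min, x_max, y_max = eye_region
    return x_min <= x <= x_max and y_min <= y <= y_max
```

Gating on pupil position ensures the user is actually looking toward the device, rather than merely moving it, before the screen lights up.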