
Machine Learning Architecture for Imaging Protocol Detector

Systems and methods disclosed herein use a first machine learning architecture and a second machine learning architecture. The first machine learning architecture executes on a first processor, receives a first image representing a mouth of a user, determines user feedback for outputting to the user based on a first machine learning model, and outputs the user feedback for capturing a second image representing the mouth of the user. The second machine learning architecture executes on a second processor, receives the first image and the second image, and generates a 3D model of at least a portion of a dental arch of the user based on the first image and the second image, where the 3D model is generated based on a second machine learning model of the second machine learning architecture.
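The two-architecture split above can be sketched as a minimal pipeline. All names, scores, and thresholds here are illustrative assumptions, not from the patent: a lightweight on-device model rates capture quality and emits feedback, while a second model on a separate processor reconstructs geometry from both images.

```python
def feedback_model(image):
    """First ML model (stub): rate framing quality of a mouth image, 0..1."""
    return 0.4 if image.get("blurry") else 0.9

def first_architecture(image):
    """Runs on the first processor: decide what feedback to show the user."""
    score = feedback_model(image)
    if score < 0.5:
        return "Hold the camera steady and retake the photo"
    return "Image accepted"

def second_architecture(first_image, second_image):
    """Runs on the second processor: fuse both captures into a 3D model (stub)."""
    return {"type": "dental_arch_mesh",
            "sources": [first_image["id"], second_image["id"]]}

img1 = {"id": 1, "blurry": True}
feedback = first_architecture(img1)   # user is told to retake
img2 = {"id": 2, "blurry": False}
model_3d = second_architecture(img1, img2)
```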

STABILIZATION OF FACE IN VIDEO
20230049509 · 2023-02-16

Placement of a face depicted within a video may be determined. One or more stabilization options for the video may be obtained. The stabilization option(s) may include an angle stabilization option, a position stabilization option, and/or a size stabilization option. The video may be stabilized based on the placement of the face and the stabilization option(s).
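The stabilization described above can be sketched as a per-frame similarity transform that cancels the face's deviation from a reference placement, honoring whichever options are enabled. The placement representation (center, angle, size) and all names are assumptions for illustration:

```python
import numpy as np

def stabilizing_transform(face, ref, *, angle=True, position=True, size=True):
    """Return a 2x3 affine matrix mapping this frame so the face matches `ref`.

    face/ref: dicts with 'center' (x, y), 'angle' (radians), 'size' (scalar).
    Disabled options leave that component of the placement untouched.
    """
    theta = (ref["angle"] - face["angle"]) if angle else 0.0
    s = (ref["size"] / face["size"]) if size else 1.0
    c, d = s * np.cos(theta), s * np.sin(theta)
    R = np.array([[c, -d], [d, c]])                 # rotation + scale
    target = np.array(ref["center"] if position else face["center"])
    t = target - R @ np.array(face["center"])       # translation to target
    return np.hstack([R, t[:, None]])               # usable with cv2.warpAffine

ref = {"center": (100.0, 100.0), "angle": 0.0, "size": 50.0}
frame_face = {"center": (110.0, 90.0), "angle": 0.1, "size": 55.0}
M = stabilizing_transform(frame_face, ref)
# Applying M to the face center lands it on the reference center:
stabilized = M[:, :2] @ np.array(frame_face["center"]) + M[:, 2]
```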

METHOD AND SYSTEM FOR DETERMINING A FITTED POSITION OF AN OPHTHALMIC LENS WITH RESPECT TO A WEARER REFERENTIAL AND METHOD FOR DETERMINING A LENS DESIGN OF AN OPHTHALMIC LENS

A method for determining a fitted position of an ophthalmic lens to be mounted on a spectacle frame worn by a wearer, the fitted position being defined with respect to a wearer referential linked to the head of the wearer. The method includes defining at least one fitting criterion relating to the positioning of the ophthalmic lens with respect to the spectacle frame; determining frame 3D data at least partially representative of the geometry and position of the spectacle frame with respect to the wearer referential; determining lens 3D data at least partially representative of the geometry of at least a peripheral portion of the ophthalmic lens; and determining the fitted position of said ophthalmic lens with respect to the wearer referential, using the frame 3D data and said lens 3D data, to fit the ophthalmic lens within the spectacle frame while meeting the fitting criterion.

METHOD AND APPARATUS FOR PROCESSING IMAGE SIGNAL, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM

A method and apparatus for processing an image signal, an electronic device, and a computer-readable storage medium. The method includes: obtaining a digital image signal of a target image, the target image including object imaging corresponding to an object; identifying a first area of the object imaging in the target image from the digital image signal; removing the object imaging from the target image based on the first area to obtain a background image corresponding to an original background; performing image inpainting processing on the first area of the background image to obtain a filled image, the filled image including the original background and a perspective background connected to the original background; identifying a second area in the object imaging and removing an imaging portion corresponding to the second area from the object imaging to obtain adjusted object imaging; and superimposing the adjusted object imaging on the first area.
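The remove-then-inpaint step above can be sketched with plain numpy. A naive mean fill stands in for real image inpainting here, and the array shapes and names are illustrative assumptions only:

```python
import numpy as np

def remove_and_inpaint(image, object_mask):
    """Blank out the masked first area, then fill it from the background.

    The fill value is the mean of the remaining background pixels -- a crude
    stand-in for proper inpainting of the first area.
    """
    background = image.astype(float).copy()
    background[object_mask] = np.nan              # object imaging removed
    fill_value = np.nanmean(background)           # estimate from background
    return np.where(np.isnan(background), fill_value, background)

image = np.array([[10., 10., 10.],
                  [10., 99., 10.],
                  [10., 10., 10.]])
mask = np.zeros_like(image, dtype=bool)
mask[1, 1] = True                                 # first area: the object pixel
filled = remove_and_inpaint(image, mask)
```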

SYSTEMS AND METHODS FOR PROVIDING DISPLAYED FEEDBACK WHEN USING A REAR-FACING CAMERA

A system includes a processor and a non-transitory computer-readable medium containing instructions that, when executed by the processor, cause the processor to perform operations comprising displaying a prompt to a user of a mobile device, on a display of the mobile device, to capture an image representing at least a portion of a mouth of the user using a rear-facing camera of the mobile device, where the rear-facing camera is on the side of the mobile device opposite the display. The operations further comprise controlling the rear-facing camera to enable the rear-facing camera to capture the image, receiving the image, and outputting user feedback based on the image, where the user feedback is outputted on the display on the opposite side of the mobile device from the rear-facing camera.

DATA OBTAINING METHOD AND APPARATUS
20230052356 · 2023-02-16

A first frame of time-of-flight (TOF) data including projection-off data and infrared data is obtained. After determining that the infrared data contains a data block in which the number of data points with values greater than a first threshold exceeds a second threshold, TOF data for generating a first frame of a TOF image is obtained based on a difference between the infrared data and the projection-off data. Because such a data block is an overexposed data block, and the projection-off data is TOF data acquired by a TOF camera with the TOF light source off, the difference between the infrared data and the projection-off data can correct the overexposure, improving the quality of the first frame of the TOF image.
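The overexposure test and correction above can be sketched in a few lines. The block size and both threshold values are illustrative assumptions:

```python
import numpy as np

FIRST_THRESHOLD = 200    # per-pixel intensity limit (assumed)
SECOND_THRESHOLD = 4     # max allowed count of saturated pixels per block (assumed)

def block_is_overexposed(ir_block):
    """A block is overexposed when the count of samples above the first
    threshold exceeds the second threshold."""
    return np.count_nonzero(ir_block > FIRST_THRESHOLD) > SECOND_THRESHOLD

def corrected_tof_data(ir_block, projection_off_block):
    """Subtracting the projection-off data (sensor response with the TOF
    light source off) cancels the ambient/offset term from the IR data."""
    return ir_block - projection_off_block

ir = np.full((4, 4), 250.0)   # 16 saturated samples -> overexposed block
off = np.full((4, 4), 60.0)   # response with the light source off
tof = corrected_tof_data(ir, off) if block_is_overexposed(ir) else ir
```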

ASYMMETRIC FACIAL EXPRESSION RECOGNITION

The present disclosure describes techniques for facial expression recognition. A first loss function may be determined based on a first set of feature vectors associated with a first set of images depicting facial expressions and a first set of labels indicative of the facial expressions. A second loss function may be determined based on a second set of feature vectors associated with a second set of images depicting asymmetric facial expressions and a second set of labels indicative of the asymmetric facial expressions. The first loss function and the second loss function may be used to determine a maximum loss function. The maximum loss function may be applied during training of a model. The trained model may be configured to predict at least one asymmetric facial expression in a subsequently received image.
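The maximum-loss idea above can be sketched with numpy. The real system presumably trains a deep network on image features; here the feature extractor is replaced by random class scores, and all names and shapes are assumptions:

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean cross-entropy over a batch of class scores and integer labels."""
    z = logits - logits.max(axis=1, keepdims=True)          # stable softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
# First set: ordinary facial expressions; second set: asymmetric expressions.
symmetric_logits = rng.normal(size=(8, 5))
symmetric_labels = rng.integers(0, 5, size=8)
asymmetric_logits = rng.normal(size=(8, 5))
asymmetric_labels = rng.integers(0, 5, size=8)

first_loss = cross_entropy(symmetric_logits, symmetric_labels)
second_loss = cross_entropy(asymmetric_logits, asymmetric_labels)
# The larger of the two losses drives each training step, so the harder
# (typically asymmetric) set cannot be ignored by the optimizer.
max_loss = max(first_loss, second_loss)
```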

APPARATUS AND METHOD WITH IMAGE RECOGNITION-BASED SECURITY

An apparatus and a method with image recognition-based security are disclosed. For an unlocked terminal, the method includes: tracking a face detected in a previous frame; detecting a background region change between the previous frame and a current frame based on a region of the tracked face; when the background region change is not detected, determining whether a state maintenance time fails to meet a preset time; in response to the state maintenance time failing to meet the preset time, determining an operation mode to be a first operation mode, which determines whether recognition succeeds for the current frame; and performing the first operation mode, including performing face detection on the current frame and maintaining the unlocked state of the terminal for the current frame when the face is detected, representing that recognition succeeded for the current frame.

SYSTEMS AND METHODS FOR EVALUATING HEALTH OUTCOMES
20230051436 · 2023-02-16

A system and method for determining a health outcome, comprising: receiving first and second images or videos of a wound of a patient; comparing the images or videos to detect a characteristic of the wound, the characteristic including an identification of a change in the wound; receiving at least one non-image or non-video data input that includes data about the patient; executing a machine learning algorithm comprising a dataset of images or videos to analyze the identified change in the wound, to correlate at least one first image or video and at least one second image or video with the at least one non-image or non-video data input, and to train the machine learning algorithm with the identification of the change in the wound; and generating a medical outcome prediction regarding a status and recovery of the patient in response to correlating the at least one non-image or non-video data input with the first and second images or videos.

Method and system for detecting physical presence

A method including providing a sensor device including one or several sensors. The sensor device is arranged to perform at least one high-power type measurement and at least one low-power type measurement and includes at least one image sensor arranged to depict a person by a measurement of said high-power type. Each of the low-power type measurements over time requires less electric power for operation as compared to each of the high-power type measurements. The method includes detecting a potential presence of the person using at least one of said low-power type measurements. The method includes producing, using one of the high-power type measurements, an image depicting a person and detecting a presence of the person based on image analysis of the image. The method includes detecting, using at least one of the low-power type measurements, a maintained presence of the person. The method includes failing to detect a maintained presence of the person.
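The measurement scheduling described above can be sketched as a small state machine. The sensor kinds and event encoding are stand-ins, not from the patent: cheap low-power measurements (e.g. a PIR sensor) gate the expensive image capture, and once image analysis confirms a person, low-power measurements alone maintain the "present" state until they fail.

```python
def detect_presence(events):
    """Walk a stream of (kind, detected) sensor events; return state history.

    kind: 'low' for a low-power measurement, 'high' for image analysis
    of a high-power image capture.
    """
    state, history = "idle", []
    for kind, detected in events:
        if state == "idle" and kind == "low" and detected:
            state = "potential"          # wake the image sensor
        elif state == "potential" and kind == "high":
            state = "present" if detected else "idle"
        elif state == "present" and kind == "low" and not detected:
            state = "idle"               # maintained presence lost
        history.append(state)
    return history

events = [("low", True),    # low-power sensor trips -> potential presence
          ("high", True),   # image analysis confirms the person
          ("low", True),    # low-power check maintains presence
          ("low", False)]   # failing to detect maintained presence
states = detect_presence(events)
```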