G06V40/171

Driving support apparatus and driving support method

The driving support apparatus includes a memory configured to store information representing a driver's degree of familiarity with an environment, and a processor. The processor is configured to detect an object existing around the vehicle based on a sensor signal representing the situation around the vehicle, obtained by a sensor mounted on the vehicle; determine whether or not the object is approaching the vehicle such that it may collide with the vehicle; and, when it is determined that it is, notify the driver of the approach via a notification device mounted on the vehicle at a timing corresponding to the driver's degree of familiarity with the environment.
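The abstract does not specify how the notification timing depends on familiarity; a minimal sketch, assuming a hypothetical linear rule in which a driver unfamiliar with the environment is warned earlier (at a higher time-to-collision threshold), might look like:

```python
def warning_threshold(base_ttc_s: float, familiarity: float,
                      max_extra_s: float = 2.0) -> float:
    # Clamp familiarity into [0, 1]; a driver unfamiliar with the
    # environment (0.0) is warned up to max_extra_s seconds earlier
    # than a fully familiar driver (1.0). Both parameters are
    # illustrative, not taken from the abstract.
    f = min(max(familiarity, 0.0), 1.0)
    return base_ttc_s + (1.0 - f) * max_extra_s

def should_notify(time_to_collision_s: float, base_ttc_s: float,
                  familiarity: float) -> bool:
    # Notify once the estimated time-to-collision falls below the
    # familiarity-adjusted threshold.
    return time_to_collision_s <= warning_threshold(base_ttc_s, familiarity)
```

With a 3 s base threshold, a fully unfamiliar driver would be warned from 5 s before a predicted collision, a fully familiar driver only from 3 s.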

Pet safety management method and system, computer equipment and storage medium

The present application relates to a pet safety management method and system, computer equipment and a storage medium. The method includes: acquiring a first video comprising a target pet and a target object, the target object being an active object other than the target pet; analyzing the first video to determine a first state of the target pet and a second state of the target object; acquiring a surrounding environment video of the target pet if the target pet is determined to be in an initial dangerous state according to the first state and the second state; and analyzing the surrounding environment video, determining that the target pet is in a dangerous state if the surrounding environment is an interference-free environment, and controlling a warning device carried by the target pet to send a warning message, so as to prevent the pet from being stolen in a timely manner.
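The two-stage logic above can be sketched as a simple decision function. The state labels (`"struggling"`, `"carrying_pet"`, `"interference_free"`) are hypothetical placeholders for whatever the video analysis actually produces:

```python
def pet_alert(pet_state: str, object_state: str, environment: str) -> str:
    # Stage 1: a provisional "initial dangerous state" inferred from the
    # pet's state and the nearby active object's state.
    initial_danger = (pet_state == "struggling"
                      and object_state == "carrying_pet")
    if not initial_danger:
        return "safe"
    # Stage 2: confirm against the surrounding-environment video; only an
    # interference-free environment escalates to a confirmed danger.
    if environment == "interference_free":
        return "dangerous"  # trigger the wearable warning device
    return "monitor"        # ambiguous surroundings: keep observing
```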

Adaptive eye tracking machine learning model engine

In various examples, an adaptive eye tracking machine learning model engine (“adaptive-model engine”) for an eye tracking system is described. The adaptive-model engine may include an eye tracking or gaze tracking development pipeline (“adaptive-model training pipeline”) that supports collecting data, training, optimizing, and deploying an adaptive eye tracking model, i.e., a customized eye tracking model based on a set of features of an identified deployment environment. The adaptive-model engine supports ensembling the adaptive eye tracking model, which may be trained on gaze vector estimation in surround environments and ensembled from a plurality of eye tracking variant models and a plurality of facial landmark neural network metrics.
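The abstract does not say how the variant models are combined; one plausible reading of "ensemble" is a weighted average of the per-model 3-D gaze direction estimates, renormalised to a unit vector (the weights could, hypothetically, come from the facial landmark metrics):

```python
import math

def ensemble_gaze(gaze_vectors, weights):
    # Weighted average of 3-D gaze direction estimates from several
    # variant models, renormalised to a unit vector. The weighting
    # scheme is an assumption, not specified by the abstract.
    total = sum(weights)
    avg = [sum(w * v[i] for v, w in zip(gaze_vectors, weights)) / total
           for i in range(3)]
    norm = math.sqrt(sum(c * c for c in avg)) or 1.0
    return [c / norm for c in avg]
```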

METHOD AND SYSTEM FOR CONFIDENCE LEVEL DETECTION FROM EYE FEATURES

State-of-the-art techniques attempt to extract insights from eye features, specifically the pupil, with a focus on behavioral analysis rather than confidence level detection. Embodiments of the present disclosure provide a method and system for confidence level detection from eye features using an ML-based approach. The method enables generating an overall confidence level label based on the subject's performance during an interaction, wherein the interaction that is analyzed is captured as a video sequence focusing on the face of the subject. For each frame, facial features comprising an Eye-Aspect ratio, a mouth movement, Horizontal displacements (HDs), Vertical displacements (VDs), Horizontal Squeezes (HSs) and Vertical Peaks (VPs) are computed, wherein the HDs, VDs, HSs and VPs are features derived from points on the eyebrow with reference to the nose tip of the detected face. This is repeated for all frames in the window. A Bi-LSTM model is trained using the facial features to derive the confidence level of the subject.
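The abstract does not define the Eye-Aspect ratio; the commonly used formulation over the six standard per-eye landmarks p1..p6 (eye corner, two upper-lid points, opposite corner, two lower-lid points) is:

```python
import math

def _dist(p, q):
    # Euclidean distance between two 2-D landmark points.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmarks p1..p6 around one eye, in the usual
    # 68-point-model ordering. EAR drops toward 0 as the eye closes.
    p1, p2, p3, p4, p5, p6 = eye
    return (_dist(p2, p6) + _dist(p3, p5)) / (2.0 * _dist(p1, p4))
```

A sequence of such per-frame feature vectors over a window is what a Bi-LSTM would consume.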

IMPROVED FACE LIVENESS DETECTION USING BACKGROUND/FOREGROUND MOTION ANALYSIS
20230222842 · 2023-07-13 ·

Face recognition systems are vulnerable to the presentation of spoofed faces, which may be presented, for example, by an unauthorized user seeking to gain access to a protected resource. A face liveness detection method that addresses this vulnerability utilizes motion analysis to compare the relative movement among three regions of interest in a facial image and, based upon that comparison, makes a face liveness determination.
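The abstract compares relative motion among three regions of interest; a minimal two-region sketch of the same idea (frame differencing per ROI, then a ratio test with an assumed threshold) illustrates the principle:

```python
def roi_motion(prev, curr, roi):
    # Mean absolute frame difference inside a region of interest.
    # Frames are 2-D grayscale arrays (lists of rows);
    # roi = (top, bottom, left, right), half-open ranges.
    top, bottom, left, right = roi
    total = n = 0
    for y in range(top, bottom):
        for x in range(left, right):
            total += abs(curr[y][x] - prev[y][x])
            n += 1
    return total / n

def is_live(prev, curr, face_roi, background_roi, ratio_threshold=2.0):
    # Hypothetical decision rule: a live face moves noticeably relative
    # to a static background, whereas a waved photo or screen moves the
    # face and its surroundings together.
    face_m = roi_motion(prev, curr, face_roi)
    background_m = roi_motion(prev, curr, background_roi) + 1e-6
    return face_m / background_m >= ratio_threshold
```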

Liveness detection
11704939 · 2023-07-18 ·

Biometrics are increasingly used to provide authentication and/or verification of a user in many security and financial applications, for example. However, “spoof attacks” through the presentation of “false” biometric artefacts allow attackers to fool these biometric verification systems. Accordingly, it would be beneficial to further differentiate the acquired biometric characteristics into feature spaces relating to living and non-living biometrics, to prevent non-living biometric credentials from triggering biometric verification. The inventors have established a variety of “liveness” detection methodologies which can block either low-complexity spoofs or more advanced spoofs. Such techniques may provide for monitoring of responses to challenges, discretely or in combination with additional aspects such as the timing of the user's responses, depth detection within acquired images, and comparison of images from other cameras with database data.

IMAGE-BASED FITTING OF A WEARABLE COMPUTING DEVICE
20230020652 · 2023-01-19 ·

A system and method are provided for sizing and fitting a head mounted wearable computing device for a user based on image data of the head of the user, including a known reference device having a known scale. The system and method may include capturing image data including a face of the user to be fitted for the head mounted wearable computing device. The known reference device having the known scale is compared to features detected in the image data to determine a scaling factor. The scaling factor is used to size, or assign measures to facial features detected in the image data. A three-dimensional model of the head of the user may be generated from the captured image data.
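The scaling step reduces to simple arithmetic: a reference object of known physical size gives a millimetres-per-pixel factor, which then converts pixel distances between detected facial features into physical measures. The worn-glasses example below is illustrative, not taken from the abstract:

```python
def scaling_factor(known_width_mm: float, detected_width_px: float) -> float:
    # Millimetres per pixel, from a reference object of known physical
    # size detected in the image.
    return known_width_mm / detected_width_px

def measure_mm(pixel_distance: float, mm_per_px: float) -> float:
    # Assign a physical measure to a pixel distance between facial features.
    return pixel_distance * mm_per_px
```

For instance, a reference frame known to be 140 mm wide that appears 280 px wide yields 0.5 mm/px, so a 126 px interpupillary distance would be sized at 63 mm.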

DEVICE, SYSTEM AND METHOD FOR VERIFIED SELF-DIAGNOSIS

Methods and systems are provided for verifying the results of a self-test by a subject using a test kit. The subject's identity may be verified, for example, using AI-assisted facial recognition and/or data obtained from scanned government-issued documents of the subject. Images obtained while the test is conducted may be used to determine if the test is conducted properly, and images obtained of the completed self-test may be analyzed to determine the test results. Test results, verified as belonging to the subject and as having been correctly performed, may be uploaded to a remote database as part of a health “passport” program.

FACIAL STRUCTURE ESTIMATING DEVICE, FACIAL STRUCTURE ESTIMATING METHOD, AND FACIAL STRUCTURE ESTIMATING PROGRAM
20230222815 · 2023-07-13 ·

A facial structure estimating device 10 includes an acquiring unit 11 and a controller 13. The acquiring unit 11 acquires a facial image. The controller 13 functions as an identifier 15, an estimator 16, and an evaluator 17. The identifier 15 identifies an individual based on a facial image. The estimator 16 estimates a facial structure based on the facial image. The evaluator 17 calculates the validity of the facial structure estimated by the estimator 16, and allows facial images and facial structures whose validity is greater than or equal to a threshold to be applied to the training of the estimator 16. The controller 13 bases the application of those facial images and facial structures to the training of the estimator 16 on the identification results of individuals produced by the identifier 15.
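The gating described above amounts to filtering candidate training pairs by a validity score and grouping them by the identified individual. A minimal sketch, with assumed dictionary keys standing in for the evaluator's and identifier's outputs:

```python
def select_training_data(samples, threshold):
    # samples: dicts with keys "person_id" (identifier output), "image",
    # "structure" (estimator output), and "validity" (evaluator score).
    # Keep only pairs whose validity meets the threshold, grouped by the
    # individual the identifier recognised.
    selected = {}
    for s in samples:
        if s["validity"] >= threshold:
            selected.setdefault(s["person_id"], []).append(
                (s["image"], s["structure"]))
    return selected
```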

METHODS, SYSTEMS, AND MEDIA FOR CONTEXT-AWARE ESTIMATION OF STUDENT ATTENTION IN ONLINE LEARNING

Methods, systems and media for context-aware estimation of student attention in online learning are described. An attention monitoring system filters or restricts the time periods in which student attention is monitored or assessed to those time periods in which student attention is important. These time periods of high attention importance may be determined by processing data from the teacher, such as audio data representing the teacher's voice and/or visual presentation data representing slides or other visual material being presented to the students. Various types of presenter data from the teacher and attendee data from the students may be used in assessing the importance of attention and each student's attention during each time period. The presenter may be provided with feedback in various forms showing student attention performance aggregated or segmented according to various criteria.
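The core filtering idea can be sketched as restricting attention samples to the high-importance time windows before aggregating. The flat `(timestamp, score)` representation is an assumption for illustration:

```python
def mean_attention(samples, windows):
    # samples: (timestamp, attention_score) pairs for one student.
    # windows: (start, end) periods judged to be of high attention
    # importance (e.g., derived from the teacher's voice or slides).
    # Average attention only over samples inside those windows.
    scores = [score for t, score in samples
              if any(start <= t < end for start, end in windows)]
    return sum(scores) / len(scores) if scores else None
```

Returning `None` outside any important window reflects the abstract's point that attention is simply not assessed during low-importance periods.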