
SYSTEM AND METHOD FOR GAZE AND POSE DETECTION TO ANTICIPATE OPERATOR INTENT
20230039764 · 2023-02-09

A system and method for inferring operator intent by detecting operator focus incorporates cameras positioned within a cockpit or control space of a vehicle and oriented at an operator of the vehicle. The cameras capture images of the operator in a control seat; the images are analyzed (either individually or sequentially) to determine a gaze and/or body pose of the operator (including, e.g., a position and orientation of the torso and limbs). By comparing the determined gaze and/or body pose to the positions and orientations of potential focus targets within the control space (e.g., windows, display units, and/or control panels that the operator may engage with visually and/or physically), the system predicts the most likely future focus target or targets: what the operator is most likely to visually and/or physically engage with next. Operator intent may be further analyzed to identify potentially abnormal or anomalous behavior.
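The target-prediction step above can be sketched as a simple geometric ranking: given an estimated gaze ray and the known positions of candidate focus targets in the control space, rank the targets by angular distance from the ray. This is a minimal illustrative sketch, not the patented implementation; the function and target names are assumptions.

```python
import math

def predict_focus_target(eye_pos, gaze_dir, targets):
    """Rank candidate focus targets by angular proximity to the gaze ray.

    eye_pos  -- (x, y, z) position of the operator's eyes
    gaze_dir -- unit vector of the estimated gaze direction
    targets  -- dict mapping target name to its (x, y, z) centre position
    Returns target names sorted from most to least likely next focus.
    """
    def angle_to(target_pos):
        # Vector from the eyes to the target, normalised.
        v = tuple(t - e for t, e in zip(target_pos, eye_pos))
        norm = math.sqrt(sum(c * c for c in v))
        dot = sum(a * b for a, b in zip(gaze_dir, v)) / norm
        # Clamp to guard against floating-point drift before acos.
        return math.acos(max(-1.0, min(1.0, dot)))

    return sorted(targets, key=lambda name: angle_to(targets[name]))
```

In a deployed system the gaze ray would come from the image-analysis stage, and the ranking would typically be smoothed over successive frames before inferring intent or flagging anomalous behavior.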

Three-dimensional (3D) shape modeling based on two-dimensional (2D) warping

An electronic device and method for 3D modeling based on 2D warping is disclosed. The electronic device acquires a color image of a face of a user, depth information corresponding to the color image, and a point cloud of the face. A 3D mean-shape model of a reference 3D face is acquired and rigidly aligned with the point cloud. A 2D projection of the aligned 3D mean-shape model is generated. The 2D projection includes a set of landmark points associated with the aligned 3D mean-shape model. The 2D projection is warped such that the set of landmark points in the 2D projection is aligned with a corresponding set of feature points in the color image. A 3D correspondence between the aligned 3D mean-shape model and the point cloud is determined for a non-rigid alignment of the aligned 3D mean-shape model, based on the warped 2D projection and the depth information.
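The 2D warping step, aligning projected landmark points with feature points detected in the color image, can be illustrated with a least-squares fit of a scale-plus-translation transform between two landmark sets. This is a deliberately minimal sketch under that assumption; real pipelines typically fit a full similarity or piecewise-affine warp, and all names here are illustrative.

```python
def fit_scale_translation(src, dst):
    """Least-squares scale s and translation t such that dst_i ~= s * src_i + t.

    src, dst -- equal-length lists of (x, y) landmark points.
    """
    n = len(src)
    # Centroids of both landmark sets.
    mx = sum(p[0] for p in src) / n
    my = sum(p[1] for p in src) / n
    ux = sum(p[0] for p in dst) / n
    uy = sum(p[1] for p in dst) / n
    # Closed-form scale: ratio of cross- to self-covariance of centered points.
    num = sum((sx - mx) * (dx - ux) + (sy - my) * (dy - uy)
              for (sx, sy), (dx, dy) in zip(src, dst))
    den = sum((sx - mx) ** 2 + (sy - my) ** 2 for sx, sy in src)
    s = num / den
    return s, (ux - s * mx, uy - s * my)

def warp(points, s, t):
    """Apply the fitted transform to a list of 2D points."""
    return [(s * x + t[0], s * y + t[1]) for x, y in points]
```

The fitted transform would then be applied to the whole 2D projection before establishing the 3D correspondence for the non-rigid alignment.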

Systems and methods for selecting a best facial image of a target human face

The present disclosure relates to systems and methods for selecting a best facial image of a target human face. The methods may include determining whether a candidate facial image is obtained before a time point in a time period threshold, wherein the candidate facial image has the greatest quality score of the target human face among a plurality of facial images of the target human face; in response to a determination that the candidate facial image is obtained before the time point, determining the candidate facial image to be the best facial image of the target human face; and storing the best facial image together with a face ID and the greatest quality score of the target human face in a face log.
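The selection logic can be sketched in a few lines: the candidate is the highest-quality observation of the track so far, and it is committed to the face log only if it was captured before the cutoff time point. The dictionary layout and function name below are illustrative assumptions, not the claimed data format.

```python
def maybe_finalize_best_face(observations, cutoff):
    """Return a face-log entry, or None if selection should keep waiting.

    observations -- list of dicts with keys "timestamp", "quality",
                    "face_id", and "image", all for one tracked face.
    cutoff       -- the time point within the time period threshold.
    """
    # Candidate = highest-quality image of the track so far.
    candidate = max(observations, key=lambda o: o["quality"])
    # Commit it as the best facial image only if captured before the cutoff.
    if candidate["timestamp"] < cutoff:
        return {"face_id": candidate["face_id"],
                "image": candidate["image"],
                "quality": candidate["quality"]}
    return None
```

Returning None models the branch where the candidate arrived after the time point, so the tracker would continue collecting frames instead of finalizing early.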

Personalized videos featuring multiple persons

Provided are systems and methods for personalized videos featuring multiple persons. An example method includes: receiving a user selection of a video having at least one frame with metadata that includes a first location and a second location; receiving an image of a source face and a further image of a further source face; modifying the image of the source face to generate an image of a modified source face, and modifying the further image of the further source face to generate an image of a modified further source face; inserting, in the at least one frame of the video, the image of the modified source face at the first location and the image of the modified further source face at the second location to generate a personalized video; and sending the personalized video via a communication chat.
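The insertion step, compositing two modified face images into a frame at the locations given by the metadata, can be sketched with frames represented as 2D pixel grids. This is a toy illustration (no blending or alpha masking, which a real compositor would use); the representation and names are assumptions.

```python
def insert_faces(frame, faces_with_locations):
    """Paste face patches into a frame at the metadata locations.

    frame -- list of pixel rows (a 2D grid).
    faces_with_locations -- list of (patch, (row, col)) pairs; each patch is
    pasted with its top-left corner at (row, col), clipped to frame bounds.
    Returns a new frame; the input is left unmodified.
    """
    out = [row[:] for row in frame]
    for patch, (r0, c0) in faces_with_locations:
        for dr, prow in enumerate(patch):
            for dc, val in enumerate(prow):
                r, c = r0 + dr, c0 + dc
                if 0 <= r < len(out) and 0 <= c < len(out[0]):
                    out[r][c] = val
    return out
```

Running this per frame, with the first location for the modified source face and the second for the modified further source face, yields the personalized video to send over the communication chat.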

Automatic image-based skin diagnostics using deep learning

There is shown and described a deep-learning-based system and method for skin diagnostics, as well as testing metrics showing that such a system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method for monitoring a skin treatment regime using the deep-learning-based system and method for skin diagnostics.

Facial recognition technology for improving motor carrier regulatory compliance

Methods for improving compliance with regulations pertaining to vehicle driving records are disclosed. One or more digital images from a camera mounted in a vehicle are received. Based on a determination that the vehicle has hours of service that have not been assigned to a driver, a subset of the one or more digital images corresponding to the hours of service is identified based on the timestamps. The subset of the one or more digital images is processed to identify a correspondence between a face of a person included in the one or more digital images and a face of a known person. Based on the correspondence transgressing a threshold level of correspondence, a user interface is generated for presentation on a device. The user interface includes an interactive user interface element for accepting a recommendation to assign the known person as the driver for the unassigned hours of service.
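The matching pipeline above — filter images to the unassigned hours of service, compare faces against known drivers, and recommend only when the correspondence transgresses the threshold — can be sketched as follows. Cosine similarity over face embeddings is an assumed stand-in for the unspecified correspondence measure, and all names are hypothetical.

```python
import math

def recommend_driver(images, unassigned_intervals, known_faces, threshold):
    """Return the name of a recommended driver, or None.

    images -- list of (timestamp, embedding) pairs from the in-vehicle camera.
    unassigned_intervals -- list of (start, end) hours-of-service periods
    with no assigned driver.
    known_faces -- dict mapping driver name to a reference embedding.
    """
    def in_unassigned(ts):
        return any(s <= ts <= e for s, e in unassigned_intervals)

    def similarity(a, b):
        # Cosine similarity: an assumed correspondence measure.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    # Subset of images whose timestamps fall in the unassigned periods.
    subset = [emb for ts, emb in images if in_unassigned(ts)]
    best_name, best_score = None, 0.0
    for emb in subset:
        for name, ref in known_faces.items():
            score = similarity(emb, ref)
            if score > best_score:
                best_name, best_score = name, score
    # Recommend only when the correspondence transgresses the threshold.
    return best_name if best_score > threshold else None
```

A non-None result would drive the interactive UI element offering to assign that person to the unassigned hours of service.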

MIXED, VIRTUAL AND AUGMENTED REALITY HEADSET AND SYSTEM
20230011002 · 2023-01-12

A mixed, virtual and augmented reality headset having a front casing (2) with a housing receiving a smartphone (19) facing the holographic display (5); a curved holographic display (5) in the front portion of the headset reflecting a projected image (11) via the display of a smartphone (19) and simultaneously allowing the user to see through same; a motorised mirror (14) positioned in a withdrawn position or in an extended position in front of the holographic display (5) reflecting the projected image (11) via the smartphone (19); two motorised lenses (15) positioned in a withdrawn position or in an extended position in front of the pupils (13) of the user; a mirror system (16) reflecting a real external image (10) with respect to the headset (1) towards a camera of the smartphone (19); and a control unit (50) controlling the position of the motorised lenses (15) and mirror (14).

BYSTANDER-CENTRIC PRIVACY CONTROLS FOR RECORDING DEVICES
20230011087 · 2023-01-12

A recording device provides bystander-centric privacy controls for authorizing the storage of a bystander's identifying information (e.g., video or audio recordings of the bystander). Before a recording device can store identifying information of bystanders, the bystanders may indicate to the recording device whether they authorize the storage. If the bystanders do not authorize the storage, the recording device may modify the identifying information captured by sensors, such as a video camera or a microphone, such that the identity of the non-authorizing bystander is not identifiable through the modified identifying information. Thus, bystanders are given increased agency over whether they want to be recorded. Further, if the bystanders do not want to be recorded, sensor data that may identify them is modified by the recording device to prevent unwanted exposure of their identity in recorded content.
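The modification step, obscuring sensor data for bystanders who did not authorize storage, can be sketched for the video case as zeroing out detected face regions of non-consenting bystanders. Blanking is an assumed placeholder for whatever modification (blurring, pixelation, voice distortion) the device actually applies; the names are illustrative.

```python
def redact_unauthorized(frame, detections, consents):
    """Blank out regions belonging to bystanders who did not authorize storage.

    frame -- 2D list of pixel values.
    detections -- list of (bystander_id, (r0, c0, r1, c1)) bounding boxes,
    with r1/c1 exclusive.
    consents -- set of bystander_ids that authorized storage.
    Returns a new frame; the original is left untouched.
    """
    out = [row[:] for row in frame]
    for bid, (r0, c0, r1, c1) in detections:
        if bid in consents:
            continue  # authorized: keep identifying information as captured
        for r in range(r0, r1):
            for c in range(c0, c1):
                out[r][c] = 0  # assumed modification: blank the region
    return out
```

Only the redacted frame would reach storage, so the non-authorizing bystander's identity is not recoverable from the recorded content.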

Facial verification method and apparatus

A facial verification method includes separating a query face image into color channel images of different color channels; obtaining a multi-color-channel target face image, in which shading of the query face image is reduced, based on a smoothed image and a gradient image of each of the color channel images; extracting a face feature from the multi-color-channel target face image; and determining whether face verification is successful based on the extracted face feature.
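The shading-reduction idea can be illustrated with a self-quotient-style normalization per color channel: divide each pixel by a locally smoothed value so that slowly varying illumination cancels out. The 3x3 box filter and the quotient formulation are assumptions for illustration; the claimed method combines smoothed and gradient images in an unspecified way.

```python
def box_smooth(channel):
    """3x3 box filter with edge replication -- a simple smoothed image."""
    h, w = len(channel), len(channel[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            acc = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr = min(max(r + dr, 0), h - 1)
                    cc = min(max(c + dc, 0), w - 1)
                    acc += channel[rr][cc]
            out[r][c] = acc / 9.0
    return out

def reduce_shading(channel, eps=1e-6):
    """Self-quotient-style shading reduction for one color channel.

    Dividing each pixel by its local smoothed value cancels slowly varying
    illumination while preserving local contrast (edges, texture).
    """
    sm = box_smooth(channel)
    return [[p / (s + eps) for p, s in zip(prow, srow)]
            for prow, srow in zip(channel, sm)]
```

Applying this to each separated color channel and recombining would give a shading-reduced multi-color-channel image from which face features can be extracted more robustly under uneven lighting.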

Determination of position of a head-mounted device on a user

There is provided a method and system for determining if a head-mounted device for extended reality (XR) is correctly positioned on a user, and optionally performing a position correction procedure if the head-mounted device is determined to be incorrectly positioned on the user. Embodiments include: performing eye tracking by estimating, based on a first image of a first eye of the user, a position of a pupil in two dimensions; determining whether the estimated position of the pupil of the first eye is within a predetermined allowable area in the first image; and, if the determined position of the pupil of the first eye is inside the predetermined allowable area, concluding that the head-mounted device is correctly positioned on the user; or, if the determined position of the pupil of the first eye is outside the predetermined allowable area, concluding that the head-mounted device is incorrectly positioned on the user.
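The positioning check reduces to two steps: estimate the 2D pupil position in the eye image, then test whether it falls inside the predetermined allowable area. The darkest-region-centroid pupil estimate below is a crude assumed stand-in for a real eye tracker, and the names are illustrative.

```python
def estimate_pupil(gray, threshold=50):
    """Crude pupil estimate: centroid of pixels darker than `threshold`.

    gray -- 2D list of grayscale values for the eye image.
    Returns (x, y) in image coordinates, or None if no dark pixels exist.
    """
    pts = [(c, r) for r, row in enumerate(gray)
           for c, v in enumerate(row) if v < threshold]
    if not pts:
        return None
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def headset_correctly_positioned(pupil_xy, allowed_box):
    """True when the estimated pupil lies inside the allowable area.

    allowed_box -- (x_min, y_min, x_max, y_max) predetermined area.
    """
    x, y = pupil_xy
    x0, y0, x1, y1 = allowed_box
    return x0 <= x <= x1 and y0 <= y <= y1
```

A False result corresponds to the "incorrectly positioned" conclusion and could trigger the optional position correction procedure.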