Patent classifications
G06V40/171
PHOTOGRAPH PROCESSING METHOD AND SYSTEM
Embodiments of the present invention provide a photograph processing method and system. The method includes: performing face detection on a photograph to obtain a detected human face; performing alignment on the detected human face, so as to obtain contour points of a left eye and a right eye of the detected human face; separately calculating a left eye area, being an area of the left eye, and a right eye area, being an area of the right eye, according to the contour points of the left eye and the right eye; performing stretching transformation on each pixel in the left eye area and the right eye area to generate a stretched left eye area and a stretched right eye area; and performing histogram equalization processing on the stretched left eye area and the stretched right eye area, so as to generate a processed photograph.
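The abstract's final step is histogram equalization of the (stretched) eye regions. The sketch below is not the patented method, only the standard histogram-equalization operation it invokes, assuming an 8-bit grayscale crop of an eye area as input; the function name is illustrative.

```python
import numpy as np

def equalize_region(region: np.ndarray) -> np.ndarray:
    """Histogram-equalize an 8-bit grayscale region (e.g. a cropped eye area)."""
    # Per-intensity counts and their cumulative distribution.
    hist = np.bincount(region.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # smallest nonzero cumulative count
    # Map each intensity through the normalized CDF to spread contrast.
    lut = np.clip(
        np.round((cdf - cdf_min) / max(region.size - cdf_min, 1) * 255), 0, 255
    ).astype(np.uint8)
    return lut[region]
```

A low-contrast patch such as `[[50, 51], [52, 53]]` is spread across the full 0–255 range.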
Object modeling using light projection
A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
SYSTEM AND METHOD FOR AUTOMATIC DRIVER IDENTIFICATION
A method for driver identification including recording a first image of a vehicle driver; extracting a set of values for a set of facial features of the vehicle driver from the first image; determining a filtering parameter; selecting a cluster of driver identifiers from a set of clusters, based on the filtering parameter; computing a probability that the set of values is associated with each driver identifier of the cluster; determining, at the vehicle sensor system, driving characterization data for a driving session; and in response to the computed probability exceeding a first threshold probability: determining that the set of values corresponds to one driver identifier within the selected cluster, and associating the driving characterization data with the one driver identifier.
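The core of the claim is matching extracted facial-feature values against the driver identifiers in a selected cluster and accepting the best match only above a threshold probability. A minimal sketch, assuming stored per-driver feature vectors and using cosine similarity plus a softmax as a stand-in for the unspecified probability model (the function and parameter names are illustrative):

```python
import numpy as np

def identify_driver(features, cluster, threshold=0.8):
    """Match observed facial-feature values against driver templates in a cluster.

    cluster: dict mapping (hypothetical) driver IDs to stored feature vectors.
    Returns (driver_id, probability), with driver_id = None below the threshold.
    """
    ids = list(cluster)
    refs = np.stack([cluster[i] for i in ids])
    # Cosine similarity between the observed features and each stored template.
    sims = refs @ features / (np.linalg.norm(refs, axis=1) * np.linalg.norm(features))
    # Turn similarities into a probability distribution over driver IDs.
    probs = np.exp(sims) / np.exp(sims).sum()
    best = int(np.argmax(probs))
    if probs[best] > threshold:
        return ids[best], float(probs[best])
    return None, float(probs[best])
```

Only when the winning probability clears the threshold would the driving characterization data be associated with that identifier.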
SPOOFING ATTACK DETECTION DURING LIVE IMAGE CAPTURE
In general, one innovative aspect of the subject matter described in this specification can be embodied in a computer-implemented method. The method includes detecting, by an imaging device, the presence of an object to be imaged. The method further includes measuring, by the imaging device, a first characteristic of the object to be imaged, and measuring, by the imaging device, a second characteristic of the object to be imaged. The method further includes determining, by a computing device, that at least one of the first characteristic of the object or the second characteristic of the object exceeds a threshold; and in response to the determining, indicating, by the computing device, whether the object to be imaged is a spoofed object or an actual object.
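The decision logic is a threshold test over two measured characteristics. A minimal sketch, assuming (as the abstract leaves open) that exceeding the threshold on either characteristic indicates a spoof; the characteristic values and threshold are illustrative:

```python
def classify_capture(first_char: float, second_char: float, threshold: float) -> str:
    """Flag a capture as spoofed when either measured characteristic
    exceeds the threshold (assumed interpretation of the abstract)."""
    if first_char > threshold or second_char > threshold:
        return "spoofed"
    return "actual"
```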
Systems for authenticating digital contents
A system for authenticating digital contents includes a computing platform having a hardware processor and a memory storing a software code. According to one implementation, the hardware processor executes the software code to receive digital content, identify an image of a person depicted in the digital content, determine an ear shape parameter of the person depicted in the image, determine another biometric parameter of the person depicted in the image, and calculate a ratio of the ear shape parameter of the person depicted in the image to the biometric parameter of the person depicted in the image. The hardware processor is also configured to execute the software code to perform a comparison of the calculated ratio with a predetermined value, and determine whether the person depicted in the image is an authentic depiction of the person based on the comparison of the calculated ratio with the predetermined value.
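The authenticity test reduces to comparing a ratio of two biometric measurements against a predetermined value. A minimal sketch; the tolerance parameter is an assumption, since the patent specifies only "a comparison":

```python
def is_authentic(ear_shape: float, other_biometric: float,
                 expected_ratio: float, tolerance: float = 0.05) -> bool:
    """Compare the ear-shape/biometric ratio to a predetermined value.

    `tolerance` is an illustrative assumption; the abstract does not say
    how close the calculated ratio must be to the stored one.
    """
    ratio = ear_shape / other_biometric
    return abs(ratio - expected_ratio) <= tolerance
```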
Face super-resolution realization method and apparatus, electronic device and storage medium
The present application discloses a face super-resolution realization method and apparatus, an electronic device and a storage medium, and relates to the fields of face image processing and deep learning. The specific implementation solution is as follows: a face part in a first image is extracted; the face part is input into a pre-trained face super-resolution model to obtain a super-sharp face image; a semantic segmentation image corresponding to the super-sharp face image is acquired; and the face part in the first image is replaced with the super-sharp face image, by utilizing the semantic segmentation image, to obtain a face super-resolution image.
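The final step pastes the super-sharp face back into the original image, gated by the semantic segmentation mask so that only face pixels are replaced. A minimal sketch of that compositing step (not the super-resolution model itself), assuming a 3-channel image, a mask with values in [0, 1], and a hypothetical `top_left` placement of the face crop:

```python
import numpy as np

def paste_face(image, sr_face, mask, top_left):
    """Blend a super-resolved face into `image` using a segmentation mask.

    mask == 1 keeps the super-resolved pixel; mask == 0 keeps the original.
    """
    top, left = top_left
    h, w = sr_face.shape[:2]
    region = image[top:top + h, left:left + w].astype(float)
    # Per-pixel linear blend gated by the segmentation mask.
    blended = mask[..., None] * sr_face + (1.0 - mask[..., None]) * region
    image[top:top + h, left:left + w] = blended.astype(image.dtype)
    return image
```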
PASSAGE PERMIT DEVICE, SYSTEM, METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM
A passage permit device includes at least one memory storing instructions, and at least one processor. The at least one processor is configured to execute the instructions to: acquire image data including a user's face image captured by a predetermined imaging device; cause an authentication device configured to store facial feature information about the user to perform face authentication on the image data; acquire, if the face authentication is successful, proof information about the user subjected to the face authentication from a storage device configured to store the proof information related to measures taken by the user to prevent infection; determine whether or not to permit the user to pass based on the proof information associated with the user; and output a result of the determination to a predetermined terminal device corresponding to the imaging device.
Occlusion Detection
An occlusion detection model training method is provided. The training method includes the following steps: constructing a plurality of pieces of training sample data, where the training sample data includes a first face image added with an occlusion object, coordinate values of a first key point in the first face image, and occlusion information of the first key point; and using the first face image as input data, and using the coordinate values of the first key point and the occlusion information of the first key point as output data, to train an occlusion detection model, so that the occlusion detection model outputs, based on any input second face image, coordinate values of a second key point included in the second face image and an occlusion probability of the second key point.
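Constructing a training sample means pasting an occlusion object onto a face image and labeling which key points it covers. A minimal sketch, assuming a grayscale image, a solid rectangular occluder, and key points given as (x, y) pixel coordinates (all illustrative simplifications of the claimed training data):

```python
import numpy as np

def make_sample(face, keypoints, occ_box):
    """Add a synthetic occluder and label the key points it covers.

    occ_box = (top, left, height, width) of an opaque rectangle (assumption).
    Returns (occluded image, keypoint coordinates, per-keypoint occlusion flags).
    """
    t, l, h, w = occ_box
    img = face.copy()
    img[t:t + h, l:l + w] = 0  # paste the occluder as a solid black patch
    occluded = [(t <= y < t + h and l <= x < l + w) for (x, y) in keypoints]
    return img, keypoints, occluded
```

The image is the model's input; the coordinates and occlusion flags are the supervision targets.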
SYSTEM AND METHOD FOR CHARACTERIZING DROOPY EYELID
Embodiments pertain to a method for characterizing a droopy upper eyelid performed on a computer having a processor, memory, and one or more code sets stored in the memory and executed in the processor. The method may comprise capturing an image of a patient's facial features comprising an eye and a droopy upper eyelid; identifying at least one geometric feature of a pupil of the eye within the image; and determining, based on the at least one geometric feature, whether the droopy upper eyelid is vision-impairing or not, or whether the droopy upper eyelid is more likely vision-impairing than not.
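One way the pupil's geometry can drive the determination is by how far the lid margin descends over the pupil. A minimal sketch under stated assumptions: image y-coordinates increase downward, the pupil is modeled as a circle, and a lid covering half the pupil (i.e. reaching its center) is treated as vision-impairing; the 0.5 cutoff is illustrative, not from the patent:

```python
def eyelid_assessment(pupil_center_y: float, pupil_radius: float,
                      lid_margin_y: float) -> str:
    """Classify droopiness from pupil geometry (illustrative criterion)."""
    pupil_top = pupil_center_y - pupil_radius
    # Fraction of the pupil's vertical extent covered by the lid.
    coverage = (lid_margin_y - pupil_top) / (2.0 * pupil_radius)
    coverage = max(0.0, min(1.0, coverage))
    return "vision impairing" if coverage >= 0.5 else "not vision impairing"
```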
GAMING ACTIVITY MONITORING SYSTEMS AND METHODS
Embodiments relate to systems, methods and computer readable media for gaming monitoring. In particular, embodiments process images to determine presence of a gaming object on a gaming table in the images. Embodiments estimate postures of one or more players in the images and based on the estimated postures determine a target player associated with the gaming object among the one or more players.
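Associating a detected gaming object with a player from estimated postures can be reduced, in the simplest case, to a nearest-keypoint test. A minimal sketch in which each player's posture is represented only by a wrist key point (an illustrative stand-in for full posture estimation):

```python
import math

def target_player(object_xy, player_wrists):
    """Pick the player whose wrist key point is closest to the gaming object.

    player_wrists: dict mapping (hypothetical) player IDs to (x, y) wrists.
    """
    def dist(pid):
        x, y = player_wrists[pid]
        return math.hypot(x - object_xy[0], y - object_xy[1])
    return min(player_wrists, key=dist)
```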