Patent classifications
G06V40/00
Display module and display device
A display module includes: a liquid crystal module, a cover plate, and a texture recognition unit. The texture recognition unit includes a first light source and a texture sensing module. The first light source is located at a side of the cover plate proximate to the liquid crystal module, and is configured to emit invisible light. The texture sensing module is located at a side of the liquid crystal module facing away from the cover plate. A light wavelength range of light allowed to pass through the cover plate and the liquid crystal module includes a light wavelength range of the invisible light. The texture sensing module is configured to collect reflected light after the invisible light is irradiated to a target object, so as to identify a texture of the target object.
Systems and Methods of User Identification Verification
Systems and methods for user identification (ID) document verification are provided. An exemplary method includes receiving, by a client device, an image of an ID document. Based on the image of the ID document, a determination is made whether the ID document includes a near-field communications (NFC) chip that stores an ID photo associated with the ID document. Based on this determination of whether the ID document includes an NFC chip, the ID document is verified by selectively using at least one of NFC chip authentication and optical authentication, to obtain a verification result.
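The selective verification step described in this abstract can be sketched as a simple branch: prefer chip-based authentication when the document type indicates an embedded NFC chip, otherwise fall back to optical checks. All function names, the document-type lookup, and the fallback-on-inconclusive behavior below are illustrative assumptions, not the patent's actual implementation.

```python
def detect_nfc_chip(id_image: dict) -> bool:
    """Infer from the captured document image whether an NFC chip is present.
    Assumption: a simple document-type lookup stands in for image analysis."""
    return id_image.get("doc_type") in {"e-passport", "eID"}

def verify_via_nfc(id_image: dict) -> str:
    # Placeholder for cryptographic chip authentication.
    return "verified" if id_image.get("chip_readable", True) else "inconclusive"

def verify_via_optics(id_image: dict) -> str:
    # Placeholder for optical security-feature checks.
    return "verified" if id_image.get("security_features_ok", True) else "rejected"

def verify_id_document(id_image: dict) -> str:
    """Selectively apply NFC chip authentication and/or optical authentication."""
    if detect_nfc_chip(id_image):
        result = verify_via_nfc(id_image)
        if result == "inconclusive":
            # Chip read failed: fall back to optical verification.
            result = verify_via_optics(id_image)
        return result
    return verify_via_optics(id_image)
```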
System, method, and computer program product for real-time evaluation of psychological and physiological states using embedded sensors of a mobile device
A system including: a mobile device with at least one sensor that collects data about a user based on the movement of the user in possession of the mobile device; a movement feature analyzer configured to detect an out-of-the-ordinary movement pattern made by the user; a physiological/psychological state classifier configured to classify a physiological/psychological state of the user based on the out-of-the-ordinary movement detected by the movement feature analyzer, and to report at least one of a magnitude and a level of the physiological/psychological state experienced by the user based on at least one of data collected about the motion experienced by the mobile device and data collected about the geographic location of the mobile device; and a notification device that provides notification that the user is experiencing a physiological/psychological state at the reported level and/or magnitude.
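One common way to flag an "out-of-the-ordinary" movement feature, consistent with the analyzer described above, is a z-score test against the user's baseline. The threshold and baseline representation here are assumptions for illustration; the patent does not specify the detection statistic.

```python
import statistics

def is_out_of_ordinary(baseline_samples, new_value, z_threshold=3.0):
    """Flag a movement-feature value as out-of-the-ordinary when it lies
    more than z_threshold standard deviations from the user's baseline."""
    mean = statistics.fmean(baseline_samples)
    stdev = statistics.pstdev(baseline_samples)
    if stdev == 0:
        # Degenerate baseline: any deviation at all is anomalous.
        return new_value != mean
    return abs(new_value - mean) / stdev > z_threshold
```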
CHARACTER ANIMATIONS IN A VIRTUAL ENVIRONMENT BASED ON RECONSTRUCTED THREE-DIMENSIONAL MOTION DATA
Methods, systems, and apparatus, including medium-encoded computer program products, for providing editable keyframe-based animation data for applying to a character to animate motion of the character in three-dimensional space. Three-dimensional motion data is constructed from two-dimensional videos. The three-dimensional motion data represents movement of people in the two-dimensional videos and includes, for each person, a root of a three-dimensional skeleton of the person. The three-dimensional skeleton comprises multiple three-dimensional poses of the person during at least a portion of frames of a video from the two-dimensional videos. The three-dimensional motion data is converted into editable keyframe-based animation data in three-dimensional space and provided to animate motion.
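The conversion from per-frame 3D motion data to editable keyframe-based animation data can be sketched as sampling the reconstructed pose sequence at key intervals. The sampling strategy and data shapes below are assumptions; production systems typically also fit interpolation curves between keys.

```python
def poses_to_keyframes(poses, fps: float, key_interval: int = 5):
    """Convert a per-frame sequence of 3D poses into (time, pose) keyframes
    by sampling every key_interval-th frame (an assumed, simple conversion)."""
    return [(i / fps, pose) for i, pose in enumerate(poses) if i % key_interval == 0]
```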
Automated vending machine with customer and identification authentication
Implementations include actions of receiving consumer-specific data and ID-specific data from an identification presented by a consumer to a vending machine, processing at least a portion of the ID-specific data to determine one or more of whether the identification is unexpired and whether the identification is authentic, and serving the consumer from the vending machine at least partially in response to determining that the identification is unexpired and that the identification is authentic and determining that the consumer is authentic relative to the identification.
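The serving decision described in this abstract reduces to a conjunction of three checks: the ID is unexpired, the ID is authentic, and the consumer matches the presented identification. The field names and boolean inputs below are assumptions standing in for the actual ID-specific data processing.

```python
from datetime import date

def may_serve(id_data: dict, consumer_matches_id: bool, today: date) -> bool:
    """Serve the consumer only when the identification is unexpired and
    authentic, and the consumer is authentic relative to the identification."""
    unexpired = id_data["expiry"] >= today
    return unexpired and id_data["authentic"] and consumer_matches_id
```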
METHOD AND APPARATUS FOR ACQUIRING OBJECT'S ATTENTION INFORMATION AND ELECTRONIC DEVICE
A method for acquiring an object's attention information includes: displaying a three-dimensional (3D) virtual image in a virtual scene, the 3D virtual image being a 3D image corresponding to a virtual object in the virtual scene mapped from a physical object; determining a target object of the 3D virtual image displayed in a display area; and based on the target object, determining an attention focus object of the 3D virtual image, the attention focus object belonging to at least a part of the 3D virtual image.
Contactless fingerprint capture using artificial intelligence and image processing on integrated camera systems
A fingerprinting solution that uses neural network (NN) based trained Machine Learning (ML) modules in combination with traditional image processing for contactless fingerprint capture, liveness detection to rule out fake fingers, and fingerprint matching using a portable handheld device with an integrated camera, thereby eliminating the need for a special device dedicated to fingerprinting. The trained NN modules detect the size and direction of fingers in the captured image, check whether fingers are reversed in the image (making nails visible), check whether the thumb of the correct hand is captured, and generate fixed-length fingerprint templates for subsequent matching of fingerprints. A three-dimensional (3D) depth map of the finger is used to bring the fingerprint resolution to 500 dpi and to eliminate distortion caused by the curvature of the finger, improving accuracy while scaling and flattening a fingerprint image. The solution facilitates contactless-to-contactless as well as contactless-to-contact fingerprint matching.
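The 500 dpi normalization mentioned above amounts to computing the image's effective pixel density at the finger's depth and resampling to the standard fingerprint resolution (500 dpi ≈ 19.685 pixels/mm). The pinhole-camera model and function names below are assumptions used to illustrate the arithmetic, not the patent's implementation.

```python
def pixels_per_mm(focal_length_px: float, depth_mm: float) -> float:
    """Under a pinhole-camera assumption, an object at depth_mm projects
    at focal_length_px / depth_mm pixels per millimetre of object size."""
    return focal_length_px / depth_mm

def scale_to_500dpi(pixels_per_mm_at_depth: float) -> float:
    """Resampling factor that brings the finger image to 500 dpi,
    the standard resolution for fingerprint matching."""
    target_ppmm = 500 / 25.4  # 500 dots per inch, 25.4 mm per inch
    return target_ppmm / pixels_per_mm_at_depth
```

A finger imaged at 150 mm by a camera with a 3000-pixel focal length projects at 20 px/mm, so the image is downscaled slightly (factor ≈ 0.98) to hit 500 dpi.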
DETERMINATION METHOD, NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING DETERMINATION PROGRAM, AND INFORMATION PROCESSING DEVICE
In a determination method, a computer executes processing including: generating, when face image data is acquired, denoised face image data by removing noise from the acquired face image data with a specific algorithm; generating difference image data representing the difference between the acquired face image data and the generated face image data; determining whether or not the acquired face image data is a composite image based on information included in the difference image data; and, in a case where the acquired face image data is not determined to be a composite image, determining whether or not it is a composite image based on information included in frequency data generated from the difference image data.
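The two-stage decision described in this abstract can be sketched as: inspect the difference image (acquired minus denoised) directly, and only if that test is inconclusive, inspect its frequency spectrum for periodic compositing artifacts. The thresholds, statistics, and use of a 2D FFT below are illustrative assumptions.

```python
import numpy as np

def is_composite(acquired: np.ndarray, denoised: np.ndarray,
                 spatial_thresh: float = 10.0, freq_thresh: float = 0.25) -> bool:
    """Two-stage composite-image check on the acquired/denoised pair."""
    diff = acquired.astype(np.float64) - denoised.astype(np.float64)
    # Stage 1: strong spatial residue suggests pasted/composited regions.
    if np.abs(diff).mean() > spatial_thresh:
        return True
    # Stage 2: periodic artifacts appear as energy peaks in the spectrum.
    spectrum = np.abs(np.fft.fft2(diff))
    spectrum[0, 0] = 0.0  # ignore the DC component
    return spectrum.max() > freq_thresh * spectrum.size
```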
INFORMATION PROCESSING METHOD AND DEVICE, AND STORAGE MEDIUM
An information processing method and device, and a storage medium are provided. The method includes: obtaining first input information, the first input information including at least an image containing a target object (101); obtaining, based on the first input information, captured images of the target object that are captured by an image acquisition device within a time period from N seconds before a target time point until N seconds after the target time point, the target time point being the time point at which the image acquisition device captures the target object (102); determining one or more companions of the target object from the captured images (103); and acquiring a companion identifying result by analyzing the one or more companions based on aggregated profile data, each person in the aggregated profile data corresponding to a unique profile (104).
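The ±N-second retrieval window in step (102) can be sketched as a simple timestamp filter over captured frames. The `(timestamp, frame_id)` representation is an assumption for illustration.

```python
from datetime import datetime, timedelta

def frames_in_window(frames, target_time: datetime, n_seconds: float):
    """Select frames captured within [target - N s, target + N s] around the
    moment the target object was captured. `frames` is an assumed list of
    (timestamp, frame_id) pairs."""
    lo = target_time - timedelta(seconds=n_seconds)
    hi = target_time + timedelta(seconds=n_seconds)
    return [fid for ts, fid in frames if lo <= ts <= hi]
```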