Patent classifications
G06V40/16
STORAGE MEDIUM, DETERMINATION DEVICE, AND DETERMINATION METHOD
A non-transitory computer-readable storage medium storing a determination program that causes at least one computer to execute a process, the process including: acquiring a group of captured images that include a face to which markers are attached; selecting, from a plurality of patterns that indicate transitions of the marker positions, a first pattern that corresponds to a time-series change in the marker positions across consecutive images in the group; and determining an occurrence intensity of an action based on a determination criterion for the action, the criterion being determined from the first pattern and the marker positions in a captured image that follows the consecutive images in the group.
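The pattern selection and intensity scoring described in this abstract can be sketched roughly as follows. All function names, the distance metric, and the example patterns are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch: match the time-series movement of facial markers
# against candidate transition patterns, then score action intensity
# against the selected pattern's criterion.
from math import dist

def select_pattern(marker_track, patterns):
    """Pick the pattern whose transition best matches the observed track.

    marker_track: list of (x, y) marker positions over consecutive frames.
    patterns: dict mapping pattern name -> reference list of (x, y) positions.
    """
    def track_error(ref):
        return sum(dist(p, q) for p, q in zip(marker_track, ref))
    return min(patterns, key=lambda name: track_error(patterns[name]))

def action_intensity(position, rest_position, full_position):
    """Scale marker displacement between rest and the pattern's maximum."""
    span = dist(rest_position, full_position) or 1.0
    return min(dist(rest_position, position) / span, 1.0)

track = [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0)]
patterns = {
    "brow_raise": [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0)],
    "brow_lower": [(0.0, 0.0), (0.0, -1.0), (0.0, -2.0)],
}
best = select_pattern(track, patterns)                        # "brow_raise"
level = action_intensity((0.0, 1.0), (0.0, 0.0), (0.0, 2.0))  # 0.5
```

A later frame's marker position is compared against the selected pattern's rest and full-motion positions to grade how strongly the action occurs.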
SKINCARE AND FACIAL SCANNING SYSTEMS AND METHODS FOR PROVIDING SKINCARE DEVICE CONNECTIVITY AND SETTING CONFIGURATIONS
Skincare and facial scanning systems and methods are disclosed herein for providing skincare device connectivity and setting configurations. A skincare device comprising a sensor is configured to scan, and to deposit a cosmetic ink composition onto, the skin of a user. A skincare application (app), communicatively coupled to the skincare device, generates a user-specific electronic analysis based on use of the skincare device on skin of a face portion of the user, wherein at least a portion of the user-specific electronic analysis is configured for display on a graphical user interface. In some aspects, the skincare device comprises an identification certificate configured to uniquely identify the skincare device to the skincare app upon a connection between the skincare device and the skincare app. The connection may be a persistent connection maintaining connectivity between the skincare device and the skincare app for a plurality of uses of the skincare device.
BIOMETRIC GALLERY MANAGEMENT USING WIRELESS IDENTIFIERS
Biometric gallery management is performed by associating one or more wireless identifiers, corresponding to one or more mobile devices that people carry (such as smart phones, tablet computing devices, cellular telephones, wearable devices, smart watches, fitness monitors, digital media players, medical devices, and/or other mobile computing devices), with digital representations of biometrics corresponding to those people. Wireless identifiers of mobile devices proximate to a biometric reader device may be monitored. Upon detection of wireless identifiers corresponding to mobile devices proximate to the biometric reader device, the associated digital representations of biometrics may be loaded from a main gallery into one or more local galleries, which may then be used to perform one or more biometric identifications and/or verifications.
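The main-gallery-to-local-gallery step can be sketched as below. The gallery structure and field names are assumptions for illustration; the point is that identification then searches only templates of people whose devices are actually nearby.

```python
# Illustrative sketch (names assumed): when a wireless identifier is seen
# near the biometric reader, copy the associated biometric templates from
# the main gallery into a small local gallery.
MAIN_GALLERY = {
    "alice": {"device_ids": {"aa:11"}, "template": [0.1, 0.9]},
    "bob":   {"device_ids": {"bb:22"}, "template": [0.8, 0.2]},
}

def build_local_gallery(detected_device_ids):
    """Load only the biometric templates whose owner's device is nearby."""
    detected = set(detected_device_ids)
    return {
        person: record["template"]
        for person, record in MAIN_GALLERY.items()
        if record["device_ids"] & detected
    }

# Only Alice's device identifier was detected near the reader.
local = build_local_gallery(["aa:11", "cc:33"])
```

Restricting the search set this way shrinks the candidate pool before any biometric matching is attempted.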
PROJECTION ON A VEHICLE WINDOW
A system includes a camera aimed externally to a vehicle, a window of the vehicle, a projector positioned to project on the window, and a computer communicatively coupled to the camera and the projector. The computer is programmed to, upon receiving data from the camera indicating a first person outside the vehicle, instruct the projector to project an image on the window depicting a second person inside the vehicle.
DATA OBTAINING METHOD AND APPARATUS
A first frame of time-of-flight (TOF) data, including projection-off data and infrared data, is obtained. After determining that the infrared data contains a data block in which the number of data points with values greater than a first threshold exceeds a second threshold, TOF data for generating a first frame of a TOF image is obtained based on a difference between the infrared data and the projection-off data. Because a data block in which the number of data points exceeding the first threshold is greater than the second threshold is an overexposed data block, and the projection-off data is TOF data acquired by a TOF camera with its light source off, subtracting the projection-off data from the infrared data corrects the overexposure, improving the quality of the first frame of the TOF image.
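The detection-and-subtraction logic above can be sketched as follows, with illustrative threshold values (the actual thresholds and block layout are not specified in the abstract).

```python
# Rough sketch of the described correction. A block is flagged overexposed
# when the count of pixels above FIRST_THRESHOLD exceeds SECOND_THRESHOLD;
# the projection-off frame (light source off, ambient only) is then
# subtracted from the infrared frame.
FIRST_THRESHOLD = 250    # per-pixel value marking saturation (assumed)
SECOND_THRESHOLD = 2     # pixel count above which a block is overexposed

def is_overexposed(block):
    return sum(1 for v in block if v > FIRST_THRESHOLD) > SECOND_THRESHOLD

def correct_frame(infrared, projection_off):
    """Subtract the projection-off frame to remove ambient overexposure."""
    return [max(ir - off, 0) for ir, off in zip(infrared, projection_off)]

infrared       = [255, 255, 255, 120]   # one flattened data block
projection_off = [200, 190, 180, 10]    # captured with the light source off

corrected = infrared
if is_overexposed(infrared):
    corrected = correct_frame(infrared, projection_off)
# corrected -> [55, 65, 75, 110]
```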
APPARATUS AND METHOD WITH IMAGE RECOGNITION-BASED SECURITY
An apparatus and a method with image recognition-based security are disclosed. For an unlocked terminal, the method tracks a face detected in a previous frame and detects a background region change between the previous frame and a current frame based on the region of the tracked face. When no background region change is detected, the method determines whether a state maintenance time has reached a preset time. In response to the state maintenance time failing to meet the preset time, the operation mode is determined to be a first operation mode, which checks whether recognition succeeds for the current frame: face detection is performed on the current frame, and when a face is detected, the unlocked state of the terminal is maintained for the current frame, representing that recognition succeeded.
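The decision flow for keeping the terminal unlocked can be sketched as below. The preset time, function names, and the fallback behavior are illustrative assumptions based on the abstract.

```python
# Hedged sketch of the unlocked-state logic: when no background change is
# detected and the state maintenance time has not yet reached the preset
# limit, run lightweight face detection on the current frame and keep the
# terminal unlocked only if a face is found.
PRESET_TIME = 5.0  # seconds (assumed value)

def keep_unlocked(background_changed, maintenance_time, face_detector, frame):
    if background_changed or maintenance_time >= PRESET_TIME:
        return False  # fall back to a stricter recognition mode (assumed)
    # First operation mode: face detection on the current frame.
    return face_detector(frame)

def always_face(frame):
    """Stand-in detector for illustration: always reports a face."""
    return True

result = keep_unlocked(False, 2.0, always_face, frame="frame-0")  # True
```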
SYSTEMS AND METHODS FOR EVALUATING HEALTH OUTCOMES
A system and method for determining a health outcome, comprising: receiving first and second images or videos of a wound of a patient; comparing the images or videos to detect a characteristic of the wound, including an identification of a change in the wound; receiving at least one non-image, non-video data input that includes data about the patient; executing a machine learning algorithm, trained on a dataset of images or videos, to analyze the identified change in the wound, to correlate the first and second images or videos with the non-image, non-video data input, and to further train the algorithm with the identified change in the wound; and generating a medical outcome prediction regarding the status and recovery of the patient in response to correlating the non-image, non-video data input with the first and second images or videos.
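A minimal sketch of combining an image-derived change measure with non-image patient data is shown below. The change metric, the scoring weights, and the stand-in for the trained model are all assumptions for illustration, not the claimed algorithm.

```python
# Illustrative sketch: derive a wound-change feature by differencing two
# images, then join it with non-image patient data in a stand-in scoring
# function in place of the trained machine learning model.
def wound_change(first_image, second_image):
    """Mean absolute pixel difference as a crude change measure."""
    diffs = [abs(a - b) for a, b in zip(first_image, second_image)]
    return sum(diffs) / len(diffs)

def predict_outcome(change, patient):
    """Stand-in for the trained model: purely illustrative linear score."""
    score = 1.0 - 0.01 * change - 0.005 * patient["age"]
    return "recovering" if score > 0.5 else "at risk"

change = wound_change([10, 12, 14], [10, 10, 10])   # 2.0
status = predict_outcome(change, {"age": 40})       # "recovering"
```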
Messaging system with augmented reality makeup
Systems, methods, and computer-readable media for a messaging system with augmented reality (AR) makeup are presented. Methods include processing a first image to extract a makeup portion representing the makeup in the first image, and training a neural network to process images of people to add AR makeup representing that makeup. The methods may further include receiving, via a messaging application implemented by one or more processors of a user device, input that indicates a selection to add the AR makeup to a second image of a second person. The methods may further include processing the second image with the neural network to add the AR makeup to the second image and causing the second image with the AR makeup to be displayed on a display device of the user device.
Person replacement utilizing deferred neural rendering
Techniques are disclosed for performing video synthesis of audiovisual content. In an example, a computing system may determine first parameters of a face and body of a source person from a first frame in a video shot. The system also determines second parameters of a face and body of a target person. The system determines that the target person is a replacement for the source person in the first frame. The system generates third parameters of the target person based on merging the first parameters with the second parameters. The system then performs deferred neural rendering of the target person based on a neural texture that corresponds to a texture space of the video shot. The system then outputs a second frame that shows the target person as the replacement for the source person.
Facial synchronization utilizing deferred neural rendering
Techniques are disclosed for performing video synthesis of audiovisual content. In an example, a computing system may determine first facial parameters of a face of a particular person from a first frame in a video shot, whereby the video shot shows the particular person speaking a message. The system may determine second facial parameters based on an audio file that corresponds to the message being spoken in a different way from the video shot. The system may generate third facial parameters by merging the first and the second facial parameters. The system may identify a region of the face that is associated with a difference between the first and second facial parameters, render the region of the face based on a neural texture of the video shot, and then output a new frame showing the face of the particular person speaking the message in the different way.
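The parameter-merging step common to the two deferred-neural-rendering abstracts above can be sketched as follows. The flat parameter vectors, the region grouping, the blend weight, and the difference threshold are assumptions for illustration; the actual systems use learned face/body models and neural textures.

```python
# Sketch: blend two sets of facial parameters into a third set, and flag
# the face regions whose parameters differ enough to need re-rendering.
REGIONS = {"mouth": [0, 1], "eyes": [2, 3]}  # index groups (assumed)

def merge_parameters(first, second, weight=0.5):
    """Third parameters = elementwise blend of the first two sets."""
    return [(1 - weight) * a + weight * b for a, b in zip(first, second)]

def regions_to_render(first, second, threshold=0.1):
    """Regions where any parameter moved more than the threshold."""
    return sorted(
        name for name, idxs in REGIONS.items()
        if any(abs(first[i] - second[i]) > threshold for i in idxs)
    )

first  = [0.0, 0.2, 0.5, 0.5]   # e.g. parameters from the video frame
second = [0.6, 0.2, 0.5, 0.5]   # e.g. parameters derived from the audio
merged = merge_parameters(first, second)   # [0.3, 0.2, 0.5, 0.5]
dirty  = regions_to_render(first, second)  # ["mouth"]
```

Only the flagged regions would then be re-rendered against the shot's neural texture, leaving the rest of the frame untouched.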