Patent classifications
G06V40/161
Associating three-dimensional coordinates with two-dimensional feature points
An example method includes causing a light projecting system of a distance sensor to project a three-dimensional pattern of light onto an object, the pattern comprising a plurality of points of light that collectively form the pattern; causing a light receiving system of the distance sensor to acquire an image of the three-dimensional pattern of light projected onto the object; causing the light receiving system to acquire a two-dimensional image of the object; detecting a feature point in the two-dimensional image of the object; identifying an interpolation area for the feature point; and computing three-dimensional coordinates for the feature point by interpolating using the three-dimensional coordinates of two points of the plurality of points that lie within the interpolation area.
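A minimal Python sketch of the interpolation step, assuming the pattern dots carry both their 2D image locations and their already-triangulated 3D coordinates; the square window size and the inverse-distance weighting are illustrative assumptions, not specifics from the patent.

```python
# Hypothetical interpolation of 3D coordinates for a 2D feature point
# from two nearby structured-light pattern dots (assumed data layout).
import math

def interpolate_feature_depth(feature_xy, pattern_points, window=20.0):
    """feature_xy     : (x, y) pixel location of the detected feature.
    pattern_points : list of ((x, y), (X, Y, Z)) pairs, i.e. projected
                     pattern dots with known 3D coordinates.
    window         : half-size in pixels of the square interpolation area.
    """
    fx, fy = feature_xy
    # Keep only pattern dots that fall inside the interpolation area.
    nearby = [(p2d, p3d) for p2d, p3d in pattern_points
              if abs(p2d[0] - fx) <= window and abs(p2d[1] - fy) <= window]
    if len(nearby) < 2:
        return None  # not enough pattern dots to interpolate

    # Take the two dots closest to the feature point in the 2D image.
    nearby.sort(key=lambda p: math.dist(p[0], feature_xy))
    (p_a, xyz_a), (p_b, xyz_b) = nearby[:2]

    # Inverse-distance weighting between the two dots' 3D coordinates.
    da, db = math.dist(p_a, feature_xy), math.dist(p_b, feature_xy)
    wa = db / (da + db) if (da + db) > 0 else 0.5  # closer dot weighs more
    return tuple(wa * a + (1.0 - wa) * b for a, b in zip(xyz_a, xyz_b))
```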
Systems and methods for reconstruction and rendering of viewpoint-adaptive three-dimensional (3D) personas
An exemplary method includes maintaining a receiver-side mesh-vertices list; receiving duplicative-vertex information from a sender and responsively reducing the receiver-side mesh-vertices list in accordance with the received duplicative-vertex information; and rendering, using the reduced receiver-side mesh-vertices list, viewpoint-adaptive three-dimensional (3D) personas of a subject, at least in part by weighting video pixel colors from the different vantage points of the video cameras that capture video streams of the subject, the weighting being performed according to the respective geometric relationship of each video-camera vantage point to a user-selected viewpoint.
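A minimal sketch of the color-weighting step for one surface point, assuming a simple cosine-of-angle weighting between each camera's vantage direction and the user-selected viewpoint direction; the abstract does not specify the exact geometric weighting, so this scheme is an illustration.

```python
# Hypothetical viewpoint-adaptive blending of per-camera pixel colors.
import numpy as np

def blend_pixel_colors(colors, camera_dirs, view_dir):
    """colors      : (N, 3) array, pixel color seen by each of N cameras.
    camera_dirs : (N, 3) unit vectors from the surface point to each camera.
    view_dir    : (3,) unit vector from the surface point to the
                  user-selected viewpoint.
    """
    # Cameras viewing the point from near the chosen viewpoint get high
    # weights; cameras facing away (negative cosine) get zero.
    cos = np.clip(camera_dirs @ view_dir, 0.0, None)
    if cos.sum() == 0.0:
        return colors.mean(axis=0)   # degenerate case: plain average
    weights = cos / cos.sum()
    return weights @ colors          # weighted sum over cameras

colors = np.array([[200, 30, 30], [30, 200, 30], [30, 30, 200]], float)
dirs = np.array([[1.0, 0, 0], [0, 1.0, 0], [0.7071, 0.7071, 0]])
print(blend_pixel_colors(colors, dirs, np.array([1.0, 0.0, 0.0])))
```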
Automated video editing
A method of generating a modified video file using one or more processors is disclosed. The method comprises detecting objects represented in an original video file using computer-vision object-detection techniques; determining object motion characteristics for the detected objects; selecting a corresponding audio or visual effect when a specific object motion characteristic for a specific detected object meets certain requirements; and applying the selected effect to the original video file to create the modified video file.
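A minimal sketch of the effect-selection step; the speed threshold, the per-object dictionary layout, and the effect names are assumptions for illustration, since the abstract does not name the specific requirements or effects.

```python
# Hypothetical mapping from object motion characteristics to effects.
def select_effects(detected_objects, speed_threshold=50.0):
    """detected_objects : list of dicts such as
        {"label": "car", "speed_px_per_s": 120.0, "direction": "left"}
    Returns (object, effect) pairs for objects whose motion meets the
    requirement; downstream code would apply each effect to the video.
    """
    effects = []
    for obj in detected_objects:
        if obj["speed_px_per_s"] >= speed_threshold:
            # Fast-moving object: pair a motion-blur visual effect with
            # a "whoosh" audio cue (illustrative effect names).
            effects.append((obj, "motion_blur+whoosh"))
    return effects

print(select_effects([{"label": "car", "speed_px_per_s": 120.0,
                       "direction": "left"}]))
```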
Depth estimation using biometric data
A method of generating a depth estimate based on biometric data starts with a server receiving positioning data from a first device associated with a first user. The first device generates the positioning data based on an analysis of a data stream comprising images of a second user who is associated with a second device. The server then receives biometric data of the second user from the second device; the biometric data is based on output from a sensor or a camera included in the second device. The server then determines a distance of the second user from the first device using the positioning data and the biometric data of the second user. Other embodiments are described herein.
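A minimal sketch of one way such a distance determination could work, assuming the biometric data supplies a real-world measurement (here, an interpupillary distance in meters) and the positioning data supplies the same quantity in pixels plus the camera focal length; these specifics are assumptions for illustration.

```python
# Hypothetical pinhole-camera distance estimate from a known biometric:
# Z = f * X_real / x_pixels.
def estimate_distance_m(focal_px, ipd_px, ipd_m=0.063):
    """focal_px : camera focal length in pixels (from positioning data).
    ipd_px   : measured eye separation of the second user in pixels.
    ipd_m    : real interpupillary distance from the biometric data.
    """
    return focal_px * ipd_m / ipd_px

# Example: a 1000 px focal length and a 70 px measured eye separation
# imply the second user is roughly 0.9 m from the first device.
print(estimate_distance_m(focal_px=1000.0, ipd_px=70.0))
```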
Method and apparatus for authenticating a user of a computing device
A system for authenticating a user attempting to access a computing device or a software application executing thereon. A data storage device stores one or more digital images or frames of video of face(s) of authorized user(s) of the device. The system subsequently receives from a first video camera one or more digital images or frames of video of a face of the user attempting to access the device and compares the image of the face of the user attempting to access the device with the stored image of the face of the authorized user of the device. To ensure the received video of the face of the user attempting to access the device is a real-time video of that user, and not a forgery, the system further receives a first photoplethysmogram (PPG) obtained from a first body part (e.g., a face) of the user attempting to access the device, receives a second PPG obtained from a second body part (e.g., a fingertip) of the user attempting to access the device, and compares the first PPG with the second PPG. The system authenticates the user attempting to access the device based on a successful comparison of (e.g., correlation between, consistency of) the first PPG and the second PPG and based on a successful comparison of the image of the face of the user attempting to access the device with the stored image of the face of the authorized user of the device.
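A minimal sketch of the PPG comparison step, using a normalized cross-correlation with a small lag search; the patent only requires a "successful comparison (e.g., correlation between, consistency of)" the two signals, so the metric, lag range, and threshold here are assumptions.

```python
# Hypothetical liveness check: a genuine user's face and fingertip PPGs
# are driven by the same heartbeat, so they correlate at some small lag.
import numpy as np

def ppg_signals_match(ppg_face, ppg_finger, max_lag=25, threshold=0.8):
    """ppg_face, ppg_finger : equal-length 1-D sample arrays.
    Returns True when the best correlation over +/- max_lag samples
    meets the threshold.
    """
    a = (ppg_face - ppg_face.mean()) / ppg_face.std()
    b = (ppg_finger - ppg_finger.mean()) / ppg_finger.std()
    best = max(
        np.corrcoef(a[lag:], b[:len(b) - lag])[0, 1] if lag >= 0
        else np.corrcoef(a[:len(a) + lag], b[-lag:])[0, 1]
        for lag in range(-max_lag, max_lag + 1)
    )
    return best >= threshold
```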
INFORMATION PROCESSING METHOD AND ELECTRONIC DEVICE
An information processing method and an electronic device are provided. The method is performed by a first wearable device that includes a first image collector, and includes: obtaining a second face image using the first image collector and receiving a first face image from a second wearable device when the first wearable device and the second wearable device are in a preset positional relationship; and processing first target information from the second wearable device when the first face image matches the second face image.
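A minimal sketch of the match-then-process gate, assuming the two face images are compared as feature embeddings under a cosine-similarity threshold; the embedding representation and the threshold value are illustrative, not from the patent.

```python
# Hypothetical face-match gate for processing the second device's message.
import numpy as np

def process_if_match(emb_first, emb_second, target_info, threshold=0.75):
    """emb_first  : embedding of the face image received from the second
                 wearable device.
    emb_second : embedding of the face image captured by the first
                 device's image collector.
    Processes target_info only when the two faces match.
    """
    cos = np.dot(emb_first, emb_second) / (
        np.linalg.norm(emb_first) * np.linalg.norm(emb_second))
    if cos >= threshold:
        return f"processed: {target_info}"   # e.g. display or store it
    return None                              # mismatch: ignore the message
```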
FACE IMAGE PROCESSING METHOD AND APPARATUS, FACE IMAGE DISPLAY METHOD AND APPARATUS, AND DEVICE
A face image processing method and apparatus, a face image display method and apparatus, and a device are provided, belonging to the technical field of image processing. The method includes: acquiring a first face image of a person; invoking an age change model to predict a texture difference map of the first face image at a specified age, the texture difference map reflecting the texture difference between the face texture in the first face image and the face texture of a second face image of the person at the specified age; and performing image processing on the first face image based on the texture difference map to obtain the second face image.
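A minimal sketch of the final compositing step, assuming the texture difference map is a signed per-pixel offset that is added to the first face image; the abstract only states that the second image is obtained from the first image and the map, so this representation is an assumption.

```python
# Hypothetical application of a predicted texture difference map.
import numpy as np

def apply_texture_difference(face_rgb_u8, diff_map_f32):
    """face_rgb_u8 : (H, W, 3) uint8 first face image.
    diff_map_f32: (H, W, 3) float32 signed texture difference at the
                  specified age, as predicted by the age change model.
    Returns the second face image (the face at the specified age).
    """
    aged = face_rgb_u8.astype(np.float32) + diff_map_f32
    return np.clip(aged, 0, 255).astype(np.uint8)
```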
SOUND EFFECT ADJUSTMENT
A sound effect adjustment method is provided. In the method, a video frame and an audio signal of a corresponding time unit of a target video are obtained. A sound source orientation and a sound source distance of a sound source object in the video frame are determined, along with scene information corresponding to the video frame. The audio signal is filtered based on the sound source orientation and the sound source distance, and an echo coefficient is determined according to the scene information. An audio signal with an adjusted sound effect is then generated based on the filtered audio signal and the echo coefficient.
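A minimal sketch of the adjustment pipeline: attenuate by distance, pan by orientation, then mix in an echo scaled by the scene-derived echo coefficient. The inverse-distance gain, the sine panning law, and the fixed echo delay are assumptions for illustration.

```python
# Hypothetical distance/orientation filtering plus echo mixing.
import numpy as np

def adjust_audio(mono, sr, azimuth_rad, distance_m, echo_coeff,
                 echo_delay_s=0.08):
    """mono : 1-D float array of samples; returns an (N, 2) stereo array."""
    gain = 1.0 / max(distance_m, 1.0)          # farther source -> quieter
    pan = (np.sin(azimuth_rad) + 1.0) / 2.0    # 0 = full left, 1 = right
    left = mono * gain * (1.0 - pan)
    right = mono * gain * pan

    # Echo: add a delayed copy of each channel scaled by the coefficient.
    d = int(echo_delay_s * sr)
    for ch in (left, right):
        ch[d:] += echo_coeff * ch[:len(ch) - d]
    return np.stack([left, right], axis=1)
```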
IN-VEHICLE USER POSITIONING METHOD, IN-VEHICLE INTERACTION METHOD, VEHICLE-MOUNTED APPARATUS, AND VEHICLE
This application provides an in-vehicle user positioning method, an in-vehicle interaction method, a vehicle-mounted apparatus, and a vehicle. In an example, the in-vehicle user positioning method includes: obtaining a sound signal collected by an in-vehicle microphone; in response to a first voice command being recognized from the sound signal, determining a first user who issued the first voice command; and determining the in-vehicle location of the first user based on a mapping relationship between in-vehicle users and in-vehicle locations.
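A minimal sketch of the positioning step, assuming the user-to-seat mapping was established earlier (e.g., when each occupant was identified at boarding); the command-recognition and speaker-identification functions are stand-ins passed in by the caller, not APIs from the application.

```python
# Hypothetical lookup of the commanding user's in-vehicle location.
SEAT_BY_USER = {"alice": "driver", "bob": "rear-left"}  # assumed mapping

def locate_commander(sound_signal, recognize_command, identify_speaker):
    """Return the in-vehicle location of the user who issued a command."""
    command = recognize_command(sound_signal)    # e.g. "open my window"
    if command is None:
        return None                              # no command recognized
    user = identify_speaker(sound_signal)        # e.g. voiceprint match
    return SEAT_BY_USER.get(user)                # seat from the mapping
```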
FACE AUTHENTICATION APPARATUS, CONTROL METHOD AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM THEREFOR, FACE AUTHENTICATION GATE APPARATUS, AND CONTROL METHOD AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM THEREFOR
A face authentication apparatus (50) includes an image generation unit (102) that generates a first image by capturing an image of a person; a control unit (104) that, when the first image does not satisfy a criterion for face collation, controls lighting and causes the image generation unit (102) to generate a second image by capturing an image of the person again; and a face authentication unit (52) that executes face authentication using the second image.
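A minimal sketch of the capture-retry control flow; the quality criterion and the lighting interface are assumptions, since the abstract only requires re-capturing under controlled lighting when the first image fails the face-collation criterion.

```python
# Hypothetical retry-with-lighting loop for face authentication capture.
def capture_for_collation(capture_image, set_lighting, meets_criterion):
    """capture_image   : callable returning one camera frame.
    set_lighting    : callable controlling the illumination.
    meets_criterion : callable judging suitability for face collation
                      (e.g. face bright and sharp enough -- assumed).
    Returns an image suitable for face authentication.
    """
    first = capture_image()
    if meets_criterion(first):
        return first                 # first image already passes
    set_lighting(level="high")       # control the lighting, then retry
    return capture_image()           # generate the second image
```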