G06V40/164

Face retrieval method and apparatus, server, and storage medium

A face retrieval method is applied to a face retrieval system and performed by a computing device, the face retrieval system including a retrieval device cluster, the retrieval device cluster including at least one node. The method includes acquiring a face image, parsing the face image to obtain a first facial feature, and generating a first retrieval instruction according to the first facial feature, the first retrieval instruction carrying the first facial feature. The method further includes selecting a first node from the retrieval device cluster according to a load balancing rule, the first node including a first retrieval server. Finally, the method includes transmitting the first retrieval instruction to the first retrieval server, triggering the first retrieval server to execute the instruction, retrieve the first facial feature, and obtain a first retrieval result.
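The load balancing rule itself is not specified in the abstract. As one illustrative sketch (not the claimed rule), the computing device could pick the least-loaded node in the cluster and break ties deterministically with a stable hash of the feature identifier; the `Node` structure and the load metric here are assumptions:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    load: float  # current load fraction, 0.0 (idle) to 1.0 (saturated)

def select_node(nodes, feature_id):
    """Pick the least-loaded node; break ties by a stable hash of the feature ID."""
    min_load = min(n.load for n in nodes)
    candidates = [n for n in nodes if n.load == min_load]
    h = int(hashlib.sha256(feature_id.encode()).hexdigest(), 16)
    return candidates[h % len(candidates)]
```

Using a stable hash rather than random choice makes repeated retrievals of the same feature land on the same server, which helps cache locality.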

OBJECT TRACKING ANIMATED FIGURE SYSTEMS AND METHODS

An animation system includes an animated figure, multiple sensors, and an animation controller that includes a processor and a memory. The memory stores instructions executable by the processor. The instructions cause the animation controller to receive guest detection data from the multiple sensors, receive shiny object detection data from the multiple sensors, determine an animation sequence of the animated figure based on the guest detection data and shiny object detection data, and transmit a control signal indicative of the animation sequence to cause the animated figure to execute the animation sequence. The guest detection data is indicative of a presence of a guest near the animated figure. The animation sequence is responsive to a shiny object detected on or near the guest based on the guest detection data and the shiny object detection data.
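The controller's sequence determination reduces to a small decision over the two detection inputs. A hypothetical mapping (the sequence names are invented for illustration; the patented controller may combine the sensor data in other ways):

```python
def choose_animation(guest_present, shiny_object_detected):
    """Map the two detection signals to an animation sequence name."""
    if guest_present and shiny_object_detected:
        return "reach_for_object"  # sequence responsive to the shiny object
    if guest_present:
        return "greet"
    return "idle"
```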

FACE DETECTION METHOD, APPARATUS, AND DEVICE, AND TRAINING METHOD, APPARATUS, AND DEVICE FOR IMAGE DETECTION NEURAL NETWORK

A face detection method includes: acquiring a target image; invoking a face detection network, and processing the target image by using a feature extraction structure of the face detection network, to obtain original feature maps corresponding to the target image; the original feature maps having different resolutions; processing the original feature maps by using a feature enhancement structure of the face detection network, to obtain an enhanced feature map corresponding to each original feature map; the feature enhancement structure being obtained by searching a search space, and the search space used for searching the feature enhancement structure being determined based on a detection objective of the face detection network and a processing object of the feature enhancement structure; and processing the enhanced feature map by using a detection structure of the face detection network, to obtain a face detection result of the target image.
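The feature enhancement structure itself is obtained by architecture search, but its role, turning original feature maps of different resolutions into enhanced maps, can be illustrated with a fixed FPN-style top-down merge (an assumption; the searched structure need not take this form):

```python
import numpy as np

def enhance(feature_maps):
    """Top-down enhancement: 2x-upsample each coarser map and add it to the
    next finer one, producing one enhanced map per original map."""
    enhanced = [feature_maps[-1]]  # coarsest map passes through unchanged
    for fm in reversed(feature_maps[:-1]):
        up = np.kron(enhanced[0], np.ones((2, 2)))  # nearest-neighbour upsample
        enhanced.insert(0, fm + up[:fm.shape[0], :fm.shape[1]])
    return enhanced
```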

TECHNIQUES FOR AUTOMATICALLY EXTRACTING COMPELLING PORTIONS OF A MEDIA CONTENT ITEM
20220277564 · 2022-09-01

In various embodiments, a clip application computes a set of appearance values for an appearance metric based on shot sequences associated with a media content item. Each appearance value in the set of appearance values indicates a prevalence of a first character in a different shot sequence associated with the media content item. The clip application then performs one or more clustering operations on the shot sequences based on the set of appearance values to generate a first shot cluster. Subsequently, the clip application generates a clip for the media content item based on the first shot cluster. The clip application transmits the clip to an endpoint device for display. Advantageously, relative to primarily manual approaches, the clip application can more efficiently and reliably generate clips for media content items.
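One simple way to cluster shot sequences on a one-dimensional appearance metric is to group consecutive shots whose appearance value for the character stays above a threshold. This greedy sketch stands in for the clip application's unspecified clustering operations; the threshold is an assumed parameter:

```python
def cluster_shots(appearance, threshold=0.5):
    """Group indices of consecutive shots whose appearance value >= threshold."""
    clusters, current = [], []
    for i, value in enumerate(appearance):
        if value >= threshold:
            current.append(i)
        elif current:
            clusters.append(current)
            current = []
    if current:
        clusters.append(current)
    return clusters
```

A clip for the character could then be cut from the shot indices in the first (or largest) cluster.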

RGB-NIR DUAL CAMERA FACE ANTI-SPOOFING METHOD

A method of face anti-spoofing comprises: receiving a near-infrared facial image having a near-infrared channel; receiving a red-green-blue facial image having a red channel, a green channel, and a blue channel; generating a synthetic three-channel image based on the near-infrared channel, the red channel, the green channel, and the blue channel; and training a deep neural network based on the synthetic three-channel image.
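The abstract does not fix how the four input channels are combined into three. One plausible fusion, keeping the NIR channel, an RGB luminance, and a chroma-like red-blue difference, could look like this (the mixing weights are assumptions, not the patented scheme):

```python
import numpy as np

def make_synthetic(nir, r, g, b):
    """Fuse one NIR channel and three RGB channels into a 3-channel image."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # BT.601 luminance from RGB
    return np.stack([nir, y, r - b], axis=-1)
```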

Systems and Methods for 3D Facial Modeling
20220254105 · 2022-08-11

In an embodiment, a 3D facial modeling system includes a plurality of cameras configured to capture images from different viewpoints, a processor, and a memory containing a 3D facial modeling application and parameters defining a face detector, wherein the 3D facial modeling application directs the processor to obtain a plurality of images of a face captured from different viewpoints using the plurality of cameras, locate a face within each of the plurality of images using the face detector, wherein the face detector labels key feature points on the located face within each of the plurality of images, determine disparity between corresponding key feature points of located faces within the plurality of images, and generate a 3D model of the face using depths derived from the determined disparities.
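Once disparity between corresponding keypoints is known, depth follows from the standard stereo relation depth = f·B/d for a calibrated camera pair (illustrative only; a real multi-view system may refine this, e.g. with bundle adjustment):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth (metres) of a keypoint from its disparity (pixels), given the
    focal length in pixels and the camera baseline in metres."""
    return focal_px * baseline_m / disparity_px
```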

Contextual masking of objects in social photographs

According to at least one embodiment, a method, computer system, and computer program product for contextually masking visual elements in a photograph are provided. The present invention may include receiving privacy preferences from one or more users, identifying individuals in a photograph, constructing a ruleset based on the privacy preferences of the identified individuals within the photograph, and, based on the ruleset, masking one or more visual elements within the photograph from the view of a viewer.
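Constructing the ruleset can be as simple as taking the union of each identified individual's masking preferences (the preference schema and names here are hypothetical; the embodiment may weigh rules in more elaborate ways):

```python
def build_ruleset(prefs, identified):
    """Union of the masking rules of everyone identified in the photograph.
    People with no stored preferences contribute no rules."""
    rules = set()
    for person in identified:
        rules |= set(prefs.get(person, []))
    return rules
```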

Feature amount management apparatus and method
11386705 · 2022-07-12

According to one embodiment, a feature amount management apparatus includes a data generation unit, an ID generation unit, a storage unit, and a deletion unit. The data generation unit generates, from an image, feature amount data indicating a feature amount of biometric information of a person. The ID generation unit generates identification information including expiration date information used for determining an expiration date of the feature amount data. The storage unit stores the feature amount data in correlation with the identification information. The deletion unit deletes the feature amount data when the feature amount data passes its expiration date.
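Embedding the expiration date in the identification information lets the deletion unit test expiry without a separate lookup. A sketch with an assumed ID format (ISO date prefix plus serial number; the actual encoding is not given in the abstract):

```python
from datetime import date

def make_feature_id(serial, expiry):
    """Identification information embedding the expiration date."""
    return f"{expiry.isoformat()}-{serial:08d}"

def is_expired(feature_id, today):
    """Read the embedded date back out and compare it with today."""
    return date.fromisoformat(feature_id[:10]) < today
```

A deletion pass then only needs to scan stored IDs and drop any for which `is_expired` returns true.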

Hygienic device interaction in retail environments

A device includes a display, a camera, a memory storing a software code, and a hardware processor configured to execute the software code to: configure the device to be in a first mode; receive, from the camera, camera data of an environment surrounding the device; determine that a person is present in the environment based on the camera data; determine that the person is facing the display based on the camera data; and transition the device from the first mode to a second mode, in response to determining that the person is facing the display. The display displays a first content when the device is in the first mode, and displays a second content different from the first content when the device is in the second mode. The second content is configured to provide information about the device to the person without requiring the person to touch the device.
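The mode logic is a two-state machine driven by the presence and facing determinations. A minimal sketch (the transition back to the first mode when the person leaves is an assumption; the abstract only specifies the forward transition):

```python
def next_mode(mode, person_present, person_facing):
    """Advance the device mode from the camera-derived determinations."""
    if mode == "first" and person_present and person_facing:
        return "second"  # show informational content, touch-free
    if mode == "second" and not person_present:
        return "first"   # assumed reset once the person leaves
    return mode
```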

APPARATUS AND METHOD FOR DETECTING FACIAL POSE, IMAGE PROCESSING SYSTEM, AND STORAGE MEDIUM

The present disclosure provides an apparatus and a method for detecting a facial pose, an image processing system, and a storage medium. The apparatus comprises: an obtaining unit to obtain at least three keypoints of at least one face from an input image based on a pre-generated neural network, wherein coordinates of the keypoints obtained via a layer in the neural network for obtaining coordinates are three-dimensional coordinates; and a determining unit to determine, for the at least one face, a pose of the face based on the obtained keypoints, wherein the determined facial pose includes at least an angle. According to the present disclosure, the accuracy of the three-dimensional coordinates of the facial keypoints can be improved, and thus the detection precision of a facial pose can be improved.
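With three 3-D keypoints, a pose angle can be read from the normal of the plane they span. A sketch of one such determination (the keypoint choice and the yaw convention are assumptions; the disclosed determining unit may compute other angles as well):

```python
import math

def face_normal(p0, p1, p2):
    """Unit normal of the plane through three 3-D keypoints
    (e.g. the two eye centres and the nose tip)."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    return [c / norm for c in n]

def yaw_degrees(normal):
    """Yaw angle of the face: rotation of the normal about the vertical axis."""
    return math.degrees(math.atan2(normal[0], normal[2]))
```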