Patent classifications
G06V10/806
IMAGE FEATURE COMBINATION FOR IMAGE-BASED OBJECT RECOGNITION
Methods, systems, and articles of manufacture to improve image recognition searching are disclosed. In some embodiments, a first document image of a known object is used to generate one or more other document images of the same object by applying one or more techniques for synthetically generating images. The synthetically generated images correspond to different variations in conditions under which a potential query image might be captured. Extracted features from an initial image of a known object and features extracted from the one or more synthetically generated images are stored, along with their locations, as part of a common model of the known object. In other embodiments, image recognition search effectiveness is improved by transforming the location of features of multiple images of a same known object into a common coordinate system. This can enhance the accuracy of certain aspects of existing image search/recognition techniques including, for example, geometric verification.
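The abstract gives no algorithm, but the common-coordinate-system idea can be illustrated with a minimal numpy sketch. The assumptions here are that the synthetic view is produced by a known similarity transform (rotation plus scale) and that keypoint locations, not descriptors, are what is mapped back; the function names are illustrative only.

```python
import numpy as np

def synthesize_view(points, angle_deg, scale):
    """Apply a known similarity transform (rotation + scale) to keypoint
    locations, standing in for a synthetically generated view of the object."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    transform = scale * rot
    return points @ transform.T, transform

def to_common_frame(points, transform):
    """Map keypoint locations from a synthetic view back into the
    coordinate system of the original document image."""
    return points @ np.linalg.inv(transform).T

# Keypoints detected in the original image of the known object.
original_pts = np.array([[10.0, 20.0], [35.0, 5.0], [50.0, 40.0]])

# Generate a synthetic view and "detect" the same keypoints there.
synthetic_pts, T = synthesize_view(original_pts, angle_deg=30, scale=1.2)

# Mapping the synthetic-view locations into the common coordinate system
# recovers the original locations, so both feature sets can live in one
# model and be used together for geometric verification.
recovered = to_common_frame(synthetic_pts, T)
print(np.allclose(recovered, original_pts))  # True
```

Storing all feature locations in one frame is what lets geometric verification treat features from many synthetic views as if they came from a single image.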
Fine-Motion Virtual-Reality or Augmented-Reality Control Using Radar
This document describes techniques for fine-motion virtual-reality or augmented-reality control using radar. These techniques enable small motions and displacements to be tracked, even in the millimeter or sub-millimeter scale, for user control actions even when those actions are small, fast, or obscured due to darkness or varying light. Further, these techniques enable fine resolution and real-time control, unlike conventional RF-tracking or optical-tracking techniques.
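Sub-millimeter displacement tracking with radar typically relies on the phase of the reflected signal rather than on range resolution. The sketch below shows that standard relationship, d = delta-phi times lambda over 4 pi; the 60 GHz carrier is an assumption for illustration, not a figure from the patent.

```python
import numpy as np

C = 3e8                    # speed of light, m/s
F_RADAR = 60e9             # assumed carrier frequency (60 GHz), Hz
WAVELENGTH = C / F_RADAR   # = 5 mm

def displacement_from_phase(phase_prev, phase_curr):
    """Radial displacement between two radar returns from the phase change
    of the reflected signal: d = dphi * wavelength / (4*pi).  The factor
    4*pi (not 2*pi) accounts for the two-way travel of the wave."""
    dphi = np.angle(np.exp(1j * (phase_curr - phase_prev)))  # wrap to (-pi, pi]
    return dphi * WAVELENGTH / (4 * np.pi)

# A 1 mm finger motion changes the round-trip phase by 4*pi*d/wavelength.
true_d = 1e-3  # 1 mm
phase_change = 4 * np.pi * true_d / WAVELENGTH
print(displacement_from_phase(0.0, phase_change) * 1e3)  # ~ 1.0 (mm)
```

Because a full 2-pi phase cycle corresponds to only half a wavelength of motion, displacements of a fraction of a millimeter produce easily measurable phase shifts, which is what makes fine-motion control feasible.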
SPOOFING DETECTION APPARATUS, SPOOFING DETECTION METHOD, AND COMPUTER-READABLE RECORDING MEDIUM
A spoofing detection apparatus performs operations comprising: obtaining, from an image capture apparatus, a first image frame that includes the face of a subject person, captured while a light-emitting apparatus is emitting light, and a second image frame that includes the face of the subject person, captured while the light-emitting apparatus is turned off; extracting, from each of the first and second image frames, information specifying a face portion of the subject person; extracting, from the first image frame, a portion that includes a bright point formed by reflection in an iris region of an eye of the subject person, and extracting the corresponding portion from the second image frame; calculating a feature that is independent of the position of the bright point; and determining the authenticity of the subject person based on the feature.
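The underlying cue can be sketched without any of the patent's specifics: a live cornea produces a specular bright point only while the light is on, while a printed photo or screen replay does not. The ratio test and threshold below are assumptions for illustration; using the flash/no-flash difference image keeps the cue loosely independent of where the bright point falls, echoing the abstract's position-independent feature.

```python
import numpy as np

def corneal_reflection_present(iris_on, iris_off, ratio_thresh=3.0):
    """Compare a flash frame and a no-flash frame of the same iris region.
    A strong isolated peak in the difference image indicates a specular
    bright point that appears only under illumination (a liveness cue)."""
    diff = iris_on.astype(float) - iris_off.astype(float)
    peak = diff.max()
    baseline = np.median(np.abs(diff)) + 1e-6
    return peak / baseline > ratio_thresh

rng = np.random.default_rng(0)
iris_off = rng.integers(40, 60, size=(32, 32))  # dim iris texture, light off
iris_on_live = iris_off.copy()
iris_on_live[15:17, 15:17] = 255                # specular glint on a live eye
iris_on_spoof = iris_off + 1                    # replay: brighter, no glint

print(corneal_reflection_present(iris_on_live, iris_off))   # True
print(corneal_reflection_present(iris_on_spoof, iris_off))  # False
```

A flat photograph raises brightness roughly uniformly under the flash, so its difference image has no isolated peak and the test fails, which is the authenticity decision the abstract describes.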
METHOD FOR OUTPUTTING, COMPUTER-READABLE RECORDING MEDIUM STORING OUTPUT PROGRAM, AND OUTPUT DEVICE
A computer-implemented outputting method including: generating a correction vector that corrects a vector based on information of a first modal on the basis of correlation between the vector based on the information of the first modal and a vector based on information of a second modal; combining the generated correction vector with the vector based on the information of the first modal; compressing the combined vector based on the information of the first modal according to a predetermined rule; performing normalization processing for the compressed vector based on the information of the first modal; and outputting a vector obtained by the normalization processing.
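The five claimed steps form a pipeline that can be sketched end to end. Everything concrete below is an assumption: the correction vector is built as the second-modal vector scaled by the cosine correlation, and the "predetermined rule" for compression is a fixed random projection; the patent specifies neither.

```python
import numpy as np

rng = np.random.default_rng(7)

def output_vector(v1, v2, proj, eps=1e-12):
    """Sketch of the claimed pipeline for a first-modal vector v1 and a
    second-modal vector v2."""
    # 1. correction vector: v2 scaled by the correlation between v1 and v2
    corr = float(np.dot(v1, v2) /
                 (np.linalg.norm(v1) * np.linalg.norm(v2) + eps))
    correction = corr * v2
    # 2. combine the correction vector with the first-modal vector
    combined = v1 + correction
    # 3. compress according to a predetermined rule (fixed projection matrix)
    compressed = proj @ combined
    # 4. normalization processing, then output
    return compressed / (np.linalg.norm(compressed) + eps)

v_image = rng.normal(size=128)     # first modal, e.g. an image embedding
v_text = rng.normal(size=128)      # second modal, e.g. a text embedding
proj = rng.normal(size=(32, 128))  # fixed compression rule

out = output_vector(v_image, v_text, proj)
print(out.shape, round(float(np.linalg.norm(out)), 6))  # (32,) 1.0
```

The output is a compact unit-length vector, the usual form for downstream similarity search over cross-modal embeddings.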
METHOD FOR GENERATING IMAGE LABEL, AND DEVICE
Provided is a method for generating an image label, including: acquiring a partial image of a target image after acquiring the target image with a label to be generated; then, acquiring a plurality of features based on the target image and the partial image, wherein the plurality of features include a first feature of the target image and a second feature of the partial image; and finally, generating a first-type image label of the target image based on the first feature and the second feature.
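The claimed flow — whole image, partial image, one feature from each, label from both — can be sketched with a toy stand-in for the trained model. The mean-pooling features, the crop-box convention, and the brightness classifier are all illustrative assumptions.

```python
import numpy as np

def generate_label(target, crop_box, classify, pool=np.mean):
    """Acquire a partial image of the target image, pool a first feature
    from the whole image and a second feature from the crop, and hand both
    to a classifier to produce the first-type label."""
    y0, y1, x0, x1 = crop_box
    partial = target[y0:y1, x0:x1]
    first_feature = pool(target, axis=(0, 1))    # global feature
    second_feature = pool(partial, axis=(0, 1))  # local feature
    return classify(np.concatenate([first_feature, second_feature]))

# Toy classifier standing in for a trained labeling model (assumption).
def brightness_label(feat):
    return "bright" if feat.mean() > 0.5 else "dark"

img = np.full((64, 64, 3), 0.8)
print(generate_label(img, (16, 48, 16, 48), brightness_label))  # bright
```

Pairing a global feature with a local one lets the label reflect both overall context and a salient region, which is the point of using the partial image at all.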
METHODS OF PERFORMING REAL-TIME OBJECT DETECTION USING OBJECT REAL-TIME DETECTION MODEL, PERFORMANCE OPTIMIZATION METHODS OF OBJECT REAL-TIME DETECTION MODEL, ELECTRONIC DEVICES AND COMPUTER READABLE STORAGE MEDIA
The present disclosure relates to a method of performing real-time object detection using an object real-time detection model, and to a performance optimization method for such a model. According to an embodiment, the method of performing real-time object detection using an object real-time detection model includes: obtaining an identification image of a preset size by pre-processing an input image; obtaining object central point data and object size data by processing the identification image using the object real-time detection model; and obtaining an object detection result by determining an object region in the input image according to the object central point data and the object size data.
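The decoding step — turning central-point data and size data into object regions — can be sketched directly; this is the same output format used by center-based detectors generally. The thresholding scheme and map shapes are assumptions, since the abstract does not fix them.

```python
import numpy as np

def decode_detections(center_heatmap, size_map, score_thresh=0.5):
    """Decode per-pixel object-center scores and per-pixel (w, h) size
    predictions into boxes: every center above the threshold becomes one
    box (x0, y0, x1, y1, score) around that point."""
    boxes = []
    ys, xs = np.where(center_heatmap > score_thresh)
    for y, x in zip(ys, xs):
        w, h = size_map[y, x]
        boxes.append((float(x - w / 2), float(y - h / 2),
                      float(x + w / 2), float(y + h / 2),
                      float(center_heatmap[y, x])))
    return boxes

heat = np.zeros((16, 16))
heat[8, 8] = 0.9             # one confident object center
sizes = np.zeros((16, 16, 2))
sizes[8, 8] = (6.0, 4.0)     # predicted width, height at that center

print(decode_detections(heat, sizes))
# [(5.0, 6.0, 11.0, 10.0, 0.9)]
```

Because each detection is read off at a single center location, decoding stays cheap, which is what makes the real-time claim plausible.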
Method and Apparatus for Generating Reenacted Image
A method of generating a reenacted image includes: extracting a landmark from each of a driver image and a target image; generating a driver feature map based on pose information and expression information of a first face shown in the driver image; generating a target feature map and a pose-normalized target feature map based on style information of a second face shown in the target image; generating a mixed feature map by using the driver feature map and the target feature map; and generating the reenacted image by using the mixed feature map and the pose-normalized target feature map.
Autonomous thinking pattern generator
An autonomous thinking pattern generator is provided, including a pattern converter configured to convert input information, including image information, sound information, or language, into patterns; a pattern recorder configured to record the patterns; a pattern controller configured to set and change the patterns and to form connective relations between them; and an information analyzer configured to evaluate the value of the input information. The pattern recorder autonomously records the patterns corresponding to input information that the information analyzer determines to be worthwhile.
FACIAL EXPRESSION RECOGNITION METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
A facial expression recognition method includes extracting a first feature from color information of pixels in a first image, and extracting a second feature of facial key points from the first image. The method further includes combining the first feature and the second feature to obtain a fused feature, and determining, by processing circuitry of an electronic device, a first expression of the face based on the fused feature.
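The fusion step can be sketched minimally. The abstract says only "combining"; concatenation is an assumption, as are the feature sizes and the idea that the key points are flattened coordinates.

```python
import numpy as np

def fused_expression_feature(color_feature, landmarks):
    """Combine a first feature (from pixel color information) with a second
    feature (from facial key points) into one fused feature, here by
    flattening the landmark coordinates and concatenating."""
    second_feature = landmarks.ravel().astype(float)
    return np.concatenate([color_feature, second_feature])

color_feat = np.ones(64)                              # e.g. a CNN feature
keypoints = np.array([[30, 40], [50, 40], [40, 60]])  # (x, y) key points
fused = fused_expression_feature(color_feat, keypoints)
print(fused.shape)  # (70,)
```

The appeal of this kind of fusion is that appearance (color) and geometry (key points) fail differently, so a classifier over the fused feature sees complementary evidence about the expression.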
3D object detection method based on multi-view feature fusion of 4D RaDAR and LiDAR point clouds
A 3D object detection method based on multi-view feature fusion of 4D RaDAR and LiDAR point clouds includes simultaneously acquiring RaDAR point cloud data and LiDAR point cloud data; and inputting the RaDAR point cloud data and the LiDAR point cloud data into a pre-established and trained RaDAR and LiDAR fusion network and outputting a 3D object detection result, wherein the RaDAR and LiDAR fusion network is configured to learn interaction information of a LiDAR and a RaDAR from a bird's eye view and a perspective view, respectively, and concatenate the interaction information to achieve fusion of the RaDAR point cloud data and the LiDAR point cloud data. The method can combine advantages of RaDAR and LiDAR, while avoiding disadvantages of the two modalities as much as possible to obtain a better 3D object detection result.
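The bird's-eye-view half of the fusion can be sketched with plain occupancy grids: each point cloud is binned into a top-down grid, and the two grids are concatenated along the channel axis, mirroring the concatenation-style fusion the abstract describes. The grid size, extent, and the use of occupancy instead of learned interaction features are all assumptions.

```python
import numpy as np

def bev_occupancy(points, grid=(8, 8), extent=20.0):
    """Scatter a point cloud into a bird's-eye-view occupancy grid:
    x/y coordinates in [-extent, extent] metres are binned, z is ignored."""
    gx, gy = grid
    ix = np.clip(((points[:, 0] + extent) / (2 * extent) * gx).astype(int),
                 0, gx - 1)
    iy = np.clip(((points[:, 1] + extent) / (2 * extent) * gy).astype(int),
                 0, gy - 1)
    bev = np.zeros(grid)
    bev[iy, ix] = 1.0
    return bev

# Toy clouds: LiDAR is dense; 4D RaDAR is sparse but robust to weather.
rng = np.random.default_rng(1)
lidar_pts = rng.uniform(-20, 20, size=(200, 3))
radar_pts = rng.uniform(-20, 20, size=(30, 3))

# Per-modality BEV features concatenated along the channel axis; a fusion
# network would consume this tensor to produce 3D detections.
fused_bev = np.stack([bev_occupancy(lidar_pts), bev_occupancy(radar_pts)],
                     axis=-1)
print(fused_bev.shape)  # (8, 8, 2)
```

A perspective-view branch would do the analogous binning in image-plane coordinates, giving the network the second of the two views the abstract mentions.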