Patent classifications
G06V10/809
Vision analysis and validation system for improved inspection in robotic assembly
A vision analytics and validation (VAV) system for providing improved inspection of robotic assembly, the VAV system comprising a trained neural-network three-way classifier to classify each component as good, bad, or do not know, and an operator station configured to enable an operator to review an output of the trained neural network and to determine whether a board including one or more components classified as “bad” or “do not know” passes review and is classified as good, or fails review and is classified as bad. In one embodiment, the system further comprises a retraining trigger configured to utilize the output of the operator station to retrain the trained neural network, based on the determination received from the operator station.
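The three-way gating and review routing described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the confidence thresholds, function names, and the use of a single "probability of good" score are all assumptions.

```python
# Hypothetical sketch: a model confidence is thresholded into three classes,
# and any board holding a non-"good" component is routed to operator review.
GOOD_THRESHOLD = 0.9  # illustrative values, not from the patent
BAD_THRESHOLD = 0.9

def classify_component(p_good: float) -> str:
    """Map a model confidence (probability the component is good) to a class."""
    if p_good >= GOOD_THRESHOLD:
        return "good"
    if p_good <= 1.0 - BAD_THRESHOLD:
        return "bad"
    return "do not know"

def board_needs_review(component_probs) -> bool:
    """A board passes automatically only if every component is 'good'."""
    return any(classify_component(p) != "good" for p in component_probs)
```

The operator's pass/fail determination on reviewed boards would then be logged alongside these predictions, giving the retraining trigger labeled examples to retrain on.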
METHOD AND APPARATUS WITH ADAPTIVE OBJECT TRACKING
Disclosed is a method and apparatus for adaptive tracking of a target object. The method includes estimating a dynamic characteristic of an object in an input image based on frames of the input image, determining a size of a crop region for a current frame of the input image based on the dynamic characteristic of the object, generating a cropped image by cropping the current frame based on the size of the crop region, and generating a result of tracking the object for the current frame using the cropped image.
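One way to realize the speed-adaptive crop sizing described above is sketched below, assuming the "dynamic characteristic" is inter-frame displacement. The gain, base size, and clamping are illustrative choices, not the patent's parameters.

```python
def estimate_speed(prev_center, curr_center):
    """Dynamic characteristic: pixel displacement of the object between frames."""
    dx = curr_center[0] - prev_center[0]
    dy = curr_center[1] - prev_center[1]
    return (dx * dx + dy * dy) ** 0.5

def crop_size(base_size: int, speed: float, gain: float = 2.0, max_size: int = 512) -> int:
    """Grow the crop region with object speed so a fast mover stays inside it."""
    return min(max_size, int(base_size + gain * speed))

def crop_window(frame_hw, center, size):
    """Return the crop window (x0, y0, x1, y1) clamped to the frame bounds."""
    h, w = frame_hw
    half = size // 2
    x0, y0 = max(0, center[0] - half), max(0, center[1] - half)
    x1, y1 = min(w, center[0] + half), min(h, center[1] + half)
    return x0, y0, x1, y1
```

A slow object gets a tight crop (cheap to process), while a fast one gets a larger crop so the tracker does not lose it between frames.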
System for identifying a defined object
A system and method for identifying a defined object (e.g., a hazard), comprising: a sensor that detects an object and defines a digital representation of it; a processor (connected to the sensor) that executes two techniques to identify a signature of the defined object; and a memory (connected to the processor) storing reference data relating to two signatures derived, respectively, by the two techniques. Responsive to receiving the digital representation from the sensor, the processor executes the two techniques, each technique assessing the digital representation to identify any signature candidate defined by the object, deriving feature data from each identified signature candidate, comparing the feature data to the reference data, and deriving a likelihood value of the signature candidate corresponding with the respective signature. The processor then combines the likelihood values to derive a composite likelihood value and thus determines whether the object in the digital representation is the defined object.
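The final fusion step can be sketched as below. The patent does not specify how the likelihoods are combined; a weighted mean and the 0.7 decision threshold are assumptions for illustration.

```python
def composite_likelihood(likelihoods, weights=None) -> float:
    """Combine per-technique likelihood values into one composite score
    (a weighted mean here; the actual combination rule is unspecified)."""
    if weights is None:
        weights = [1.0] * len(likelihoods)
    total = sum(w * l for w, l in zip(weights, likelihoods))
    return total / sum(weights)

def is_defined_object(likelihoods, threshold: float = 0.7) -> bool:
    """Decide whether the digital representation contains the defined object."""
    return composite_likelihood(likelihoods) >= threshold
```

Running two independent techniques and fusing their scores means a single technique's false positive is damped unless the other technique agrees.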
Method, device, apparatus and storage medium for facial matching
The present disclosure provides a method, a device, an apparatus and a storage medium for facial matching, wherein the method includes: acquiring an image to be matched; conducting matching for the image to be matched based on at least one of an original sample database and an associative sample database; and outputting a final matching result, wherein the original sample database includes an original sample image, and the associative sample database includes an associative sample image which is formed by adding an associative feature to the original sample image. Herein, obtaining the original sample database and the associative sample database comprises: acquiring the original sample image; obtaining the original sample database based on the original sample image; and adding the associative feature to the original sample image in the original sample database to generate the associative sample image, thereby obtaining the associative sample database.
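The two-database lookup can be sketched as below. The cascade order (original first, associative as fallback), the dot-product similarity, and the threshold are all illustrative assumptions; the patent only states that matching uses at least one of the two databases.

```python
def dot(a, b):
    """Similarity between two (assumed unit-normalized) feature vectors."""
    return sum(x * y for x, y in zip(a, b))

def match(image_feat, original_db, associative_db, threshold=0.8):
    """Try the original samples first; fall back to associative samples
    (e.g. the same face with glasses or other added features)."""
    def best(db):
        return max(((name, dot(image_feat, feat)) for name, feat in db.items()),
                   key=lambda kv: kv[1], default=(None, 0.0))
    name, score = best(original_db)
    if score >= threshold:
        return name
    name, score = best(associative_db)
    return name if score >= threshold else None
```

The associative database lets a probe image that differs from enrollment by an added feature (glasses, beard, mask) still match, without re-enrolling the subject.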
Method and apparatus for human behavior recognition, and storage medium
A method and an apparatus for human behavior recognition, and a storage medium, the method includes: obtaining a human behavior video captured by a camera; extracting a start point and an end point of a human motion from the human behavior video, where the human motion between the start point and the end point corresponds to a sliding window; determining whether the sliding window is a motion section; and if the sliding window is a motion section, predicting a motion category of the motion section using a pre-trained motion classifying model. Thus, accurate prediction of a motion in a human behavior video captured by a camera is realized without human intervention.
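The window-gating step can be sketched as follows, assuming each frame is reduced to a scalar (e.g. mean intensity) so motion energy is just the summed frame-to-frame difference. The energy threshold and the idea of gating on it are illustrative, not the patent's criterion.

```python
def motion_energy(frames):
    """Sum of absolute frame-to-frame differences inside a sliding window."""
    return sum(abs(b - a) for a, b in zip(frames, frames[1:]))

def is_motion_section(window, energy_threshold: float = 5.0) -> bool:
    """A window counts as a motion section when its motion energy is high enough."""
    return motion_energy(window) >= energy_threshold

def classify_motion(window, model):
    """Run the pre-trained classifier only on windows flagged as motion sections."""
    return model(window) if is_motion_section(window) else None
```

Gating windows before classification avoids wasting the (comparatively expensive) classifying model on still footage.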
OBJECT DETECTION AND IMAGE CROPPING USING A MULTI-DETECTOR APPROACH
Systems, methods and computer program products for detecting objects using a multi-detector are disclosed, according to various embodiments. In one aspect, a computer-implemented method includes defining analysis profiles, where each analysis profile: corresponds to one of a plurality of detectors, and comprises: a unique set of analysis parameters and/or a unique detection algorithm. The method further includes analyzing image data in accordance with the analysis profiles; selecting an optimum analysis result based on confidence scores associated with different analysis results; and detecting objects within the optimum analysis result. According to additional aspects, the analysis parameters may define different subregions of a digital image to be analyzed; a composite analysis result may be generated based on analysis of the different subregions by different detectors; and the optimum analysis result may be based on the composite analysis result.
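The profile-per-detector flow and confidence-based selection can be sketched as below. The dictionary layout of a profile and a detection result is an assumption made for the sketch; each detector is modeled as a callable returning its detections and a confidence score.

```python
def select_optimum(results):
    """Pick the analysis result with the highest confidence score."""
    return max(results, key=lambda r: r["confidence"])

def detect_multi(image, profiles):
    """Run each detector per its analysis profile (own parameters/subregion),
    then keep the best-scoring analysis result."""
    results = [
        {"profile": p["name"], **p["detector"](image, p["region"])}
        for p in profiles
    ]
    return select_optimum(results)
```

A composite result, as mentioned in the additional aspects, would instead merge the per-subregion detections rather than discarding all but the winner.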
Method, apparatus and system for liveness detection, electronic device, and storage medium
A method for liveness detection includes: acquiring a first depth map captured by a depth sensor and a first target image captured by an image sensor; performing quality detection on the first depth map to obtain a quality detection result of the first depth map; and determining a liveness detection result of a target object in the first target image based on the quality detection result of the first depth map. The present disclosure can improve the accuracy of liveness detection.
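A plausible reading of the quality-gated decision is sketched below: the depth map contributes to the liveness result only when its quality is acceptable. Treating quality as the fraction of valid depth pixels, the fallback to an RGB-only score, and all thresholds are assumptions for illustration.

```python
def depth_quality(depth_map, valid_min: float = 0.1) -> float:
    """Quality detection result: fraction of pixels with a valid depth reading
    (here, depth values at or above a minimum; 0.0 marks a missing reading)."""
    valid = sum(1 for d in depth_map if d >= valid_min)
    return valid / len(depth_map)

def liveness(depth_map, rgb_score, quality_threshold=0.5, live_threshold=0.5) -> bool:
    """Fuse a depth cue with the RGB score only when the depth map is usable;
    otherwise decide from the target image alone."""
    if depth_quality(depth_map) >= quality_threshold:
        depth_score = sum(depth_map) / len(depth_map)  # stand-in depth cue
        score = 0.5 * depth_score + 0.5 * rgb_score
    else:
        score = rgb_score
    return score >= live_threshold
```

Gating on depth quality is what improves accuracy here: a noisy or saturated depth map no longer drags down a decision the RGB branch could make correctly.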
IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER READABLE STORAGE MEDIUM
Methods, apparatuses, electronic devices, and computer readable storage media for image processing are provided. In one aspect, an image processing method includes: determining a plurality of image feature maps of a target image, the plurality of image feature maps corresponding to different preset scales; determining, based on the plurality of image feature maps and for each pixel of pixels in the target image, a first probability that the pixel in the target image belongs to a foreground and a second probability that the pixel in the target image belongs to a background; and performing panoramic segmentation on the target image based on the plurality of image feature maps, the first probabilities of the pixels in the target image, and the second probabilities of the pixels in the target image.
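The per-pixel foreground/background step can be sketched as a two-way softmax followed by an assignment rule. The logit-based formulation and the 0.5 cutoff are assumptions; the patent only states that segmentation uses the two probabilities together with the multi-scale feature maps.

```python
import math

def fg_bg_probs(logit_fg: float, logit_bg: float):
    """Softmax over two logits -> (p_foreground, p_background) for one pixel."""
    e_fg, e_bg = math.exp(logit_fg), math.exp(logit_bg)
    z = e_fg + e_bg
    return e_fg / z, e_bg / z

def assign_pixel(p_fg: float, instance_id: int, threshold: float = 0.5) -> int:
    """Foreground pixels keep their instance id; background pixels get id 0."""
    return instance_id if p_fg >= threshold else 0
```

In a full pipeline the instance id would come from an instance-segmentation head, and the background pixels would receive semantic ("stuff") labels instead of a single 0.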
METHOD AND APPARATUS FOR DETECTING LIVENESS BASED ON PHASE DIFFERENCE
A method and apparatus for detecting a liveness based on a phase difference are provided. The method includes generating a first phase image based on first visual information of a first phase, generating a second phase image based on second visual information of a second phase, generating a minimum map based on a disparity between the first phase image and the second phase image, and detecting a liveness based on the minimum map.
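The minimum-map construction can be sketched in one dimension as below: for each position in the first phase image, take the minimum matching cost over a small range of candidate disparities against the second phase image. The absolute-difference cost and the shift range are illustrative assumptions.

```python
def minimum_map(left_phase, right_phase, max_shift: int = 2):
    """For each pixel of the first phase image, the minimum matching cost over
    candidate shifts into the second phase image. A flat spoof (photo/screen)
    yields a different cost surface than a 3D face, which is what a liveness
    classifier would threshold on."""
    n = len(left_phase)
    out = []
    for i in range(n):
        costs = []
        for shift in range(-max_shift, max_shift + 1):
            j = i + shift
            if 0 <= j < n:
                costs.append(abs(left_phase[i] - right_phase[j]))
        out.append(min(costs))
    return out
```

With dual-pixel sensors the two "phase images" come from the same exposure, so this cue needs no second camera.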
SYSTEMS AND METHODS FOR CAMERA-LIDAR FUSED OBJECT DETECTION WITH LOCAL VARIATION SEGMENTATION
Systems and methods for object detection. Object detection may be used to control autonomous vehicle(s). For example, the methods comprise: obtaining, by a computing device, a LiDAR dataset generated by a LiDAR system of the autonomous vehicle; and using, by the computing device, the LiDAR dataset and at least one image to detect an object that is in proximity to the autonomous vehicle. The object is detected by: computing a distribution of object detections that each point of the LiDAR dataset is likely to be in; creating a plurality of segments of LiDAR data points using the distribution of object detections; and detecting the object in a point cloud defined by the LiDAR dataset based on the plurality of segments of LiDAR data points. The object detection may be used to facilitate at least one autonomous driving operation.
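The per-point detection distribution and the variation-based segmentation can be sketched as below. This greatly simplifies the pipeline: points are assumed pre-projected into the image, the "distribution" is a binary in-box score per camera detection, and the greedy 1-D grouping over a single coordinate stands in for true local-variation segmentation. All names and thresholds are illustrative.

```python
def label_distribution(point, image_detections):
    """Score each camera detection by whether the point's (assumed
    pre-projected) pixel falls inside that detection's bounding box."""
    px, py = point["pixel"]
    scores = {}
    for det in image_detections:
        x0, y0, x1, y1 = det["box"]
        scores[det["label"]] = 1.0 if x0 <= px <= x1 and y0 <= py <= y1 else 0.0
    return scores

def segment(points, image_detections, max_gap: float = 0.5):
    """Greedy grouping: start a new segment when the spatial gap to the
    previous point is large or its detection distribution changes."""
    segments, current, prev = [], [], None
    for p in sorted(points, key=lambda q: q["x"]):
        close = prev is not None and abs(p["x"] - prev["x"]) <= max_gap
        same = prev is not None and \
            label_distribution(p, image_detections) == label_distribution(prev, image_detections)
        if close and same:
            current.append(p)
        else:
            if current:
                segments.append(current)
            current = [p]
        prev = p
    if current:
        segments.append(current)
    return segments
```

Fusing the camera detections into the segmentation criterion is what keeps two nearby objects of different classes from being merged into one LiDAR segment.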