Patent classifications
G06V10/757
METHOD FOR LEARNING REPRESENTATIONS FROM CLOUDS OF POINTS DATA AND A CORRESPONDING SYSTEM
A method for learning representations from clouds of points data includes encoding clouds of points data into at least one representation by creating at least one tensor representation out of the clouds of points data. The method further includes using a loss function that utilizes a noisy reconstruction for reducing overfitting.
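As an illustration only (not the patented implementation), the noisy-reconstruction idea can be sketched in Python: a Chamfer-style reconstruction loss is computed against a Gaussian-perturbed copy of the target point set, so the model is never rewarded for memorising exact coordinates. All function names and the noise level are hypothetical:

```python
import math
import random

def chamfer(a, b):
    """Symmetric Chamfer distance between two lists of coordinate tuples."""
    def one_way(src, dst):
        return sum(min(math.dist(p, q) ** 2 for q in dst) for p in src) / len(src)
    return one_way(a, b) + one_way(b, a)

def noisy_reconstruction_loss(reconstruction, target, sigma=0.01, rng=None):
    """Chamfer loss against a noise-perturbed copy of the target,
    acting as a regulariser against overfitting."""
    rng = rng or random.Random(0)
    noisy_target = [tuple(c + rng.gauss(0.0, sigma) for c in p) for p in target]
    return chamfer(reconstruction, noisy_target)
```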
Information processing apparatus, control method, and program
The information processing apparatus (2000) includes a feature point detection unit (2020), a determination unit (2040), an extraction unit (2060), and a comparison unit (2080). The feature point detection unit (2020) detects a plurality of feature points from the query image. The determination unit (2040) determines, for each feature point, one or more object images estimated to include the feature point. The extraction unit (2060) extracts an object region estimated to include an object in the query image in association with the object image of the object estimated to be included in the object region, on the basis of the result of the determination. The comparison unit (2080) cross-checks the object region with the object image associated with the object region and determines an object included in the object region.
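The determination and extraction steps can be sketched as a simple descriptor-voting scheme (a hypothetical illustration, not the patented units; the function names, index layout, and distance threshold are assumptions):

```python
import math

def determine_object_images(query_points, object_index, max_dist=0.25):
    """For each query feature point (position, descriptor), vote for object
    images whose stored descriptors lie within max_dist of the descriptor."""
    votes = {}
    for pt, desc in query_points:
        for obj_id, obj_descs in object_index.items():
            if any(math.dist(desc, d) <= max_dist for d in obj_descs):
                votes.setdefault(obj_id, []).append(pt)
    return votes

def extract_regions(votes):
    """Bounding box of the feature points attributed to each object image."""
    return {
        obj_id: (min(x for x, _ in pts), min(y for _, y in pts),
                 max(x for x, _ in pts), max(y for _, y in pts))
        for obj_id, pts in votes.items()
    }
```

The extracted region would then be cross-checked against the associated object image, e.g. by template or descriptor comparison.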
CAMERA SETTING ADJUSTMENT BASED ON EVENT MAPPING
Systems, methods, and non-transitory media are provided for adjusting camera settings based on event data. An example method can include obtaining, via an image capture device of a mobile device, an image depicting at least a portion of an environment; determining a match between one or more visual features extracted from the image and one or more visual features associated with a keyframe; and based on the match, adjusting one or more settings of the image capture device.
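A minimal sketch of the matching-then-adjusting logic (hypothetical names and thresholds, not the claimed method): count query descriptors that fall near any keyframe descriptor, and on a sufficient match adopt the settings stored with that keyframe.

```python
import math

def adjust_settings(image_descriptors, keyframe, min_matches=3, max_dist=0.2):
    """Count query descriptors within max_dist of any keyframe descriptor;
    on a match, return the capture settings stored with the keyframe."""
    matches = sum(
        1 for d in image_descriptors
        if any(math.dist(d, k) <= max_dist for k in keyframe["descriptors"])
    )
    return keyframe["settings"] if matches >= min_matches else None
```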
Systems and Methods for Image Based Perception
Systems and methods for image-based perception. The methods comprise: obtaining, by a computing device, images captured by a plurality of cameras with overlapping fields of view; generating, by the computing device, spatial feature maps indicating locations of features in the images; defining, by the computing device, predicted cuboids at each location of an object in the images based on the spatial feature maps; and assigning, by the computing device, at least two cuboids of said predicted cuboids to a given object when predictions from images captured by separate cameras of the plurality of cameras correspond to the same detected object.
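The cross-camera assignment step can be illustrated with a greedy grouping of cuboid centers (a hypothetical sketch; the patent does not specify this association rule, and the threshold is an assumption):

```python
import math

def associate_cuboids(cuboids, max_center_dist=0.5):
    """Greedily group predicted cuboids, given as (camera_id, center) pairs:
    cuboids seen from different cameras whose centers nearly coincide are
    assigned to the same object."""
    objects = []
    for cam, center in cuboids:
        for group in objects:
            if any(c != cam and math.dist(center, ctr) <= max_center_dist
                   for c, ctr in group):
                group.append((cam, center))
                break
        else:
            objects.append([(cam, center)])
    return objects
```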
POINT CLOUD DATA PROCESSING APPARATUS, POINT CLOUD DATA PROCESSING METHOD, AND PROGRAM
A point cloud data processing apparatus (11) includes a processor configured to: acquire first form information that indicates a feature of a form of a first object; specify an object region of a second object that is identified from an image and that corresponds to the first form information; select, from the point cloud data, second-object point cloud data that corresponds to the object region; acquire second form information that indicates a feature of a form of the second object, on the basis of the second-object point cloud data; and compare the first form information with the second form information to determine whether the second object is the first object.
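As a toy illustration of the form comparison (not the patented feature set), bounding-box extents can stand in for the "form information"; the function names and tolerance are assumptions:

```python
def form_features(points):
    """Axis-aligned bounding-box extents as a crude stand-in for the
    abstract's 'form information'."""
    return tuple(
        max(p[i] for p in points) - min(p[i] for p in points) for i in range(3)
    )

def is_same_object(first_form, object_points, tol=0.1):
    """Compare stored first-object form features with features measured
    from the second object's point cloud."""
    second_form = form_features(object_points)
    return all(abs(a - b) <= tol for a, b in zip(first_form, second_form))
```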
Annotating high definition map points with measure of usefulness for localization
According to an aspect of an embodiment, operations may comprise obtaining a first point cloud that includes a first point. The operations also comprise obtaining a second point cloud that is a copy of the first point cloud and that includes a second point that is a copy of the first point. The operations also comprise moving the second point cloud with respect to the first point cloud according to a first vector. The operations also comprise identifying the point of the first point cloud that is closest to the second point of the second point cloud. The operations also comprise determining a second vector between the closest point and the second point. The operations also comprise determining a measure of usefulness of the first point based on the first vector and the second vector. The operations also comprise indicating the measure of usefulness of the first point.
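The intuition is that a point is useful for localization when an applied displacement remains observable at that point, and useless when a neighbouring point "absorbs" it (e.g. sliding along a wall). A minimal sketch of one plausible such measure (the projection formula is an assumption, not the patent's definition):

```python
import math

def usefulness(cloud, point, shift):
    """Move a copy of `point` by `shift` (the first vector), find the nearest
    point of the original cloud, and project the recovered second vector back
    onto the shift: ~1.0 when the displacement is fully observable at this
    point, ~0.0 when a neighbouring cloud point masks it."""
    moved = tuple(p + s for p, s in zip(point, shift))
    closest = min(cloud, key=lambda q: math.dist(q, moved))
    recovered = tuple(m - c for m, c in zip(moved, closest))
    return sum(a * b for a, b in zip(shift, recovered)) / sum(a * a for a in shift)
```

An isolated point scores near 1.0; a point on a densely sampled line shifted along that line scores near 0.0.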
Fingerprint matching method and apparatus, electronic equipment and readable storage medium
The present disclosure provides a fingerprint matching method and apparatus, an electronic equipment and a readable storage medium. The method includes: extracting a plurality of to-be-matched feature points from the to-be-identified fingerprint image; performing a first matching between the plurality of to-be-matched feature points and a plurality of template feature points in the template fingerprint image, wherein the first matching includes: identifying true feature points in the plurality of to-be-matched feature points, and determining feature point pairs each of which includes a true feature point and a template feature point corresponding to the true feature point in the template fingerprint image as a first matching result; removing at least one falsely matched feature point pair from the first matching result; and performing a second matching between the to-be-identified fingerprint image and the template fingerprint image based on remaining feature point pairs in the first matching result.
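The two-stage structure can be sketched as nearest-descriptor pairing followed by removal of geometrically inconsistent pairs (a hypothetical illustration; the descriptor metric, consistency test, and thresholds are assumptions, not the claimed method):

```python
import math

def first_match(query, template, max_desc_dist=0.2):
    """Pair each query point (position, descriptor) with its closest template
    point in descriptor space, keeping only sufficiently close pairs."""
    pairs = []
    for q_pt, q_desc in query:
        t_pt, t_desc = min(template, key=lambda t: math.dist(q_desc, t[1]))
        if math.dist(q_desc, t_desc) <= max_desc_dist:
            pairs.append((q_pt, t_pt))
    return pairs

def remove_false_pairs(pairs, tol=0.05):
    """Drop pairs that break rigid-geometry consistency: the distance between
    two query points should match the distance between their template partners."""
    keep = []
    for i, (q1, t1) in enumerate(pairs):
        consistent = sum(
            1 for j, (q2, t2) in enumerate(pairs)
            if i != j and abs(math.dist(q1, q2) - math.dist(t1, t2)) <= tol
        )
        if consistent >= max(1, (len(pairs) - 1) // 2):
            keep.append((q1, t1))
    return keep
```

The second matching would then score the two images using only the surviving pairs.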
Systems and methods for matching color and appearance of target coatings
Systems and methods for matching color and appearance of a target coating are provided herein. The system includes an electronic imaging device configured to receive target image data of the target coating. The target image data includes target image features. The system further includes one or more feature extraction algorithms that extract the target image features from the target image data. The system further includes a machine-learning model that identifies a calculated match sample image from a plurality of sample images utilizing the target image features. The machine-learning model includes pre-specified matching criteria representing the plurality of sample images for identifying the calculated match sample image from the plurality of sample images. The calculated match sample image is utilized for matching color and appearance of the target coating.
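In its simplest form, identifying the calculated match reduces to a nearest-neighbour lookup in feature space; the sketch below is a stand-in for the learned model, with hypothetical sample ids and feature vectors:

```python
import math

def match_sample(target_features, sample_library):
    """Return the sample id whose stored feature vector is closest to the
    target coating's extracted features."""
    return min(sample_library,
               key=lambda sid: math.dist(target_features, sample_library[sid]))
```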
Floorplan of a design for an integrated circuit
A computer-implemented method for comparing a first version of a floorplan of a design for an integrated circuit with a second version. The method comprises (i) generating timing information for each net in the second version by determining whether timing information is available for the net in the first version; (ii) if no timing information is available in the first version, generating the timing information for the second version by calculating a spatial distance and timing information between two points of the net using wire length differences between the first version and the second version; (iii) otherwise, generating the timing information for the second version by calculating a spatial distance and timing information between two points of the net using a wire reach table to obtain a wire delay.
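A wire reach table maps wire length to delay; the lookup in step (iii) can be sketched as a linear interpolation over a sorted table (the table format and values below are hypothetical, not from the patent):

```python
def wire_delay(length, reach_table):
    """Look up the delay for a wire length in a sorted (length, delay) table,
    interpolating linearly between entries and clamping beyond the last."""
    l0, d0 = reach_table[0]
    for l1, d1 in reach_table[1:]:
        if length <= l1:
            return d0 + (d1 - d0) * (length - l0) / (l1 - l0)
        l0, d0 = l1, d1
    return d0  # beyond the table: clamp to the last entry
```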
DISTANCE DETERMINATION METHOD, APPARATUS AND SYSTEM
The present disclosure provides a distance determination method, apparatus and system, relating to the technical field of image processing. The method includes the following steps: acquiring a master visual image photographed by a master camera and an original auxiliary visual image photographed by an auxiliary camera; acquiring an initial matching point pair between the master visual image and the original auxiliary visual image through feature extraction and feature matching; correcting the original auxiliary visual image sequentially, based on the initial matching point pair and different constraints, so as to obtain a target auxiliary visual image, wherein the different constraints include a constraint of a minimum rotation angle and a constraint of a minimum parallax; and determining a focusing distance according to the master visual image and the target auxiliary visual image. In this way, the focusing distance can be determined more accurately.
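Once the auxiliary image has been rectified against the master image, the final step is standard stereo triangulation, Z = f·B/d (this textbook formula is an illustration; the patent's exact computation is not specified in the abstract):

```python
def focusing_distance(focal_px, baseline_m, disparity_px):
    """Rectified-stereo depth: Z = f * B / d, with focal length in pixels,
    baseline in meters, and disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```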