Patent classifications
G06V10/757
Method and device for geometric analysis of a part surface
A computer-implemented method and device are directed to a geometric analysis of the result of a manufacturing process, or of a simulation of a manufacturing process, in which a part (14) is formed from a planar sheet of material by means of a tool (1). The result comprises a result model, a computer-based representation of the part after the (real or simulated) manufacturing process. The method comprises the computer-implemented steps of retrieving the result model (2); retrieving a reference model (3), the reference model being a mesh-based model derived from a CAD model representing a target shape of the part or a tool shape; determining an improved result model (33) by transforming the mesh of the reference model (3) to match the shape of the result model (2); and performing a geometric analysis on the basis of the improved result model (33).
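As an illustration only (the patent does not disclose this particular algorithm), the mesh-transformation step could be sketched as a brute-force nearest-point projection of each reference-mesh vertex onto the result model's point set; all names and data below are hypothetical:

```python
import numpy as np

def project_mesh_to_result(ref_vertices, result_points):
    """Move each reference-mesh vertex to its nearest point in the
    result model's point set (brute-force nearest neighbour)."""
    # Pairwise squared distances, shape (n_ref, n_result)
    d2 = ((ref_vertices[:, None, :] - result_points[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return result_points[nearest]

# Toy example: a flat reference triangle snapped onto a slightly deformed result.
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
res = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, 0.2], [0.0, 1.0, 0.15]])
improved = project_mesh_to_result(ref, res)
```

A production registration would use smoothness and connectivity constraints rather than independent vertex snapping, but the sketch shows the core idea: the improved result model inherits the reference mesh's topology while taking the result model's geometry.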
METHOD AND SYSTEM FOR DETECTING AND ANALYZING OBJECTS
A method for detecting objects and labeling them with distances in an image includes the steps of: obtaining a thermal image from a thermal camera, an RGB image from an RGB camera, and radar information from an mmWave radar; adjusting the thermal image based on the RGB image to generate an adjusted thermal image, and generating a fused image based on the RGB image and the adjusted thermal image; generating a second fused image based on the fused image and the radar information; detecting objects in the images and generating, based on the fused image, a further fused image including bounding boxes marking the objects; and determining motion parameters of the objects.
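The RGB/thermal fusion step could, under simple assumptions (the thermal image already aligned and resized to match the RGB image), be sketched as a per-pixel weighted blend; the function name and the weighting are illustrative, not taken from the patent:

```python
import numpy as np

def fuse_images(rgb, thermal_adjusted, alpha=0.6):
    """Pixel-wise weighted blend of an RGB image and an aligned,
    single-channel thermal image (both float arrays in [0, 1])."""
    thermal_rgb = np.repeat(thermal_adjusted[..., None], 3, axis=2)
    return alpha * rgb + (1.0 - alpha) * thermal_rgb

# Toy example: blending a black RGB frame with a saturated thermal frame.
rgb = np.zeros((4, 4, 3))
thermal = np.ones((4, 4))
fused = fuse_images(rgb, thermal, alpha=0.6)
```

Real systems typically fuse at the feature level inside a detector rather than at the pixel level, but the blend conveys how two aligned modalities become one image.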
RECOGNITION OF OBJECTS IN IMAGES WITH EQUIVARIANCE OR INVARIANCE IN RELATION TO THE OBJECT SIZE
A method for recognizing at least one object in at least one input image. In the method, a template image of the object is processed by a first convolutional neural network (CNN) to form at least one template feature map; the input image is processed by a second CNN to form at least one input feature map; the at least one template feature map is compared to the at least one input feature map; and it is evaluated from the result of the comparison whether, and if so at which position, the object is contained in the input image. The convolutional neural networks each contain multiple convolutional layers, and at least one of the convolutional layers is at least partially formed from at least two filters, which are convertible into one another by a scaling operation.
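The comparison of template and input feature maps can be illustrated (independently of the scaled-filter construction, which is the patent's distinctive part) as a sliding normalized cross-correlation over the input feature map; the function below is a hypothetical sketch:

```python
import numpy as np

def match_template_map(input_map, template_map):
    """Slide a template feature map over an input feature map and
    return the (row, col) position and score of the best
    normalised correlation."""
    H, W, C = input_map.shape
    h, w, _ = template_map.shape
    t = template_map / (np.linalg.norm(template_map) + 1e-9)
    best, best_pos = -np.inf, None
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            patch = input_map[i:i + h, j:j + w]
            patch = patch / (np.linalg.norm(patch) + 1e-9)
            score = float((patch * t).sum())
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

# Toy example: plant the template inside the input map at position (2, 3).
rng = np.random.default_rng(0)
inp = rng.random((8, 8, 4))
tmpl = inp[2:5, 3:6].copy()
pos, score = match_template_map(inp, tmpl)
```

A score near 1 at some position indicates the object is contained there; a low maximum score indicates it is absent.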
Systems, methods, and devices for image matching and object recognition in images using textures
A computer-implemented method for determining whether a first image contains at least a portion of a second image includes: determining a first set of feature points associated with the first image; removing from said first set of feature points at least some feature points in the first set that correspond to one or more textures in the first image; and then attempting to match feature points in said first set of feature points with feature points in a second set of feature points associated with said second image to determine whether said first image contains at least a portion of said second image.
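The texture-removal-then-match pipeline could be sketched as follows, assuming texture points have already been flagged upstream (how they are flagged is the patent's subject and is not reproduced here); Lowe's ratio test stands in for the unspecified matching criterion:

```python
import numpy as np

def filter_and_match(desc1, texture_flags, desc2, ratio=0.75):
    """Drop first-image feature points flagged as texture, then match the
    remaining descriptors to the second image's descriptors with a
    nearest-neighbour ratio test."""
    keep = np.where(~texture_flags)[0]
    matches = []
    for i in keep:
        d = np.linalg.norm(desc2 - desc1[i], axis=1)
        order = np.argsort(d)
        if len(d) >= 2 and d[order[0]] < ratio * d[order[1]]:
            matches.append((int(i), int(order[0])))
    return matches

# Toy example: point 1 is texture and is excluded before matching.
desc1 = np.array([[1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
texture = np.array([False, True, False])
desc2 = np.array([[1.0, 0.05], [10.0, 0.0], [5.0, 5.1]])
matches = filter_and_match(desc1, texture, desc2)
```

Removing texture points first reduces ambiguous matches, since repeated textures produce many nearly identical descriptors.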
Determining position data
Aspects described herein relate to methods and systems for obtaining geographic position data for features that are captured through a device using a relative coordinate system, such as an optical sensor on a vehicle travelling along a route. The relative positions of the features are transformed into geographic positions by first establishing a spatial relationship between the relative coordinate system and a geographic coordinate system. Once the spatial relationship is known, it can be used to match a number of the features captured in the relative coordinate system to features having a known position within the geographic coordinate system, preferably features that are known to have an accurately measured geographic position. The matched features can then be used as tie points between the relative coordinate system and the geographic coordinate system, and subsequently used to transform unmatched features in the relative coordinate system to positions in the geographic coordinate system.
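Estimating the relative-to-geographic transform from tie points is a classic least-squares problem; a 2D similarity-transform fit in the style of Umeyama's method (one plausible choice, not necessarily the patent's) could look like this:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping matched tie points src -> dst.  Umeyama-style; no handling
    of the degenerate reflection case."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    s, d = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(d.T @ s / len(src))   # cross-covariance SVD
    R = U @ Vt
    scale = S.sum() / (s ** 2).sum() * len(src)
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Toy example: four tie points under a known scale-2, 30-degree transform.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
dst = 2.0 * src @ R_true.T + np.array([3.0, 4.0])
scale, R, t = fit_similarity(src, dst)
```

Once fitted, `scale * R @ p + t` maps any unmatched relative-frame feature `p` into the geographic frame, which is exactly the final step the abstract describes.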
Object identification device, object identification method, and recording medium
An object identification device includes: at least one memory configured to store instructions; and at least one processor configured to execute the instructions to: compare a captured image against a plurality of identification images for identifying objects; and determine, when the comparison result indicates that a plurality of objects are included in the captured image, whether or not the plurality of objects are the same object, based on a first parameter indicating a geometric relation between the identification images and a second parameter indicating a geometric relation between the identification image related to each identified object and the captured image.
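One way to picture the "same object or not" decision (a hypothetical sketch, not the claimed parameters): if the identification-image-to-captured-image transforms of two detections agree in scale, rotation, and translation within tolerances, they are likely duplicate detections of one physical object:

```python
import math

def same_object(params_a, params_b, pos_tol=5.0, scale_tol=0.1, angle_tol=0.1):
    """Decide whether two detections (each a dict holding the scale, rotation
    angle, and translation of its identification-image-to-captured-image
    transform) are duplicate detections of the same physical object."""
    da = math.hypot(params_a["tx"] - params_b["tx"],
                    params_a["ty"] - params_b["ty"])
    return (abs(params_a["scale"] - params_b["scale"]) < scale_tol
            and abs(params_a["angle"] - params_b["angle"]) < angle_tol
            and da < pos_tol)

# Toy example: a and b land in the same place; c is a second, distinct instance.
a = {"scale": 1.00, "angle": 0.02, "tx": 100.0, "ty": 50.0}
b = {"scale": 1.02, "angle": 0.03, "tx": 102.0, "ty": 51.0}
c = {"scale": 1.00, "angle": 0.02, "tx": 400.0, "ty": 300.0}
```

All tolerance values here are arbitrary; a real device would derive them from the image resolution and the expected object spacing.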
METHOD FOR INDOOR LOCALIZATION AND ELECTRONIC DEVICE
The disclosure provides a method for indoor localization, a related electronic device and a related storage medium. A first image position of a target feature point of a target object and an identifier of the target feature point are obtained based on a first indoor image. A 3D spatial position of the target feature point is obtained through retrieval based on the identifier of the target feature point. The 3D spatial position is pre-determined based on a second image position of the target feature point on a second indoor image, a posture of a camera for capturing the second indoor image, and a posture of the target object on the second indoor image. An indoor position of a user is determined based on the first image position of the target feature point and the 3D spatial position of the target feature point.
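To make the final localization step concrete, here is a deliberately simplified 2D analogue (not the patent's formulation): once the retrieved landmarks' positions are known and the image positions have been converted to world-frame bearings, the observer's position is the least-squares intersection of the bearing lines:

```python
import numpy as np

def locate_from_bearings(landmarks, bearings):
    """Least-squares 2D observer position from world-frame bearings (radians)
    to landmarks with known 2D positions: each bearing constrains the
    observer to a line through the landmark."""
    A, b = [], []
    for (lx, ly), th in zip(landmarks, bearings):
        # Constraint: (lx - px) * sin(th) - (ly - py) * cos(th) = 0
        A.append([np.sin(th), -np.cos(th)])
        b.append(lx * np.sin(th) - ly * np.cos(th))
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p

# Toy example: the observer at (1, 2) sees one landmark due east, one due north.
landmarks = [(5.0, 2.0), (1.0, 6.0)]
bearings = [0.0, np.pi / 2]
p = locate_from_bearings(landmarks, bearings)
```

The full 3D method would instead solve a perspective-n-point problem from the 2D image positions and 3D spatial positions, but the structure, known landmark positions constraining an unknown observer pose, is the same.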
METHOD FOR CLASSIFYING MEASURING POINTS OF A POINT CLOUD
A method for classifying measuring points of a point cloud ascertained by at least one sensor, in particular a point cloud ascertained from a LIDAR sensor, a radar sensor and/or a camera sensor, via a control unit. For each measuring point of the point cloud, local surface vectors to adjacent measuring points are ascertained. For each local surface vector, an angle between the local surface vector and a gravity vector is calculated. Based on the calculated angles, a maximal surface vector, having the maximal angle with respect to the gravity vector, and a standardized surface vector are ascertained for each measuring point of the point cloud. Each measuring point whose standardized surface vector and/or maximal surface vector has an angle with respect to the gravity vector above a limit value is classified as a non-ground point.
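The thresholding step can be sketched as follows, assuming per-point surface normals have already been estimated (a simplification of the patent's surface-vector construction); points whose normal deviates too far from the up direction are flagged as non-ground:

```python
import numpy as np

def classify_non_ground(normals, gravity=np.array([0.0, 0.0, -1.0]),
                        limit_deg=30.0):
    """Flag points whose surface normal deviates from the (negated)
    gravity direction by more than limit_deg degrees as non-ground."""
    up = -gravity / np.linalg.norm(gravity)
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    cos_a = np.clip(n @ up, -1.0, 1.0)
    angles = np.degrees(np.arccos(cos_a))
    return angles > limit_deg

# Toy example: flat ground, a vertical wall, and slightly tilted ground.
normals = np.array([[0.0, 0.0, 1.0],    # flat -> ground
                    [1.0, 0.0, 0.0],    # vertical -> non-ground
                    [0.0, 0.1, 1.0]])   # ~5.7 deg tilt -> ground
non_ground = classify_non_ground(normals)
```

The 30-degree limit here is an arbitrary placeholder; the patent leaves the limiting value open.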
SYSTEMS AND METHODS PERFORMING OBJECT OCCLUSION IN AUGMENTED REALITY-BASED ASSEMBLY INSTRUCTIONS
A method for performing object occlusion is disclosed. The method includes capturing an image of a physical item; determining a location and an orientation of a first virtual item with an augmented reality registration function; generating an augmented reality image, wherein the augmented reality image comprises a rendering of the first virtual item in the image, using a first rendering function to depict the location and orientation of the first virtual item in the image, and a rendering of a second virtual item in the image with a second rendering function; and displaying the augmented reality image, wherein occlusion of the first virtual item by the physical item is shown in the augmented reality image based on occlusion of the first virtual item by the second virtual item, and wherein the first virtual item depicts a next step in step-by-step instructions for the assembly.
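The core trick, a second virtual item acting as an invisible proxy for the physical item, reduces to a per-pixel depth comparison. A minimal sketch, assuming per-pixel depth maps are available from the renderer and using `np.inf` for uncovered pixels:

```python
import numpy as np

def occlusion_mask(virtual_depth, proxy_depth):
    """Per-pixel occlusion: the first virtual item is hidden wherever the
    proxy (the second virtual item, standing in for the physical item)
    is closer to the camera.  np.inf marks pixels a model does not cover."""
    return proxy_depth < virtual_depth

# Toy example: the proxy covers the left half of a 4x4 frame, in front of
# the virtual item.
virtual = np.full((4, 4), 2.0)      # first virtual item at depth 2
proxy = np.full((4, 4), np.inf)
proxy[:, :2] = 1.0                  # proxy at depth 1 on the left half
mask = occlusion_mask(virtual, proxy)
```

Wherever the mask is true, the renderer draws the camera image instead of the virtual item, so the physical item appears to occlude the virtual instruction overlay.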
Pattern Matching Tool
The present disclosure is directed to a software tool that engages in a pattern matching technique. In one implementation, the software tool retrieves a two-dimensional drawing and identifies walls as lines; rotates the drawing until a threshold number of lines are aligned with either the X or Y axis; discards lines that are not aligned with either axis; identifies intersection points; identifies a subset of intersection points that have a maxima or minima coordinate; constructs a data library indicative of the relative positions of the points in the identified subset; and compares the constructed data library for the two-dimensional drawing to a data library constructed for another two-dimensional drawing.
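The rotate-discard-intersect portion of the pipeline could be sketched like this (the alignment heuristic and all names are assumptions; the data-library comparison is omitted):

```python
import numpy as np

def axis_align_and_intersect(segments, tol_deg=1.0):
    """Rotate 2D wall segments so the dominant direction lies along the
    X axis, keep only axis-aligned segments, and return the intersection
    points of the remaining horizontal and vertical lines."""
    segs = np.asarray(segments, dtype=float)            # shape (n, 2, 2)
    d = segs[:, 1] - segs[:, 0]
    # Dominant direction, folded into [0, 90) degrees; assumes it is not
    # split across the 0/90 wrap-around.
    theta = np.median(np.arctan2(d[:, 1], d[:, 0]) % (np.pi / 2))
    c, s = np.cos(-theta), np.sin(-theta)
    segs = segs @ np.array([[c, -s], [s, c]]).T         # un-rotate the drawing
    d = segs[:, 1] - segs[:, 0]
    ang = np.degrees(np.arctan2(np.abs(d[:, 1]), np.abs(d[:, 0])))  # in [0, 90]
    horiz, vert = ang < tol_deg, ang > 90.0 - tol_deg   # discard the rest
    return [(v[0, 0], h[0, 1])                          # x from vertical wall,
            for h in segs[horiz] for v in segs[vert]]   # y from horizontal wall

# Toy example: two walls meeting at a corner, drawn at a 30-degree tilt.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
walls = np.array([[[0.0, 0.0], [4.0, 0.0]],
                  [[0.0, 0.0], [0.0, 3.0]]]) @ R.T
pts = axis_align_and_intersect(walls)
```

From the recovered intersection points, the tool described above would keep only those with extreme coordinates and encode their relative positions for comparison against other drawings.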