METHOD FOR GRASPING TEXTURE-LESS METAL PARTS BASED ON BOLD IMAGE MATCHING
20210271920 · 2021-09-02
Inventors
- ZAIXING HE (HANGZHOU, ZHEJIANG PROVINCE, CN)
- XINYUE ZHAO (HANGZHOU, ZHEJIANG PROVINCE, CN)
- ZHIWEI JIANG (HANGZHOU, ZHEJIANG PROVINCE, CN)
CPC classification
G06V10/44
PHYSICS
B25J15/0033
PERFORMING OPERATIONS; TRANSPORTING
G06V10/50
PHYSICS
G05B2219/40564
PHYSICS
International classification
Abstract
A method for grasping texture-less metal parts based on BOLD image matching comprises: obtaining a real image and CAD template images by photographing, extracting a foreground part of the input images, calculating a covariance matrix of the foreground image, establishing the direction of a temporary coordinate system, and setting directions of line segments to point to a first or second quadrant of the temporary coordinate system; constructing a descriptor of each line segment according to an angle relation between the line segment and its k nearest line segments, and matching the descriptors of line segments in the real image and the CAD template images to obtain line segment pairs; and estimating a pose through a perspective-n-lines (PNL) algorithm to obtain the pose of the real texture-less metal part, and then inputting the pose of the real texture-less metal part to a mechanical arm to grasp the part. The present invention can correctly match line segments, obtain an accurate pose of the part by calculation, successfully grasp the part, and satisfy actual application requirements.
Claims
1. A method for grasping texture-less metal parts based on bundle of lines descriptor (BOLD) image matching, comprising the steps of: step 1: photographing a real texture-less metal part placed in a real environment by a real physical camera to obtain a real image; photographing a texture-less metal part computer aided design (CAD) model imported in a computer virtual scene by a virtual camera to obtain CAD template images; extracting a foreground part of the input real image and the input CAD template images, calculating a covariance matrix of the foreground part, and establishing a direction of a temporary coordinate system; step 2: processing the real image and all the CAD template images by means of a line segment detector, extracting edges in the real image and all the CAD template images and using the edges as line segments, traversing all the line segments in each said image, and setting directions of the line segments in the temporary coordinate system; step 3: for each said image, traversing all the line segments, and constructing a descriptor of each said line segment according to an angle relation between the line segment and k nearest line segments; step 4: in case of different k values for generating the descriptors of the line segments in the real image and the CAD template images, matching the descriptors of different line segments in the real image and the CAD template images to obtain line segment pairs; and step 5: recognizing a processed pose by means of perspective n lines according to the matched line segment pairs to obtain a pose of the real texture-less metal part, and then inputting the pose of the real texture-less metal part into a mechanical arm to grasp the part.
2. The method for grasping texture-less metal parts based on BOLD image matching according to claim 1, wherein the texture-less metal part is a polyhedral metal part with a flat and smooth surface and free of pits, protrusions and textures.
3. The method for grasping texture-less metal parts based on BOLD image matching according to claim 1, wherein specifically, in Step 1, the foreground part of the images is extracted and used as a foreground image, a covariance matrix of the foreground image is calculated to obtain two feature values of the covariance matrix and feature vectors corresponding to the two feature values, the feature vector corresponding to a larger feature value is taken as an x-axis positive direction of the temporary coordinate system, and the other feature vector is taken as a y-axis positive direction of the temporary coordinate system.
4. The method for grasping texture-less metal parts based on BOLD image matching according to claim 1, wherein traversing all the line segments to set directions of the line segments in the step 2 is performed specifically as follows: a temporary coordinate system is established with any point on each said line segment as an origin of the temporary coordinate system; if the line segment passes through a first quadrant, the line segment is set to point to the first quadrant of the temporary coordinate system; otherwise, if the line segment passes through a second quadrant, the line segment is set to point to the second quadrant of the temporary coordinate system; or, if the line segment passes through neither the first quadrant nor the second quadrant, the line segment points along the boundary between the first quadrant and the second quadrant of the temporary coordinate system.
5. The method for grasping texture-less metal parts based on BOLD image matching according to claim 1, wherein in the step 3, the k nearest line segments of each said line segment are selected in order according to distances between midpoints of the line segments.
6. The method for grasping texture-less metal parts based on BOLD image matching according to claim 1, wherein the step 3 is performed specifically as follows: 3.1: with two line segments s_i and s_j as one line segment and one nearest line segment thereof, a first angle α and a second angle β are calculated according to the following formula, and relative positions of the two line segments s_i and s_j are described by α and β;
7. The method for grasping texture-less metal parts based on BOLD image matching according to claim 1, wherein the step 4 is performed specifically as follows: 4.1: different k values for generating the descriptors of line segments in the real image and the CAD template images are k_1 and k_2, respectively; if k_1=k_2, the Euclidean distance between the descriptor of one said line segment in the real image and the descriptor of each said line segment in the CAD template images is calculated according to the following formula, two line segments corresponding to the nearest descriptors are selected and are regarded as matched to constitute a line segment pair:
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
[0045] The present invention will be further explained below in conjunction with the accompanying drawings and embodiments. The flow diagram of the present invention is illustrated by
[0046] A specific embodiment and an implementation process thereof of the present invention are as follows:
[0047] This embodiment is implemented with a U-shaped bolt as a texture-less metal part.
[0048] Step 1: a real texture-less metal part placed in a real environment is photographed by a real physical camera to obtain a real image; a texture-less metal part CAD model imported in a computer virtual scene is photographed by a virtual camera to obtain CAD template images; a foreground part of the input real image and the input CAD template images is extracted through a GrabCut algorithm, a covariance matrix of the foreground part is calculated, and the direction of a temporary coordinate system is established.
[0049] The real image and the CAD template images are specifically processed as follows: a covariance matrix of a foreground image of an input image, feature values thereof, and corresponding feature vectors are calculated, the feature vector corresponding to the larger feature value is taken as the x-axis positive direction of the temporary coordinate system, and the other feature vector is taken as the y-axis positive direction of the temporary coordinate system, as shown in
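The covariance-based axis construction described above can be sketched in NumPy. The eigenvector belonging to the larger eigenvalue of the covariance of the foreground pixel coordinates becomes the x axis, the other the y axis; the binary mask and its shape here are illustrative assumptions, not part of the patent.

```python
import numpy as np

def temporary_axes(foreground_mask):
    """Derive the temporary coordinate system from a binary foreground mask.

    Returns (x_axis, y_axis): unit eigenvectors of the covariance matrix of
    the foreground pixel coordinates; x_axis corresponds to the larger
    eigenvalue, y_axis to the smaller one.
    """
    ys, xs = np.nonzero(foreground_mask)            # foreground pixel coordinates
    pts = np.stack([xs, ys], axis=1).astype(float)
    cov = np.cov(pts, rowvar=False)                 # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    x_axis = eigvecs[:, np.argmax(eigvals)]         # dominant direction -> x axis
    y_axis = eigvecs[:, np.argmin(eigvals)]         # remaining direction -> y axis
    return x_axis, y_axis

# Toy mask: an elongated horizontal blob, so the x axis comes out ~horizontal.
mask = np.zeros((20, 60), dtype=bool)
mask[8:12, 5:55] = True
x_axis, y_axis = temporary_axes(mask)
```

Because `eigh` returns orthonormal eigenvectors, the two axes are guaranteed to be perpendicular unit vectors.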
[0050] Step 2: the real image and all the CAD template images are processed by means of a line segment detector (LSD), edges in the real image and all the CAD template images are extracted and used as line segments, all the line segments in each image are traversed, and directions of the line segments in the temporary coordinate system are set.
[0051] As shown in
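One plausible reading of the quadrant rule in Step 2 is that each segment's direction is flipped, if necessary, so that it points into quadrant I or II of the temporary coordinate system, i.e. its component along the temporary y axis is non-negative (ties broken toward positive x). The sketch below implements that interpretation; the axis values and endpoints are illustrative.

```python
import numpy as np

def orient_segment(p1, p2, x_axis, y_axis):
    """Canonically orient a segment in the temporary coordinate system.

    The endpoints are swapped when the direction p2 - p1 would point into
    quadrant III or IV (negative temporary-y component, or zero y with
    negative x), so every oriented segment points toward quadrant I or II.
    """
    d = np.asarray(p2, float) - np.asarray(p1, float)
    dx = float(d @ np.asarray(x_axis, float))       # component along temp x axis
    dy = float(d @ np.asarray(y_axis, float))       # component along temp y axis
    if dy < 0 or (dy == 0 and dx < 0):
        p1, p2 = p2, p1                             # flip to the canonical direction
    return np.asarray(p1, float), np.asarray(p2, float)

# With identity axes, a downward-pointing segment gets flipped upward.
a, b = orient_segment((0, 0), (1, -2), x_axis=(1, 0), y_axis=(0, 1))
```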
[0052] Step 3: for each image, all the line segments are traversed, and a descriptor of each line segment is constructed according to an angle relation between the line segment and k nearest line segments;
[0053] As shown in
[0054] 3.1: with two line segments s_i and s_j as one line segment and a nearest line segment thereof, a first angle α and a second angle β are calculated according to the following formula, as shown in
[0055] 3.2: for each line segment in the images, the first angles α and the second angles β of the line segment and k nearest line segments are obtained according to Step 3.1, that is, a constant contrast-based BOLD of each line segment is constructed by k pairs of first angles α and second angles β, which form a matrix to represent the descriptor.
[0056] In actual implementation, each pair of first angle α and second angle β can be discretely accumulated into a 2D joint histogram; in this specification, the discrete step length is set to π/12, and the resulting 2D joint histogram is the descriptor of the line segment.
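Steps 3.1–3.2 can be sketched as follows. This is a minimal NumPy version of a BOLD-style descriptor: neighbours are chosen by midpoint distance, α and β are the angles between each segment's direction and the vector joining the midpoints (the angle convention follows the published BOLD formulation, which the patent adapts, so it is an assumption here), and the pairs are accumulated into a 2D joint histogram with bin width π/12 as stated in the text.

```python
import numpy as np

BIN = np.pi / 12                                    # discrete step length from the text

def bold_descriptor(segments, i, k):
    """2D joint histogram descriptor for oriented segment i.

    segments: (n, 2, 2) array-like of oriented endpoint pairs.
    """
    segs = np.asarray(segments, float)
    mids = segs.mean(axis=1)
    # k nearest neighbours by midpoint distance, excluding the segment itself
    dists = np.linalg.norm(mids - mids[i], axis=1)
    neigh = [j for j in np.argsort(dists) if j != i][:k]

    def angle(u, v):
        # signed angle from u to v, mapped into [0, 2*pi)
        cross = u[0] * v[1] - u[1] * v[0]
        return np.arctan2(cross, np.dot(u, v)) % (2 * np.pi)

    nbins = int(round(2 * np.pi / BIN))             # 24 bins per axis
    hist = np.zeros((nbins, nbins))
    di = segs[i, 1] - segs[i, 0]
    for j in neigh:
        dj = segs[j, 1] - segs[j, 0]
        t = mids[j] - mids[i]                       # midpoint-joining vector
        alpha = angle(di, t)                        # first angle
        beta = angle(dj, -t)                        # second angle
        hist[int(alpha // BIN) % nbins, int(beta // BIN) % nbins] += 1
    return hist / max(len(neigh), 1)                # normalise by neighbour count

segs = [[(0, 0), (4, 0)], [(0, 2), (4, 2)], [(2, 0), (2, 4)]]
h = bold_descriptor(segs, 0, k=2)
```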
[0057] Step 4: in case of different k values for generating the descriptors of line segments in the real image and the CAD template images, the descriptors of different line segments in the real image and the CAD template images are matched to obtain line segment pairs;
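For the k_1 = k_2 case of claim 7, matching reduces to a nearest-neighbour search under the Euclidean distance between flattened descriptor histograms. A minimal sketch (the toy 3x3 descriptors are illustrative only):

```python
import numpy as np

def match_descriptors(desc_real, desc_tmpl):
    """Match each real-image descriptor to its nearest template descriptor.

    Descriptor histograms are flattened, pairwise Euclidean distances are
    computed, and each real segment is paired with the template segment whose
    descriptor is closest. Returns a list of (real_index, template_index).
    """
    A = np.asarray(desc_real, float).reshape(len(desc_real), -1)
    B = np.asarray(desc_tmpl, float).reshape(len(desc_tmpl), -1)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)   # (len(A), len(B))
    return [(i, int(np.argmin(d[i]))) for i in range(len(A))]

real = [np.eye(3), np.ones((3, 3))]
tmpl = [np.ones((3, 3)), np.eye(3)]
pairs = match_descriptors(real, tmpl)
```

Nearest-neighbour matching alone still produces outliers, which is why the text follows it with a RANSAC step to discard mismatched pairs.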
[0058] Finally, mismatches are removed through a RANSAC algorithm, and an output matching result is shown in
[0059] Step 5: a processed pose is recognized by means of perspective n lines (PNL) according to the matched line segment pairs to obtain a pose of the real texture-less metal part, and then the pose of the real texture-less metal part is input to a mechanical arm to grasp the part.
[0060] The preferred embodiments mentioned above are used to disclose the present invention, and are not intended to limit the present invention. Those ordinarily skilled in the art can make different modifications and embellishments without departing from the spirit and scope of the present invention. Therefore, the protection scope of the present invention is defined by the claims.