METHOD AND DEVICE FOR 3D SHAPE MATCHING BASED ON LOCAL REFERENCE FRAME
20220343105 · 2022-10-27
Assignee
Inventors
- Dong LI (Shenzhen, Guangdong, CN)
- Sheng AO (Shenzhen, Guangdong, CN)
- Jindong TIAN (Shenzhen, Guangdong, CN)
- Yong TIAN (Shenzhen, Guangdong, CN)
Cpc classification
G06V10/462
PHYSICS
G06V20/653
PHYSICS
International classification
Abstract
A method and a device for 3D shape matching based on a local reference frame are proposed. After acquiring a 3D point cloud and feature points in the method, the feature point set is projected to a plane, and feature transformation is performed on the projected points by using at least one factor from the distances between the 3D points and the feature points, the distances between the 3D points and the projected points, and the average distances between the 3D points and its 1-ring neighboring points to acquire a point distribution with a larger variance in a certain direction than the projected point set, and the local reference frame is determined based on the transformed point distribution. The 3D local feature descriptor established based on this local reference frame can encode the 3D local surface information more robustly, so as to obtain a better 3D shape matching effect.
Claims
1. A method for 3D shape matching based on a local reference frame, comprising: acquiring a 3D point cloud of a real scene; acquiring a feature point p of the 3D point cloud of the real scene; establishing a local reference frame for a first spherical neighborhood of the feature point p, wherein an origin of the first spherical neighborhood coincides with the feature point p and the first spherical neighborhood has a support radius of R, and an origin of the local reference frame coincides with the feature point p and the local reference frame have an orthogonal and normalized x axis, y axis, and z axis; establishing a 3D local feature descriptor based on the local reference frame, and encoding spatial information within the first spherical neighborhood to acquire 3D local surface information within the first spherical neighborhood; and matching the 3D local surface information within the first spherical neighborhood with 3D local surface information of a target object to perform 3D shape matching; wherein the step of establishing the local reference frame for the first spherical neighborhood of the feature point comprises: determining the z axis of the local reference frame; projecting a 3D point set P within the first spherical neighborhood to a plane L orthogonal to the z axis to obtain a projected point set P′, wherein P={p.sub.1, p.sub.2, p.sub.3, . . . . . . , p.sub.n}, P′={p′.sub.1, p′.sub.2, p′.sub.3, . . . . . . , p′.sub.n}, n is the number of 3D points within the first spherical neighborhood, and the plane L is a plane located at z=0; performing feature transformation on the projected point set P′ according to the following formula to acquire a point distribution T provided with a larger variance in certain one direction than the projected point set P′:
T.sub.i=W.sub.i(p′.sub.i−p)+p, wherein the parameter W.sub.i in the feature transformation is determined by at least one of a first parameter w1.sub.i, a second parameter w2.sub.i, and a third parameter w3.sub.i, wherein the first parameter w1.sub.i is associated with a distance from the 3D point p.sub.i to the feature point p, the second parameter w2.sub.i is associated with a distance from the 3D point p.sub.i to the projected point p′.sub.i, and the third parameter w3.sub.i is associated with an average distance
2. The method for 3D shape matching according to claim 1, wherein the step of determining the z axis of the local reference frame comprises: acquiring a 3D point set P.sub.z within a second spherical neighborhood, wherein an origin of the second spherical neighborhood coincides with the feature point p and the second spherical neighborhood has a calculation radius of R.sub.z, wherein P.sub.z={q.sub.1, q.sub.2, q.sub.3, . . . . . . , q.sub.m}, and m is the number of 3D points within the second spherical neighborhood; performing eigenvalue decomposition on a covariance matrix cov(P.sub.z) of the 3D point set P.sub.z according to the following formula to determine an eigenvector v corresponding to the minimum eigenvalue of the covariance matrix cov(P.sub.z):
3. The method for 3D shape matching according to claim 2, wherein the calculation radius R.sub.z is not equal to the support radius R.
4. The method for 3D shape matching according to claim 2, wherein the step of determining the calculation radius R.sub.z comprises: acquiring an average grid resolution scene.mr of the real scene and an average grid resolution model.mr of the target object; determining a radius scale factor δ according to the average grid resolution scene.mr of the real scene and the average grid resolution model.mr of the target object, wherein the radius scale factor δ is determined as follows:
5. The method for 3D shape matching according to claim 4, wherein the method, before determining the calculation radius R.sub.z of the real scene, further comprises: predetermining at least two radius scale factors, and predetermining local reference frames and 3D local feature descriptors corresponding to the at least two radius scale factors; storing the predetermined at least two radius scale factors and the predetermined 3D local feature descriptors at different locations of a hash table.
6. The method for 3D shape matching according to claim 5, wherein the method further comprises: looking up the at least two radius scale factors in the hash table by using the radius scale factor δ determined according to the average grid resolution scene.mr of the real scene and the average grid resolution model.mr of the target object, and determining a 3D local feature descriptor corresponding to one scale factor in the hash table as the final 3D local feature descriptor, wherein the one scale factor in the hash table is most approaches the radius scale factor δ.
7. The method for 3D shape matching according to claim 1, wherein the parameter W.sub.i in the feature transformation is determined by a product of any two of the first parameter w1.sub.i, the second parameter w2.sub.i, and the third parameter w3.sub.i.
8. The method for 3D shape matching according to claim 1, wherein the parameter W.sub.i in the feature transformation is determined by a product of the first parameter w1.sub.i, the second parameter w2.sub.i, and the third parameter w3.sub.i.
9. The method for 3D shape matching according to claim 1, wherein the first parameter w1.sub.i and the distance from the 3D point p.sub.i to the feature point p are required to satisfy the following relationship:
w1.sub.i=R−∥p.sub.i−p∥.
10. The method for 3D shape matching according to claim 1, wherein the second parameter w2.sub.i and the distance from the 3D point p.sub.i to the projected point p′.sub.i are required to satisfy the following relationship:
11. The method for 3D shape matching according to claim 1, wherein the third parameter w3.sub.i and the average distance
12. A method for 3D shape matching based on a local reference frame, comprising: acquiring a 3D point cloud of a target object; acquiring a feature point p of the 3D point cloud of the target object; establishing a local reference frame for a first spherical neighborhood of the feature point p, wherein an origin of the first spherical neighborhood coincides with the feature point p and the first spherical neighborhood has a support radius of R, and an origin of the local reference frame coincides with the feature point p and the local reference frame have an orthogonal and normalized x axis, y axis, and z axis; establishing a 3D local feature descriptor based on the local reference frame, and encoding spatial information within the first spherical neighborhood to acquire 3D local surface information within the first spherical neighborhood; and matching the 3D local surface information within the first spherical neighborhood with 3D local surface information of a scene to perform 3D shape matching; wherein the step of establishing the local reference frame for the first spherical neighborhood of the feature point comprises: determining the z axis of the local reference frame; projecting a 3D point set P within the first spherical neighborhood to a plane L orthogonal to the z axis to obtain a projected point set P′, wherein P={p.sub.1, p.sub.2, p.sub.3, . . . . . . , p.sub.n}, P′={p′.sub.1, p′.sub.2, p′.sub.3, . . . . . . , p′.sub.n}, n is the number of 3D points within the first spherical neighborhood, and the plane L is a plane located at z=0; performing feature transformation on the projected point set P′ according to the following formula to acquire a point distribution T provided with a larger variance in certain one direction than the projected point set P′:
T.sub.i=W.sub.i(p′.sub.i−p)+p, wherein the parameter W.sub.i in the feature transformation is determined by at least one of a first parameter w1.sub.i, a second parameter w2.sub.i, and a third parameter w3.sub.i, wherein the first parameter w1.sub.i is associated with a distance from the 3D point p.sub.i to the feature point p, the second parameter w2.sub.i is associated with a distance from the 3D point p.sub.i to the projected point p′.sub.i, and the third parameter w3.sub.i is associated with an average distance
13. The method for 3D shape matching according to claim 12, wherein the step of determining the z axis of the local reference frame comprises: acquiring a 3D point set P.sub.z within a second spherical neighborhood, wherein an origin of the second spherical neighborhood coincides with the feature point p and the second spherical neighborhood has a calculation radius of R.sub.z, wherein P.sub.z={q.sub.1, q.sub.2, q.sub.3, . . . . . . , q.sub.m}, and m is the number of 3D points within the second spherical neighborhood; performing eigenvalue decomposition on a covariance matrix cov(P.sub.z) of the 3D point set P.sub.z according to the following formula to determine an eigenvector v corresponding to the minimum eigenvalue of the covariance matrix cov(P.sub.z):
14. The method for 3D shape matching according to claim 13, wherein the step of determining the calculation radius R.sub.z comprises: acquiring an average grid resolution scene.mr of the real scene and an average grid resolution model.mr of the target object; determining a radius scale factor δ according to the average grid resolution scene.mr of the real scene and the average grid resolution model.mr of the target object, wherein the radius scale factor δ is determined as follows:
15. The method for 3D shape matching according to claim 12, wherein the parameter W.sub.i in the feature transformation is determined by a product of any two of the first parameter w1.sub.i, the second parameter w2.sub.i, and the third parameter w3.sub.i.
16. The method for 3D shape matching according to claim 12, wherein the parameter W.sub.i in the feature transformation is determined by a product of the first parameter w1.sub.i, the second parameter w2.sub.i, and the third parameter w3.sub.i.
17. A device for 3D shape matching based on a local reference frame, comprising an acquisition apparatus, a memory and a processor, wherein the acquisition apparatus is configured to acquire a 3D point cloud of a real scene, a computer program is stored in the memory, and the processor, when executing the computer program, implements the following operations of: acquiring a feature point p of the 3D point cloud of the real scene; establishing a local reference frame for a first spherical neighborhood of the feature point p, wherein an origin of the first spherical neighborhood coincides with the feature point p and the first spherical neighborhood has a support radius of R, and an origin of the local reference frame coincides with the feature point p and the local reference frame have an orthogonal and normalized x axis, y axis, and z axis; establishing a 3D local feature descriptor based on the local reference frame, and encoding spatial information within the first spherical neighborhood to acquire 3D local surface information within the first spherical neighborhood; and matching the 3D local surface information within the first spherical neighborhood with 3D local surface information of a target object to perform 3D shape matching; wherein the step of establishing the local reference frame for the first spherical neighborhood of the feature point comprises: determining the z axis of the local reference frame; projecting a 3D point set P within the first spherical neighborhood to a plane L orthogonal to the z axis to obtain a projected point set P′, wherein P={p.sub.1, p.sub.2, p.sub.3, . . . . . . , p.sub.n}, P′={p′.sub.1, p′.sub.2, p′.sub.3, . . . . . . , p′.sub.n}, n is the number of 3D points within the first spherical neighborhood, and the plane L is a plane located at z=0; performing feature transformation on the projected point set P′ according to the following formula to acquire a point distribution T provided with a larger variance in certain one direction than the projected point set P′:
T.sub.i=W.sub.i(p′.sub.i−p)+p, wherein the parameter W.sub.i in the feature transformation is determined by at least one of a first parameter w1.sub.i, a second parameter w2.sub.i, and a third parameter w3.sub.i, wherein the first parameter w1.sub.i is associated with a distance from the 3D point p.sub.i to the feature point p, the second parameter w2.sub.i is associated with a distance from the 3D point p.sub.i to the projected point p′.sub.i, and the third parameter w3.sub.i is associated with an average distance
18. The device for 3D shape matching according to claim 17, wherein the step, executed by the processor, of determining the z axis of the local reference frame comprises: acquiring a 3D point set P.sub.z within a second spherical neighborhood, wherein an origin of the second spherical neighborhood coincides with the feature point p and the second spherical neighborhood has a calculation radius of R.sub.z, wherein P.sub.z={q.sub.1, q.sub.2, q.sub.3, . . . . . . , q.sub.m}, and m is the number of 3D points within the second spherical neighborhood; performing eigenvalue decomposition on a covariance matrix cov(P.sub.z) of the 3D point set P.sub.z according to the following formula to determine an eigenvector v corresponding to the minimum eigenvalue of the covariance matrix cov(P.sub.z):
19. The device for 3D shape matching according to claim 18, wherein the step, executed by the processor, of determining the calculation radius R.sub.z comprises: acquiring an average grid resolution scene.mr of the real scene and an average grid resolution model.mr of the target object; determining a radius scale factor δ according to the average grid resolution scene.mr of the real scene and the average grid resolution model.mr of the target object, wherein the radius scale factor δ is determined as follows:
20. The device for 3D shape matching according to claim 17, wherein the parameter W.sub.i in the feature transformation is determined by a product of the first parameter w1.sub.i, the second parameter w2.sub.i, and the third parameter w3.sub.i.
Description
DESCRIPTION OF THE DRAWINGS
[0033]
[0034]
[0035]
[0036]
[0037]
[0038]
[0039]
DETAILED DESCRIPTION
[0040] In order to make the objections, technical solutions, and advantages of the present application clearer, the present application is further described below in detail with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are only used to illustrate the present application, and are not used to limit the present application.
[0041] Unless otherwise defined, all technical terms and scientific terms used in this specification have the same meanings as commonly understood by those skilled in the art of the present application. The terms used in the specification of the present application are only aimed to describe specific embodiments, but not to limit the present application. The term “and/or” used in this specification includes any and all combinations of one or more related listed items.
[0042] In addition, the terms “first”, “second”, etc. are only used for descriptive purposes, and cannot be understood as indicating or implying the number or relative importance of a technical feature. The specific embodiments of the present application are described below, and the technical features involved in the described different embodiments may be combined with each other as long as they do not conflict with each other.
[0043] As is well-known, 3D point cloud records a surface of a scene or an object in the form of points after scanning the scene or the object, and each of the points is provided with a three-dimensional coordinate. The 3D shape matching is to match a surface of a scene or an object represented by 3D point data with another or more surfaces of scenes or objects represented by 3D point data, so as to further achieve a result of 3D object recognition.
[0044] According to the first aspect of the present application, in an embodiment as shown in
[0045] acquiring a 3D point cloud of a real scene;
[0046] acquiring a feature point p of the 3D point cloud of the real scene;
[0047] establishing a local reference frame for a first spherical neighborhood of the feature point p, here an origin of the first spherical neighborhood coincides with the feature point p and the first spherical neighborhood has a support radius of R, and an origin of the local reference frame coincides with the feature point p and the local reference frame have an orthogonal and normalized x axis, y axis, and z axis;
[0048] establishing a 3D local feature descriptor based on the local reference frame, and encoding spatial information within the first spherical neighborhood to acquire 3D local surface information within the first spherical neighborhood; and
[0049] matching the 3D local surface information within the first spherical neighborhood with 3D local surface information of a target object to perform 3D shape matching.
[0050] In this embodiment, the real scene may be any scene in real life, especially in industrial applications. The present application does not make specific restrictions on the application scene, as long as it is a scene that requires a 3D shape matching or 3D recognition method. In this embodiment, the 3D point cloud may be acquired in real time, and the 3D point cloud of the target object may be pre-stored, i.e., the target object may be a model used to match the same object in the real scene. That is to say, in this embodiment, the 3D local surface information of the 3D point cloud acquired by real-time measurement of the real scene can be matched with the 3D local surface information acquired by calculating the 3D point cloud of the pre-stored target object, so as to achieve recognition of a shape matching the model of the target object from the 3D point cloud of the real scene.
[0051] In this embodiment, the feature point is also called a key point or a point of interest, that is, a feature point provided with a specific shape. The feature points in the 3D point cloud may be acquired by using a method based on a fixed-scale and the method based on an adaptive-scale, or the feature points may be acquired by using any other existing technology, which is not limited herein.
[0052] In this embodiment, the 3D local feature descriptor may be any local feature descriptor established based on the local reference frame of the present application, for example, any existing local feature descriptor based on the GA method, which is not limited in the present application.
[0053] In an embodiment, as shown in
[0054] determining the z axis of the local reference frame;
[0055] projecting a 3D point set P within the first spherical neighborhood to a plane L orthogonal to the z axis to obtain a projected point set P′ as shown in
[0056] performing feature transformation on the projected point set P′ according to the following formula to acquire a point distribution T provided with a larger variance in certain one direction than the projected point set P′:
T.sub.i=W.sub.i(p′.sub.i−p)+p,
where the parameter W.sub.i in the feature transformation is determined by at least one of a first parameter w1.sub.i, a second parameter w2.sub.i, and a third parameter w3.sub.i, where the first parameter w1.sub.i is associated with the distance from the 3D point p.sub.i to the feature point p, the second parameter w2.sub.i is associated with the distance from the 3D point p.sub.i to the projected point p′.sub.i, and the third parameter w3.sub.i is associated with the average distance
[0057] performing eigenvalue decomposition on a covariance matrix cov(T) of the point distribution T according to the following formula to determine an eigenvector v′ corresponding to a maximum eigenvalue of the covariance matrix cov(T):
and performing sign disambiguation on the eigenvector v′ corresponding to the maximum eigenvalue according to the following definition to determine the x axis of the local reference frame:
and
[0058] determining a cross product of the z axis and the x axis as the y axis of the local reference frame.
[0059] It is worth noting that the point set P′ as a whole is more stable in this direction if the variance of the point set P′ in the certain one direction is greater. The x axis of the local reference frame should be a coordinate axis that makes the point set P′ more stable in the x axis direction, therefore the local reference frame acquired by the above method is more robust.
[0060] In this embodiment, the point distribution T provided with a larger variance in the certain direction than the projected point set P′ is acquired by performing planar projection and feature transformation on the neighborhood points within the neighborhood of the feature point of the 3D point cloud, and the local reference frame established by analyzing the point distribution T provided with the larger variance in the certain one direction is repeatable, robust and anti-noise.
[0061] In this embodiment, the first parameter w1.sub.i associated with the distance from the 3D point p.sub.i to the feature point p may be used to reduce the influence of occlusion and clutter on the projected point set P′, the second parameter w2.sub.i associated with the distance from the 3D point p.sub.i to the projected point p′.sub.i may be used to make the point distribution of the projected point set P′ more characteristic, and the third parameter w3.sub.i associated with the average distance
[0062] As a preferred embodiment, the first parameter w1.sub.i and the distance from the 3D point p.sub.i to the feature point p are required to satisfy the following relationship:
w1.sub.i=R−∥p.sub.i−p∥.
[0063] As a preferred embodiment, the second parameter w2.sub.i and the distance from the 3D point p.sub.i to the projected point p′.sub.i are required to satisfy the following relationship:
where H={h.sub.i}, and σ represents a standard deviation of the above Gaussian function.
[0064] As a preferred embodiment, the standard deviation σ may be: σ=max(H)/9.
[0065] As a preferred embodiment, the third parameter w3.sub.i and the average distance
where r is the number of the 1-ring neighboring points, and s is a constant.
[0066] As an example, there are r neighborhood points p.sub.i1, p.sub.i2, . . . . . . , p.sub.ir of a certain 3D point p.sub.i in its 1-ring neighborhood. As shown in
[0067] As a preferred embodiment, the constant s may be equal to 4.
[0068] As a preferred embodiment, the parameter W.sub.i in the feature transformation may be commonly determined by a product of any two of the first parameter w1.sub.i, the second parameter w2.sub.i, and the third parameter w3.sub.i. For example, the point distribution T provided with the larger variance in the certain one direction may have the following plurality of forms: T.sub.i=w1.sub.iw2.sub.i(p′.sub.i−p)+p, T.sub.i=w1.sub.iw3.sub.i(p′.sub.i−p)+p, or T.sub.i=w2.sub.iw3.sub.i(p′.sub.i−p)+p.
[0069] As a preferred embodiment, the parameter W.sub.i in the feature transformation may be commonly determined by a product of the first parameter w1.sub.i, the second parameter w2.sub.i, and the third parameter w3.sub.i. For example, the point distribution T provided with the larger variance in the certain direction may be: T.sub.i=w1.sub.iw2.sub.iw3.sub.i (p′.sub.i−p)+p.
[0070] In the above-mentioned preferred embodiment, the more factors used to determine the point distribution T provided with the larger variance in the certain direction, the better the technical effect, and the more robust the acquired local reference frame.
[0071] In an embodiment, as shown in
[0072] acquiring a 3D point set P.sub.z within a second spherical neighborhood, where an origin of the second spherical neighborhood coincides with the feature point p and the second spherical neighborhood has a calculation radius of R.sub.z, where P.sub.z={q.sub.1, q.sub.2, q.sub.3, . . . . . . , q.sub.m}, and m is the number of 3D points within the second spherical neighborhood;
[0073] performing eigenvalue decomposition on a covariance matrix cov(P.sub.z) of the 3D point set P.sub.z as shown in the following formula to determine an eigenvector v corresponding to the minimum eigenvalue of the covariance matrix cov(P.sub.z):
where
[0074] performing sign disambiguation on the eigenvector v corresponding to the minimum eigenvalue according to the following definition to determine the z axis of the local reference frame:
where n.sub.j is a normal vector of the 3D point q.sub.j.
[0075] As a preferred embodiment, the calculation radius R.sub.z may be not equal to the support radius R, so that the z axis of the local reference frame is more robust to occlusion and clutter.
[0076] As a preferred embodiment, the calculated radius R.sub.z is equal to one third of the support radius R.
[0077] Because different 3D grid resolutions will lead to the 3D point clouds acquired with different densities during actual acquisition of the 3D point clouds, the larger the grid resolution, the larger the scale of the 3D point cloud, and the greater the number of 3D points on a surface of a scene or an object in the same space. Moreover, when the grid resolution of the object model is lower than that of the scene, the neighborhood points acquired in the real scene will be less than the neighborhood points of the model by using the same radius. Further, when the points are very sparse, the performance of the 3D shape matching will be greatly negatively affected and thus become very poor if the z axis of the local reference frame of the scene is calculated by using a relatively small radius of the neighborhood. Therefore, the present application has proposed an adaptive scale factor which is used to determine the calculation radius R.sub.z, so that the acquired z axis is not only robust to occlusion, but also robust to different grid samplings. In an embodiment, as shown in
[0078] acquiring an average grid resolution scene.mr of the real scene and an average grid resolution model.mr of the target object; determining a radius scale factor δ according to the average grid resolution scene.mr of the real scene and the average grid resolution model.mr of the target object, where the radius scale factor δ is determined as follows:
where C is a constant;
[0079] determining the calculation radius R.sub.z as R.sub.z=δR.
[0080] In this embodiment, the calculation radius used to calculate the z axis of the local reference frame is configured to be adaptively adjusted according to the grid resolution, so that the established local reference frame can be hardly affected by the grid resolution.
[0081] As a preferred embodiment, the constant C may be equal to 3.
[0082] In an embodiment, the method includes the basic technical features of the foregoing embodiment, and the method, on the basis of the foregoing embodiment, may further include the following steps before determining the calculation radius R.sub.z of the real scene:
[0083] predetermining at least two radius scale factors, and predetermining local reference frames and 3D local feature descriptors corresponding to the at least two radius scale factors;
[0084] storing the predetermined at least two radius scale factors and the predetermined 3D local feature descriptors at different locations of a hash table.
[0085] In an embodiment, the method includes the basic technical features of the foregoing embodiment, and the method, on the basis of the foregoing embodiment, may further include:
[0086] looking up the at least two radius scale factors in the hash table by using the radius scale factor δ determined according to the average grid resolution scene.mr of the real scene and the average grid resolution model.mr of the target object, and determining a 3D local feature descriptor corresponding to one scale factor in the hash table as the final 3D local feature descriptor, where the one scale factor in the hash table is most approaches the radius scale factor δ.
[0087] According to the second aspect of the present application, an embodiment of the present application proposes a method for 3D shape matching based on a local reference frame, and the method may include:
[0088] acquiring a 3D point cloud of an target object;
[0089] acquiring a feature point p of the 3D point cloud of the target object;
[0090] establishing a local reference frame for a first spherical neighborhood of the feature point p, here an origin of the first spherical neighborhood coincides with the feature point p and the first spherical neighborhood has a support radius of R, and an origin of the local reference frame coincides with the feature point p and the local reference frame have an orthogonal and normalized x axis, y axis, and z axis;
[0091] establishing a 3D local feature descriptor based on the local reference frame, and encoding spatial information within the first spherical neighborhood to acquire 3D local surface information within the first spherical neighborhood; and
[0092] matching the 3D local surface information within the first spherical neighborhood with 3D local surface information of an scene to perform 3D shape matching;
[0093] among the above steps, the step of establishing the local reference frame for the first spherical neighborhood of the feature point may include:
[0094] determining the z axis of the local reference frame;
[0095] projecting a 3D point set P within the first spherical neighborhood to a plane L orthogonal to the z axis to obtain a projected point set P′, where P={p.sub.1, p.sub.2, p.sub.3, . . . . . . , p.sub.n}, P′={p′.sub.1, p′.sub.2, p′.sub.3, . . . . . . , p′.sub.n}, n is the number of 3D points within the first spherical neighborhood, and the plane L is a plane located at z=0;
[0096] performing feature transformation on the projected point set P′ according to the following formula to acquire a point distribution T provided with a larger variance in certain one direction than the projected point set P′:
T.sub.i=W.sub.i(p′.sub.i−p)+p,
where the parameter W.sub.i in the feature transformation is determined by at least one of a first parameter w1.sub.i, a second parameter w2.sub.i, and a third parameter w3.sub.i, where the first parameter w1.sub.i is associated with the distance from the 3D point p.sub.i to the feature point p, the second parameter w2.sub.i is associated with the distance from the 3D point p.sub.i to the projected point p′.sub.i, and the third parameter w3.sub.i is associated with the average distance
[0097] performing eigenvalue decomposition on a covariance matrix cov(T) of the point distribution T according to the following formula to determine an eigenvector v′ corresponding to a maximum eigenvalue of the covariance matrix cov(T):
and performing sign disambiguation on the eigenvector v′ corresponding to the maximum eigenvalue according to the following definition to determine the x axis of the local reference frame:
and
[0098] determining a cross product of the z axis and the x axis as the y axis of the local reference frame.
[0099] The steps of the embodiments of the second aspect of the present application are similar to the steps of the embodiments of the first aspect, except that the 3D point cloud of the target object is pre-stored and the 3D point cloud of the scene may also be pre-stored after being acquired. That is to say, in this method, the 3D local surface information acquired by calculating the 3D point cloud of the pre-stored target object may be matched with the 3D local surface information acquired by calculating the 3D point cloud of the scene, so as to realize recognition of a shape matching the model of the target object from the 3D point cloud of the scene. For other technical features of the second aspect of the present application, reference may be made to the technical features in the specific embodiments of the first aspect of the present application, which will not be repeated herein again.
[0100] According to the third aspect of the present application, in an embodiment as shown in
[0101] According to the fourth aspect of the present application, an embodiment proposes a device for 3D shape matching based on a local reference frame, which includes a memory and a processor. Among them, a computer program is stored in the memory, and the processor, when executing the computer program, implements the embodiments of the methods described in the first aspect or the second aspect of the present application. For other technical features of the fourth aspect of the present application, reference may be made to the technical features in the specific embodiments of the first, second or third aspect of the present application, which will not be repeated herein again.
[0102] The specific embodiments of the present application described above do not constitute a limitation on the protection scope of the present application. Any amendment, equivalent replacement and improvement made within the principles of the present application shall be included in the protection scope of the present application.