METHOD AND SYSTEM FOR MEASURING DIMENSIONS OF A TARGET OBJECT
20170302905 · 2017-10-19
Inventors
- Alexander SHTEINFELD (Newton, MA, US)
- Richard LYDON (Hanover, MA, US)
- George LIU (Dover, MA, US)
- Udrekh GAVALE (Natick, MA, US)
Abstract
The invention relates to a method for measuring dimensions of a target object. The method comprises acquiring depth data representative of a physical space, the depth data comprising data of the target object, converting the depth data into a point cloud, extracting at least one plane from the point cloud, identifying a ground plane, eliminating the ground plane from the point cloud, extracting at least one point cluster from the remaining point cloud, identifying a point cluster of the target object, and estimating dimensions of the target object based on the point cluster of the target object.
Claims
1. A method for measuring dimensions of a target object, comprising: acquiring depth data representative of a physical space, the depth data comprising data of the target object; converting the depth data into a point cloud; extracting at least one plane from the point cloud; identifying a ground plane; eliminating the ground plane from the point cloud; extracting at least one point cluster from the remaining point cloud; identifying a point cluster of the target object; and estimating a volume of the target object based on the point cluster of the target object.
2. The method of claim 1, further comprising removing a background from the point cloud, before the ground plane is identified.
3. The method of claim 2, wherein a point of the point cloud is defined as background if its value in at least one dimension is greater than a predetermined threshold.
4. The method of claim 1, wherein identifying the ground plane comprises repeatedly identifying a plane in the point cloud, storing the identified plane and temporarily removing the identified plane from the point cloud, wherein the ground plane is identified among all stored and identified planes after all planes have been identified, stored and removed from the point cloud.
5. The method of claim 4, wherein a RANSAC algorithm is used to identify planes in the point cloud.
6. The method of claim 1, wherein identifying the point cluster of the target object comprises identifying the closest point cluster as the point cluster of the target object.
7. The method of claim 6, wherein for each point cluster a centroid point is calculated, wherein a distance of each point cluster is calculated based on the corresponding centroid point.
8. The method of claim 6, wherein the point clusters are identified using a predefined minimum distance between point clusters and/or a predefined minimum cluster size.
9. The method of claim 1, wherein a coordinate system having an x-, y- and z-direction is used, wherein each of the x-, y- and z-directions is perpendicular to both other directions, and wherein the point cluster of the target object is rotated such that the ground plane becomes parallel to a z=0 plane.
10. The method of claim 9, wherein a height of the target object is calculated after the point cluster of the target object is rotated.
11. The method of claim 1, wherein a minimum enclosing flat geometric shape is calculated for the point cluster of the target object.
12. The method of claim 11, wherein the point cluster of the target object is projected onto the ground plane or onto a z=0 plane before the minimum enclosing flat geometric shape is calculated.
13. The method of claim 1, wherein the depth data is acquired using a handheld 3D-vision device.
14. The method of claim 13, wherein the depth data is transferred from the handheld 3D-vision device to a portable processing device.
15. The method of claim 1, wherein the target object is a pallet load.
16. The method of claim 1, wherein a barcode is identified and evaluated, wherein the barcode contains information about the target object.
17. A system for measuring dimensions of a target object, the system comprising: a 3D-vision device for acquiring depth data representative of a physical space, the depth data comprising data of the target object; and a processing device for processing said depth data, wherein the processing device is configured to: convert the depth data into a point cloud; extract at least one plane from the point cloud; identify a ground plane; eliminate the ground plane from the point cloud; extract at least one point cluster from the remaining point cloud; identify a point cluster of the target object; and estimate dimensions of the target object based on the point cluster of the target object.
18. The system of claim 17, wherein the 3D-vision device is a handheld 3D-vision device and wherein the 3D-vision device and the processing device share one single housing.
Description
[0071] Various features and advantages of the present invention will become more apparent from the following description and accompanying drawing wherein:
[0079] Referring to the Figures, wherein like numerals indicate corresponding parts throughout the several figures and aspects of the invention,
[0080] In step 100 a 3D-camera system performs a three-dimensional video scan of a scene including a target object, thereby providing depth data comprising data of the target object.
[0081] In step 110 the depth data is converted into a stitched point cloud.
[0082] Step 120 represents the removal of points from the point cloud, wherein the removed points correspond to the background. That is, step 120 represents a background filtering.
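The background filtering of step 120 (and of claims 2 and 3) can be sketched as a simple per-axis threshold test on the point cloud. The sketch below is illustrative only, assuming NumPy; the helper name `filter_background` and the 3 m depth cut-off are chosen for illustration, as the patent leaves the threshold value open:

```python
import numpy as np

def filter_background(points, max_depth=3.0):
    """Remove points classified as background.

    points: (N, 3) array of x-, y-, z-coordinates in metres.
    max_depth: hypothetical cut-off; a point whose depth (z-value)
    exceeds this predetermined threshold is treated as background,
    as in claim 3.
    """
    mask = points[:, 2] <= max_depth
    return points[mask]
```

The same test can be applied per dimension (x, y, and z) to crop the cloud to the working volume in front of the camera.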
[0083] In step 130 a multi-plane segmentation takes place. In other words, at least one plane is extracted from the point cloud. Usually 4 to 5 planes can be extracted from the point cloud. The extracted planes are temporarily removed from the point cloud, wherein a ground plane is identified among the extracted planes.
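The multi-plane segmentation of step 130 follows the repeat-identify-store-remove loop of claim 4, with RANSAC (claim 5) as the plane detector. A minimal NumPy sketch, in which all function names and parameter values (iteration count, inlier tolerance, minimum inlier count) are assumptions for illustration:

```python
import numpy as np

def fit_plane(p0, p1, p2):
    """Plane through three points: unit normal n and offset d with n . x = d."""
    n = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(n)
    if norm < 1e-12:          # degenerate (collinear) sample
        return None
    n = n / norm
    return n, np.dot(n, p0)

def ransac_plane(points, iters=200, tol=0.01, rng=None):
    """Largest-consensus plane via RANSAC; returns (normal, d, inlier_mask)."""
    rng = rng or np.random.default_rng(0)
    best_mask, best_model = None, None
    for _ in range(iters):
        idx = rng.choice(len(points), 3, replace=False)
        model = fit_plane(*points[idx])
        if model is None:
            continue
        n, d = model
        mask = np.abs(points @ n - d) < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (n, d)
    return best_model[0], best_model[1], best_mask

def segment_planes(points, max_planes=5, min_inliers=50):
    """Repeatedly identify a plane, store it, and remove its inliers."""
    planes, remaining = [], points
    for _ in range(max_planes):
        if len(remaining) < min_inliers:
            break
        n, d, mask = ransac_plane(remaining)
        if mask.sum() < min_inliers:
            break
        planes.append((n, d, remaining[mask]))
        remaining = remaining[~mask]
    return planes, remaining
```

The ground plane can then be identified among the stored planes, e.g. as the plane that spreads through the whole point cloud and whose normal points in the z-direction (see paragraph [0093]).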
[0084] In step 140 the ground plane is eliminated from all detected planes of the point cloud.
[0085] In step 150 point clusters are extracted from the remaining point cloud, wherein a point cluster of the target object is identified among the point clusters. The point cluster of the target object is then separated from the point cloud.
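Identifying the point cluster of the target object in step 150 can follow claims 6 and 7: compute a centroid per cluster and pick the cluster whose centroid lies closest to the camera. A sketch, assuming the clusters have already been extracted (e.g. by Euclidean clustering with a predefined minimum inter-cluster distance and minimum cluster size, claim 8) and assuming the camera sits at the coordinate origin:

```python
import numpy as np

def nearest_cluster(clusters, origin=np.zeros(3)):
    """Select the point cluster of the target object.

    clusters: list of (N_i, 3) arrays, one per extracted cluster.
    origin: assumed camera position; the cluster whose centroid
    is closest to it is taken as the target object (claims 6, 7).
    """
    centroids = [c.mean(axis=0) for c in clusters]
    dists = [np.linalg.norm(c - origin) for c in centroids]
    return clusters[int(np.argmin(dists))]
```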
[0086] The point cluster of the target object is rotated in step 160 such that a plane of the point cluster of the target object that is parallel to the ground plane becomes parallel to a z=0 plane. Furthermore, a height of the target object is calculated.
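The rotation of step 160 can be realised by mapping the ground-plane normal onto the z-axis, after which the height of the target object is simply the z-extent of the rotated cluster. A sketch using Rodrigues' rotation formula; the helper names are illustrative:

```python
import numpy as np

def rotation_to_z(normal):
    """Rotation matrix mapping the unit vector `normal` onto the +z axis
    (Rodrigues' formula)."""
    n = normal / np.linalg.norm(normal)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)
    c = np.dot(n, z)
    if np.isclose(c, 1.0):       # already aligned with +z
        return np.eye(3)
    if np.isclose(c, -1.0):      # anti-parallel: rotate 180 deg about x
        return np.diag([1.0, -1.0, -1.0])
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def object_height(points, ground_normal):
    """Rotate so the ground plane is parallel to z = 0, then take the
    z-extent of the cluster as the object height (claim 10)."""
    R = rotation_to_z(ground_normal)
    z = (points @ R.T)[:, 2]
    return z.max() - z.min()
```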
[0087] In step 170 the point cluster of the target object is projected onto the z=0 plane.
[0088] From the projected points an enclosing rectangle is calculated and the dimensions of the target object are estimated.
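Step 170 and the rectangle calculation can be sketched as follows: drop the z-coordinate to project the cluster onto the z=0 plane, take the convex hull, and exploit the classical fact that a minimum-area enclosing rectangle has one side collinear with a hull edge, so only hull-edge directions need be tried. All names below are illustrative:

```python
import numpy as np

def _cross2(o, a, b):
    """z-component of (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain; pts is (N, 2). Returns hull vertices CCW."""
    pts = pts[np.lexsort((pts[:, 1], pts[:, 0]))]
    def half(points):
        h = []
        for p in points:
            while len(h) >= 2 and _cross2(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return np.array(lower[:-1] + upper[:-1])

def min_area_rect(pts_2d):
    """Minimum-area enclosing rectangle of the projected 2-D points.

    Rotates the hull so each edge in turn is axis-aligned and keeps
    the orientation with the smallest bounding-box area. Returns
    (width, depth) of the footprint.
    """
    hull = convex_hull(pts_2d)
    best_area, best_dims = np.inf, None
    for i in range(len(hull)):
        edge = hull[(i + 1) % len(hull)] - hull[i]
        ang = np.arctan2(edge[1], edge[0])
        c, s = np.cos(-ang), np.sin(-ang)
        rot = hull @ np.array([[c, -s], [s, c]]).T
        w = rot[:, 0].max() - rot[:, 0].min()
        h = rot[:, 1].max() - rot[:, 1].min()
        if w * h < best_area:
            best_area, best_dims = w * h, (w, h)
    return best_dims
```

Together with the height from step 160, the footprint width and depth give the estimated dimensions of the target object.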
[0089] Turning now to
[0090] From the reflected light the camera 10 derives depth data including depth data from the target object 16 and the further object 20. The depth data is transmitted wirelessly to a processing unit 22.
[0091] The processing unit 22 transforms the depth data into a stitched point cloud 24. An exemplary point cloud 24 corresponding to such depth data is shown in
[0092] In the point cloud 24 several planes are identified. In the example of
[0093] The processing unit 22 then identifies the plane 26a as the ground plane, based on the finding that the plane 26a spreads through the whole point cloud and has a normal vector in z-direction. The coordinate system in x-, y- and z-direction is shown in
[0094] The processing unit 22 then removes the ground plane 26a from the point cloud 24. The remaining point cloud 24 is shown in
[0095] As shown in
[0096] The point cluster 28a of the target object is then separated, as shown in
[0097] The point cluster 28a of the target object is then projected onto the z=0 plane (i.e. a plane in which all points have a z-value of zero) as shown in
[0098] The result of the measurement of the dimensions of the target object 16 can be shown on a screen of the processing unit 22.
[0099] While the present disclosure has been described in connection with specific implementations, explicitly set forth herein, other aspects and implementations will be apparent to those skilled in the art from consideration of the specification disclosed herein. Therefore, the present disclosure should not be limited to any single aspect, but rather construed in breadth and scope in accordance with the appended claims. For example, the various procedures described herein may be implemented with hardware, software, or a combination of both.
REFERENCE NUMERALS
[0100] 10 camera
[0101] 14 reflected light
[0102] 16 target object
[0103] 18 pallet
[0104] 20 further object
[0105] 22 processing unit
[0106] 24 point cloud
[0107] 26a-26e planes
[0108] 28a-28c point clusters
[0109] 30 centroid point
[0110] 32 rectangle
[0111] 100 acquiring 3D depth data
[0112] 110 converting depth data to point cloud
[0113] 120 background filtering
[0114] 130 multi-plane segmentation
[0115] 140 ground plane elimination
[0116] 150 cluster extraction
[0117] 160 rotation
[0118] 170 enclosing rectangle calculation