SENSOR FUSION FOR LINE TRACKING
20220373998 · 2022-11-24
Inventors
- Chiara Talignani Landi (Reggio Emilia, IT)
- Hsien-Chung Lin (Fremont, CA, US)
- Tetsuaki Kato (Fremont, CA, US)
- Chi-Keng Tsai (Bloomfield Hills, MI, US)
CPC classification
G06T7/246
PHYSICS
International classification
G05B19/418
PHYSICS
Abstract
A method for determining a position of an object moving along a conveyor belt. The method includes measuring the position of the conveyor belt while the conveyor belt is moving using a motor encoder and providing a measured position signal of the position of the object based on the measured position of the conveyor belt. The method also includes determining that the conveyor belt has stopped, providing a CAD model of the object and generating a point cloud representation of the object using a 3D vision system. The method then matches the model and the point cloud to determine the position of the object, provides a model position signal of the position of the object based on the matched model and point cloud, and uses the model position signal to correct an error in the measured position signal that occurs as a result of the conveyor belt being stopped.
Claims
1. A method for identifying a position of an object moving along a conveyor belt, said method comprising: measuring the position of the conveyor belt while the conveyor belt is moving; providing a measured position signal of the position of the object based on the measured position of the conveyor belt; determining that the conveyor belt has stopped; providing a model of the object; generating a point cloud representation of the object using a vision system, where the point cloud includes points that identify the location of features on the object; matching the model of the object and the point cloud to determine the position of the object; providing a model position signal of the position of the object based on the matched model and point cloud; and using the model position signal to correct an error in the measured position signal that occurs as a result of the conveyor belt being stopped.
2. The method according to claim 1 wherein measuring the position of the conveyor belt while the conveyor belt is moving includes using a motor encoder.
3. The method according to claim 1 wherein providing a model of the object includes providing a CAD model.
4. The method according to claim 1 wherein generating a point cloud representation of the object includes using a 3D vision system.
5. The method according to claim 4 wherein the 3D vision system includes at least one 3D camera.
6. The method according to claim 5 wherein the at least one 3D camera is a plurality of 3D cameras.
7. The method according to claim 1 wherein matching the model of the object and the point cloud includes using a point cloud matching algorithm.
8. The method according to claim 7 wherein the point cloud matching algorithm is an iterative closest point algorithm.
9. The method according to claim 1 wherein matching the model of the object and the point cloud includes translating and rotating the model to match feature points in the point cloud.
10. The method according to claim 1 wherein the method is performed in a robot system.
11. A method for identifying a position of an object moving along a conveyor belt, said method being performed by a robot system, said method comprising: measuring the position of the conveyor belt while the conveyor belt is moving using a motor encoder; providing a measured position signal of the position of the object based on the measured position of the conveyor belt; determining that the conveyor belt has stopped; providing a CAD model of the object; generating a point cloud representation of the object using a 3D vision system, where the point cloud includes points that identify the location of features on the object; matching the model of the object and the point cloud to determine the position of the object by translating and rotating the model to match feature points in the point cloud; providing a model position signal of the position of the object based on the matched model and point cloud; and using the model position signal to correct an error in the measured position signal that occurs as a result of the conveyor belt being stopped.
12. The method according to claim 11 wherein matching the model of the object and the point cloud includes using an iterative closest point algorithm.
13. A system for identifying a position of an object moving along a conveyor belt, said system comprising: means for measuring the position of the conveyor belt while the conveyor belt is moving; means for providing a measured position signal of the position of the object based on the measured position of the conveyor belt; means for determining that the conveyor belt has stopped; means for providing a model of the object; means for generating a point cloud representation of the object using a vision system, where the point cloud includes points that identify the location of features on the object; means for matching the model of the object and the point cloud to determine the position of the object; means for providing a model position signal of the position of the object based on the matched model and point cloud; and means for using the model position signal to correct an error in the measured position signal that occurs as a result of the conveyor belt being stopped.
14. The system according to claim 13 wherein the means for measuring the position of the conveyor belt while the conveyor belt is moving uses a motor encoder.
15. The system according to claim 13 wherein the means for providing a model of the object provides a CAD model.
16. The system according to claim 13 wherein the means for generating a point cloud representation of the object using a vision system uses a 3D vision system.
17. The system according to claim 16 wherein the 3D vision system includes at least one 3D camera.
18. The system according to claim 17 wherein the at least one 3D camera is a plurality of 3D cameras.
19. The system according to claim 13 wherein the means for matching the model of the object and the point cloud uses an iterative closest point algorithm.
20. The system according to claim 13 wherein the means for matching the model of the object and the point cloud translates and rotates the model to match feature points in the point cloud.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0008]
[0009]
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0010] The following discussion of the embodiments of the disclosure directed to a robotic system and method for determining the position of an object moving along a conveyor belt that compensates for the backlash error when the conveyor belt stops is merely exemplary in nature, and is in no way intended to limit the invention or its applications or uses.
[0011]
[0012] While the conveyor belt 18 is moving, the position of the car body 16 is being continuously updated using information from the encoder 20. When the conveyor belt 18 stops, the backlash of the belt 18 causes an error in the measurements from the encoder 20 that has to be corrected. During the time that the conveyor belt 18 is stopped, the 3D cameras 22 generate the point cloud that is matched or compared to a CAD model of the car body 16 stored in the controller 24 to compensate for missing points and determine the precise position of the car body 16. The combination of high frequency object position data from the encoder 20 while the belt 18 is moving and low frequency object position data, i.e., matching a point cloud from the 3D cameras 22 and a CAD model of the car body 16, while the belt 18 is stopped allows correction of the measurements from the encoder 20 resulting from belt backlash, and thus precise tracking of the car body 16 on the conveyor belt 18.
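The matching step described above, translating and rotating the CAD model to fit the measured point cloud, could be realized with a basic iterative closest point (ICP) loop. The sketch below is illustrative only and is not part of the disclosure: the function names, the brute-force nearest-neighbour search, and the SVD-based rigid alignment are assumptions about one conventional way to implement such matching.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Rigid transform (R, t) aligning paired points src onto dst (Kabsch/SVD)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def icp(model, cloud, iterations=20):
    """Iteratively translate and rotate CAD model points onto the point cloud."""
    src = model.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        # Nearest-neighbour correspondences (brute force, for clarity only).
        d = np.linalg.norm(src[:, None, :] - cloud[None, :, :], axis=2)
        matched = cloud[d.argmin(axis=1)]
        # Best rigid fit for the current correspondences, then accumulate it.
        R, t = best_fit_transform(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

A production system would typically use a k-d tree for the correspondence search and an outlier rejection step to tolerate the missing points mentioned above; the brute-force version is kept only to make the translate-and-rotate iteration explicit.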
[0013]
[0014] The point cloud matching processor 36 provides low frequency position data of the car body 16 that is obtained when the conveyor belt 18 is stopped, and the measurements from the encoder 40 provide high frequency position data of the car body 16 while the conveyor belt 18 is moving. Thus, when the conveyor belt 18 is moving, no data is being provided to the error compensation processor 38 from the matching processor 36, and the encoder measurements alone provide the position of the car body 16 on the conveyor belt 18. When the conveyor belt 18 stops, which can be identified by the controller 24 in any suitable manner, the last position of the conveyor belt 18 provided by the encoder measurements is not accurate because of lurching when the belt 18 stops. The point cloud matching process is therefore performed to correct the measurements from the encoder 40 so that when the belt 18 starts moving again the measurements from the encoder 40 will be accurate. Thus, objects on the conveyor belt 18 are represented by their complex shapes and are not approximated with simple shapes, so operations such as interior painting, welding or screwing can be accurately performed.
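The error compensation scheme described above, high-rate encoder data while the belt moves and a low-rate vision fix while it is stopped, might be sketched as follows. The class, its method names, and the millimetre-per-count encoder scale are hypothetical and chosen only to illustrate how a vision-derived position can correct subsequent encoder readings; the disclosure does not specify this interface.

```python
class BeltTracker:
    """Fuse high-frequency encoder counts with low-frequency vision fixes.

    While the belt moves, position comes from the encoder alone. Each time
    the belt stops, the matched model/point-cloud position replaces the
    encoder estimate, and the resulting offset is carried forward so the
    encoder readings remain accurate after the belt restarts.
    """

    def __init__(self, mm_per_count):
        self.mm_per_count = mm_per_count   # assumed encoder scale factor
        self.offset_mm = 0.0               # correction from the last vision fix
        self.counts = 0

    def on_encoder(self, counts):
        # High-frequency update while the conveyor belt is moving.
        self.counts = counts

    def on_vision_fix(self, measured_mm):
        # Low-frequency update while the belt is stopped: the position from
        # model/point-cloud matching overrides the encoder estimate, which
        # may be wrong due to belt backlash when the belt stopped.
        self.offset_mm = measured_mm - self.counts * self.mm_per_count

    def position(self):
        # Encoder-derived travel plus the backlash correction.
        return self.counts * self.mm_per_count + self.offset_mm
```

For example, if the encoder reads 1000 counts at 0.1 mm per count but the vision fix places the object at 98.5 mm, the 1.5 mm backlash error is absorbed into the offset and all later encoder-based positions are shifted accordingly.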
[0015] The foregoing discussion discloses and describes merely exemplary embodiments of the present disclosure. One skilled in the art will readily recognize from such discussion and from the accompanying drawings and claims that various changes, modifications and variations can be made therein without departing from the spirit and scope of the disclosure as defined in the following claims.