METHOD OF WEIGHING USING OBJECT RECOGNITION AND DEVICE THEREFOR
20220260410 · 2022-08-18
Inventors
- Song Zhang (Changzhou, CN)
- Kan Liu (Changzhou, CN)
- Shenhui Wang (Changzhou, CN)
- Zhiqiang Wang (Changzhou, CN)
- Kai Yin (Changzhou, CN)
CPC classification
G01N5/00
PHYSICS
G06V20/52
PHYSICS
G01G21/22
PHYSICS
G06V10/22
PHYSICS
G01G19/4144
PHYSICS
International classification
G01G21/22
PHYSICS
Abstract
A method of weighing using a scale (10) comprises the steps of: recognizing at least one of a plurality of objects placed within an object recognition area (A) of a platform (20) of the scale (10), and weighing the plurality of objects placed on the platform (20) of the scale (10) to determine a total weight of the plurality of objects. A weighing device (10) comprises the platform (20) configured as a plane, and utilizes the aforementioned weighing method. The method of weighing is advantageous in that it reduces the difficulty of object recognition using an algorithm by increasing the degree to which the object on the weighing platform fits the algorithm, reduces the complexity of the operation flow and the time required, and effectively increases the precision and accuracy of object recognition.
Claims
1. A method of weighing at least one object, using object recognition with a scale, the method comprising the steps of: recognizing at least one of a plurality of objects placed within an object recognition area of a weigh platform of the scale, and weighing the at least one recognized object to determine a total weight of the at least one recognized object.
2. The method of claim 1, wherein the step of recognizing at least one object comprises: taking a picture of the object recognition area of the weigh platform; and recognizing, using the picture taken, at least one object in the object recognition area.
3. The method of claim 2, further comprising the step of: weighing at least one object placed within the object recognition area.
4. The method of claim 3, further comprising the steps of: sending the picture taken to a training model to recognize the at least one object in the object recognition area; or sending the picture taken and the weight of the at least one object to the training model to recognize the at least one object.
5. The method of claim 4, wherein: the training model is configured to use picture feature comparison to recognize the at least one object in the picture taken; and, optionally, the training model is further configured to compare the weight of the at least one object against a pre-set standard weight to determine if a weight difference is within a predetermined error range.
6. The method of claim 4, wherein constructing the training model comprises: taking pictures of the at least one object placed within the object recognition area on the weigh platform in different angular directions; and sending the pictures of the at least one object to a recognition algorithm for constructing the training model.
7. The method of claim 6, wherein the recognition algorithm constructs the training model for the at least one object using at least one of: weight information of the at least one object, light source information, and shadow information regarding the pictures of the at least one object.
8. The method of claim 1, further comprising at least one of the steps of: outputting the weight of the at least one object and the recognized information of the at least one object; or inputting the weight of the at least one object and the recognized information of the at least one object into an order or a database; or outputting a counted number of the at least one object, after obtaining the counted number by means of the weight of the objects.
9. A weighing device comprising a weigh platform configured as a plane, wherein the weighing device is configured to perform the steps of the method of claim 1.
10. The weighing device of claim 9, wherein the weigh platform comprises an object recognition area that is configured as a protrusion or a marking line along a boundary of the object recognition area, or as a raised portion that is higher than remaining portions of the weigh platform, or as recessed portions separating the object recognition area from the remaining portions of the weigh platform.
11. The weighing device of claim 9, wherein the object recognition area is located at a corner of the weigh platform; or at a center of the weigh platform.
12. The weighing device of claim 11, wherein the weigh platform is of a rectangular or square configuration.
13. The weighing device of claim 11, wherein the object recognition area is configured in the form of a rectangle, square, or circle.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0036]
[0037]
[0038]
DETAILED DESCRIPTION OF EMBODIMENTS
[0039] The present invention is further illustrated below by way of embodiments, but is not limited to these embodiments.
[0040] A platform 20 of a weighing apparatus or scale 10 of the present invention is divided so as to have an independent image area or object recognition area A: an object placed within the object recognition area A is recognized, and a plurality of objects on the entire platform 20 are then weighed. The weighing data and the object recognition results may also be used, for example, for counting the number of objects on the platform 20. In addition, formulation or order information can be processed by combining article information related to the object.
[0041] The present invention will be described in detail with reference to the foregoing embodiments.
[0042] An object recognition area A on a platform 20 of a scale 10 of this embodiment is arranged at one corner of the platform 20, as shown in the drawings.
[0043] In another embodiment, the object recognition area A is arranged on any one of the other three corners or on one side of the platform 20, as shown in the drawings.
[0044] Further, in an alternative example shown in the drawings, the object recognition area A is located at the center of the platform 20.
[0045] The platform 20 of the scale 10 of this embodiment is configured as a plane, as shown in the drawings. The boundary of the object recognition area A may be indicated by a protrusion or a marking line along the boundary.
[0046] In yet another embodiment, the portion of the area A on the platform 20 is integrally raised relative to the other portions of the platform 20 to form a platform area that is higher than the other portions of the platform 20. In a different example, the other portions of the platform 20 are recessed relative to the area A, forming a recessed area lower than the area A.
[0047] Apart from the rectangular shaped platform 20 shown in the drawings, the platform 20 may also be of a square configuration; similarly, the object recognition area A may be configured in the form of a rectangle, a square, or a circle.
[0048] In yet another embodiment, the field of view of the camera on the platform 20 covers only the object recognition area A and visually recognizes the object placed within the confines of area A, thereby reducing the influence of ambient conditions on the visual recognition apparatus and on image acquisition. This arrangement also simplifies installation and debugging.
[0049] When performing object recognition, one object may be placed within the area A, or multiple objects, such as three or six objects, may be placed within the area A provided on the platform 20. The camera takes a picture of the entire platform 20 or only of the area A, and the weight weight_a of the object(s) located within the area A is determined and saved.
[0050] The background algorithm recognizes the object(s) placed within the area A on the platform 20 by means of the picture: it extracts image features and compares them with a previously stored training model to establish a relationship, gives an object-matching confidence, and then compares the weight information weight_a with the weight information weight_a_s recorded in the model, within a pre-determined weight tolerance. The confidence and the weight-tolerance result are combined with the object recognition result to make a comprehensive determination.
[0051] Since only the object within the area A is recognized, the algorithm reduces the complexity of feature extraction and comparison. In addition, if the picture in the training model is also obtained from the same area A, since the actual imaging effect within the area A is relatively similar to the picture in the model, the algorithm further reduces the difficulty of feature extraction and comparison, and can obtain better recognition results. At the same time, the determination method considering the weight further increases the accuracy.
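The comprehensive determination described above, combining the image-matching confidence with the weight-tolerance check, can be sketched as follows (a minimal illustration; the function name, the threshold value, and the tolerance value are hypothetical and are not taken from this disclosure):

```python
def is_desired_object(confidence, weight_a, weight_a_s,
                      confidence_threshold=0.8, weight_tolerance=0.05):
    """Combine image-matching confidence with a weight check.

    confidence           -- matching confidence from feature comparison
    weight_a             -- measured weight of the object(s) in area A
    weight_a_s           -- standard weight recorded in the training model
    confidence_threshold -- minimum acceptable confidence (assumed value)
    weight_tolerance     -- allowed relative weight deviation (assumed value)
    """
    weight_ok = abs(weight_a - weight_a_s) <= weight_tolerance * weight_a_s
    return confidence >= confidence_threshold and weight_ok
```

An object is accepted only when both checks pass; a high-confidence image match with an out-of-tolerance weight is still rejected, which is what gives the combined determination its added accuracy.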
[0052] Then, an unlimited number of objects can be placed or dumped on the other area B of the platform 20. In this embodiment, object recognition is performed only on area A, so that, unlike prior art arrangements, there is no requirement for imaging the objects located in area B. This allows the operator to use the scale 10 to weigh various types of objects quickly and conveniently.
[0053] Thereafter, a picture of the entire platform 20 is taken, the weight weight_ab of the objects on the entire platform 20 is determined, and the picture along with the weight data weight_a and weight_ab is sent to the background algorithm for processing.
[0054] In the embodiment in which the field of view of the camera covers only the area A, only the weight weight_ab on the entire platform 20 is determined at this time, and the weight data weight_a and weight_ab are fed as inputs to the background algorithm for processing.
[0055] When the background algorithm determines that the object within the area A on the platform 20 is the desired object, the object information and the weight information weight_ab are directly output or recorded. It is also possible to send the weight data weight_ab for the entire platform 20 to an order to record the total weight of the objects in the order, to calculate the number of objects of this type from the relationship between the total weight and the piece weight, and to send the quantity information to the order for storage. After the processing is completed, the scale 10 proceeds to the processing of the next type of objects.
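The quantity calculation described above derives the object count from the relationship between the total weight and the piece weight. A minimal sketch (the function name and the choice of rounding to the nearest whole object are illustrative assumptions, not specified in this disclosure):

```python
def count_objects(weight_ab, piece_weight):
    """Estimate the number of objects on the platform from the total
    weight weight_ab and the known per-piece weight, rounding to the
    nearest whole object to absorb small measurement error."""
    if piece_weight <= 0:
        raise ValueError("piece weight must be positive")
    return round(weight_ab / piece_weight)
```

For example, a total weight of 1230.0 g with a piece weight of 24.6 g yields a count of 50.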
[0056] In another embodiment, when performing object recognition, at least one of a plurality of objects is placed within the area A of the platform 20; the weight weight_a of the object(s) within the area A is then determined and saved. A plurality of objects are then placed or dumped on the other area B of the platform 20. Thereafter, a picture of the entire platform 20 is taken, the weight weight_ab of the platform 20 is determined, and the picture along with the weight data weight_a and weight_ab is sent to the background algorithm for processing.
[0057] The background algorithm preferentially recognizes the object in the portion of the picture that corresponds to the area A on the platform 20, and determines in the background whether the object within the area A of the platform 20 is the desired object. The object information and the weight information weight_ab are then subjected to subsequent processing; for example, the weight data weight_ab for the entire platform 20 is sent to an order.
[0058] In still another embodiment, a plurality of objects are directly dumped on to the platform 20, while a limited number of objects (such as one, five, or eight objects) are removed from area B and placed within the area A.
[0059] A picture of the entire platform 20 or only the area A is then taken. The weight weight_ab of the entire platform 20 is determined. The picture along with the weight data weight_ab is sent to the background algorithm for processing.
[0060] The background algorithm uses the picture to recognize the object(s) placed within the area A on the platform 20 or directly recognize the object(s) in the picture of area A, extracts image features and compares them with a previously stored training model to establish a relationship, gives the object matching confidence, and then gives the object recognition result.
[0061] Once the background algorithm determines that the object within the area A of the platform 20 is the desired object, the object information and the weight information weight_ab are subjected to subsequent processing, for example, the weight data weight_ab on the entire platform 20 is sent to an order.
[0062] In order to reduce the complexity of feature extraction and comparison by the algorithm, in another embodiment the training model is also established by utilizing the area A on the platform 20. To create the model, a limited number of objects, for example one, seven, or twelve objects, is/are initially placed within the area A on the platform 20 in a specified posture and position. For example, for a part with three standing faces, only one standing face may be selected; the part is placed in the centre of the area A with its orientation perpendicular to the position of the area A, and a picture is then taken. The picture thus taken is sent to the recognition algorithm for training the model, and finally the model is created, containing information such as the object image and the presented posture.
[0063] In another embodiment, while a picture of the object is taken, the object is also weighed simultaneously, and the weight information weight_a_s of the object is obtained and sent to the recognition algorithm for model training.
[0064] In still another embodiment, the ambient information such as the light source and the shadow during the entire process of taking the picture, before or after taking the picture of the object is also sent to the recognition algorithm for model training.
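The model-building inputs described in the preceding paragraphs, namely the picture, the specified posture, the optional weight information weight_a_s, and the ambient information such as light source and shadow, can be represented as a simple per-sample record handed to the recognition algorithm. This is a minimal sketch; the field names and example values are hypothetical and not taken from this disclosure:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TrainingSample:
    """One training sample captured within object recognition area A."""
    image_path: str                      # picture taken of area A
    posture: str                         # specified posture / standing face
    weight_a_s: Optional[float] = None   # standard weight, if also weighed
    ambient: dict = field(default_factory=dict)  # e.g. light source, shadow

# Example of assembling samples for model training (values illustrative):
samples = [
    TrainingSample("part_face1.png", "standing face 1", weight_a_s=24.6,
                   ambient={"light": "overhead", "shadow": "soft"}),
]
```

Keeping the weight and ambient fields optional mirrors the embodiments above, in which the weight and the light/shadow information are sent to the recognition algorithm only in some variants.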
[0065] Although specific implementations of the present invention are described above, a person skilled in the art should understand that these are only exemplary and that the scope of protection of the present invention is defined by the attached claims. A person skilled in the art can make various changes or modifications to these implementations without departing from the principle and spirit of the present invention, and all such changes or modifications fall within the scope of protection of the present invention.
REFERENCE SIGNS LIST
[0066]
TABLE-US-00001
Area A    object recognition area
Area B    area on the platform that is outside the object recognition area