METHOD AND A DEVICE FOR ESTIMATING WEIGHT OF FOOD OBJECTS
20220026259 · 2022-01-27
CPC classification
A22C17/002
HUMAN NECESSITIES
B26D5/007
PERFORMING OPERATIONS; TRANSPORTING
A22C17/0073
HUMAN NECESSITIES
G01G9/00
PHYSICS
B26D2210/02
PERFORMING OPERATIONS; TRANSPORTING
A22C17/0086
HUMAN NECESSITIES
International classification
G01G9/00
PHYSICS
Abstract
A method of estimating weight of food objects includes training an artificial neural network software module, and using the trained artificial neural network software module to provide a weight correlated data estimate for the food object based on a three-dimensional image of the food object.
Claims
1.-21. (canceled)
22. A method of estimating weight of food objects, comprising: providing a processor with an artificial neural network software module, capturing three-dimensional (3D) training image data and associated training weight data of a plurality of training food objects by use of a 3D imaging device and a scale, training the artificial neural network software module by use of the training image data and associated weight data, and with the use of the trained artificial neural network software module: capturing a three-dimensional (3D) image of a food object by a 3D imaging device, and using the trained artificial neural network software module and the captured 3D image to provide a weight correlated data estimate for said food object.
23. The method according to claim 22, wherein the food object has a non-uniform density, and wherein the artificial neural network software module is trained to identify the non-uniform density and to provide the weight correlated data estimate based on the non-uniform density.
24. The method according to claim 22, wherein the artificial neural network software module is trained to identify a characteristic shape in the 3D image, and determine the weight correlated data estimate based on the identified shape.
25. The method according to claim 22, wherein the artificial neural network software module is trained to identify a characteristic surface texture in the 3D image, and determine the weight correlated data estimate based on the surface texture.
26. The method according to claim 22, wherein one or more air-pockets are shadowed by said food object and thereby not visible in the 3D image, and wherein the artificial neural network software module is trained to compensate for the air-pockets based on the visible shape of said food object.
27. The method according to claim 22, wherein the 3D image is captured from above the food object.
28. The method according to claim 27, wherein the 3D image is captured in a direction which is essentially perpendicular to a conveyor belt on which the food object is supported.
29. The method according to claim 22, wherein the image is captured by use of light from a laser light source, and wherein both the laser light source and the 3D imaging device are pointed downwards towards the food object.
30. The method according to claim 22, wherein the weight correlated data comprises a weight estimate.
31. The method according to claim 22, wherein the weight correlated data comprises a density estimate for the food object.
32. The method according to claim 22, comprising a step of portioning a larger food object to thereby define a plurality of food objects, and subsequently capturing the three-dimensional (3D) image of each food object by a 3D imaging device, and using the trained artificial neural network software module and the captured 3D image to provide a weight correlated data estimate for each food object.
33. The method according to claim 22, comprising a step of capturing the three-dimensional (3D) image of a larger food object by a 3D imaging device, and using the trained artificial neural network software module and the captured 3D image to provide a weight correlated data estimate for the larger food object, and subsequently portioning the larger food object to thereby define a plurality of food objects.
34. The method of claim 33, wherein the portioning of the larger food object is carried out with a weight consideration for each of the plurality of food objects, the weight consideration being based on the weight correlated data estimate for the larger food object.
35. The method according to claim 22, wherein the step of training the artificial neural network software module includes the steps of: portioning a larger food object to thereby define a plurality of food objects, acquiring a weight and a 3D image of each of the plurality of food objects, and associating the weight with the 3D image for each of the plurality of food objects in the artificial neural network software module.
36. The method according to claim 32, wherein each of said plurality of food objects is associated with position data indicating the position of the food objects within said larger food object.
37. The method according to claim 22, wherein the 3D image of each of the food objects is captured before said portioning is performed.
38. The method according to claim 22, wherein the 3D image of each food object is captured after said portioning is performed.
39. A device for providing a weight correlated data estimate for a food object, the device comprising: a 3D imaging device configured to provide three-dimensional (3D) image data of the food object, and a processor configured with an artificial neural network software module configured to output the weight correlated data estimate for said food object based on the three-dimensional image data, the artificial neural network software module being a trained software module, where the training of the artificial neural network software module is based on collected 3D image data with associated weight data for food species similar or identical to said food object.
40. The device according to claim 39, comprising a 3D imaging device positioned above the food object.
41. The device according to claim 39, comprising only one 3D imaging device.
42. The device according to claim 39, wherein the 3D imaging device is positioned such that an air-pocket may be shadowed by the food object, and wherein the artificial neural network is trained to identify the air-pocket and consider the air-pocket when determining a density and the weight correlated data estimate.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0030] Embodiments will be described, by way of example only, with reference to the drawings.
DESCRIPTION OF EMBODIMENTS
[0043] In step (S1) 201, three-dimensional (3D) image data of a food object is captured by a 3D imaging device. The 3D imaging device may comprise a digital camera, or a combination of a line laser pointed towards the food object and a camera, where the reflection of the laser light from the surface of the food object is detected by the camera and a 3D profile of the food object is created based thereon.
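As an illustration of how such a line-laser setup can yield a 3D profile, the following minimal Python sketch converts the per-column displacement of the laser line in the camera image into a height map; the camera angle, image scale, and all names are assumed values for the example only, not part of the original disclosure.

```python
import numpy as np

# Assumed geometry for this sketch: a line laser projects vertically onto the
# conveyor and a camera observes the line at a fixed angle, so the line shifts
# in the image in proportion to the height of the surface it falls on.
CAMERA_ANGLE_DEG = 30.0   # assumed angle between laser plane and camera axis
MM_PER_PIXEL = 0.5        # assumed image scale on the conveyor plane

def height_profile(belt_rows: np.ndarray, object_rows: np.ndarray) -> np.ndarray:
    """Convert per-column laser-line displacement (pixels) into heights (mm)."""
    displacement_px = belt_rows - object_rows
    return displacement_px * MM_PER_PIXEL / np.tan(np.radians(CAMERA_ANGLE_DEG))

# Stacking one profile per conveyor step yields a 3D height map of the object.
belt = np.full(640, 400.0)                     # laser row on the empty belt
scan = [height_profile(belt, belt - 20.0) for _ in range(100)]
height_map = np.stack(scan)                    # shape: (steps, columns)
```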
[0044] In step (S2) 202, a processor utilizes the captured 3D image data as input to an artificial neural network software module. As will be discussed in more detail later, the artificial neural network software module has previously been trained on food species similar or identical to said food object, based on collected 3D image data with associated weight data for said similar or identical food species.
[0045] In step (S3) 203, a weight correlated data estimate is output for said food object. The term weight correlated data may be interpreted as an actual weight estimate in grams or kilograms, or as a density estimate.
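The inference step may be sketched as follows in Python with PyTorch; the small convolutional regressor below merely stands in for the trained artificial neural network software module, and its architecture and all names are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for the trained module: a small convolutional
# regressor mapping one height map to a single weight-correlated value.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)

def estimate_weight(model: nn.Module, height_map: torch.Tensor) -> float:
    """Return a weight correlated data estimate (e.g. grams) for one height map."""
    model.eval()
    with torch.no_grad():
        x = height_map.unsqueeze(0).unsqueeze(0)   # add batch and channel dims
        return model(x).item()

weight_g = estimate_weight(model, torch.rand(100, 640))  # untrained: demo only
```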
[0047] In step (S1′) 301, the training includes capturing three-dimensional (3D) image data of a food object by a 3D imaging device, which can be any kind of imaging device, e.g. a camera, a line scanner, etc.
[0048] In step (S2′) 302, the food object is weighed by any type of weighing device, e.g. a stationary weighing device or a dynamic scale.
[0049] In step (S3′) 303, the captured 3D image data and the weighing data are used as input data, i.e. training data, for an artificial neural network software module.
[0050] Steps S1′ and S2′ are then repeated for thousands or hundreds of thousands of objects, and the data is stored.
[0051] Step S3′ is the training step, which is repeated hundreds of thousands or millions of times based on the stored data. After the training, the software module can make highly accurate weight estimates.
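A minimal sketch of this training loop follows, with random tensors standing in for the stored 3D image data and scale readings; the architecture and hyperparameters are illustrative assumptions, not the disclosed training procedure.

```python
import torch
import torch.nn as nn

# (height_map, measured_weight) pairs from the 3D imaging device and the
# scale; random tensors stand in for real captures in this sketch.
dataset = [(torch.rand(1, 100, 640), torch.rand(1) * 500) for _ in range(64)]

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Step S3' repeated over the stored data: regress the predicted weight onto
# the weight measured by the weighing device.
for epoch in range(10):
    for height_map, weight in dataset:
        optimizer.zero_grad()
        pred = model(height_map.unsqueeze(0))      # (1, 1, H, W) input
        loss = loss_fn(pred.squeeze(), weight.squeeze())
        loss.backward()
        optimizer.step()
```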
[0052] The method steps in the flowchart in FIG. 3 relate to the training of the artificial neural network software module.
[0055] In step (S1″) 601, the training includes acquiring three-dimensional (3D) image data of a food object by a 3D imaging device, which can be any kind of imaging device, e.g. a camera, a line scanner, etc.
[0056] In step (S2″) 602, the food object is cut into smaller pieces, e.g. pieces of the same thickness.
[0057] In step (S3″) 603, the smaller pieces are weighed by any type of weighing device, e.g. a stationary weighing device or a dynamic scale.
[0058] In step (S4″) 604, the captured 3D image data and the weighing data are used as input data, i.e. training data, for an artificial neural network software module.
[0059] Steps S1″ to S3″ are repeated for hundreds or thousands of objects and the data is stored. Step S4″ is the training step, which is repeated hundreds of thousands or millions of times based on the stored data. After the training, the software module can provide a highly accurate density distribution for such food objects, and thereby a highly accurate weight estimate.
[0060] The flowchart in FIG. 6 is illustrated in FIG. 7, where a larger food object 701 is imaged and cut into individual pieces 712.
[0061] Conveyor 709 conveys the individual pieces to a scale 703, where each piece 712 is weighed. Accordingly, the input data into the artificial neural network includes the 3D image of each individual piece and the associated weight. Additional input data may be position data indicating the position of the individual piece within the object 701.
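One way such a per-piece training record could be structured is sketched below; the field names, shapes, and values are illustrative assumptions only.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PieceRecord:
    """One training record for an individual piece 712 of the larger object 701."""
    height_map: np.ndarray   # 3D image data of the piece
    weight_g: float          # weight measured by the scale 703
    position_index: int      # position of the piece within the larger object 701

records = [
    PieceRecord(height_map=np.random.rand(50, 120), weight_g=87.5, position_index=i)
    for i in range(10)
]
```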
[0063] In step (S1″′) 801, a food object is cut into smaller pieces, e.g. pieces of the same thickness.
[0064] In step (S2″′) 802, three-dimensional (3D) image data of each of the smaller pieces is captured by a 3D imaging device, which can be any kind of imaging device, e.g. a camera, a line scanner, etc.
[0065] In step (S3″′) 803, the smaller pieces are weighed by any type of weighing device, e.g. a stationary weighing device or a dynamic scale.
[0066] In step (S4″′) 804, the captured 3D image data and the weighing data are used as input data, i.e. training data, for an artificial neural network software module.
[0067] Steps S2″′ and S3″′ may just as well be reversed, i.e. S3″′ may be performed prior to step S2″′.
[0068] Steps S1″′ to S3″′ are repeated for hundreds or thousands of objects and the data is stored. Step S4″′ is the training step, which is repeated hundreds of thousands or millions of times based on the stored data.
[0069] The flowchart in FIG. 8 is illustrated further in the following figures.
[0071] The same scenario is shown in a further figure.
[0074] In this embodiment, the output 1209 is then used to operate a cutting device 1211 to cut the food object into a plurality of pieces 1212, which may e.g. be portions of fixed weight. In this process, parameters like differences in density along the food object are taken into account.
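A minimal sketch of such fixed-weight portioning follows, assuming the output 1209 can be expressed as an estimated weight per unit length along the food object so that density differences are reflected; all values and names are illustrative assumptions.

```python
import numpy as np

# Assumed network output for this sketch: an estimated weight per millimetre
# along the length of the food object (here, a denser and a lighter half).
weight_per_mm = np.concatenate([np.full(200, 1.2), np.full(200, 0.9)])  # g/mm
TARGET_PORTION_G = 120.0

cut_positions_mm = []
cumulative = np.cumsum(weight_per_mm)
next_target = TARGET_PORTION_G
for pos_mm, total in enumerate(cumulative):
    if total >= next_target:
        cut_positions_mm.append(pos_mm)   # instruct cutting device 1211 to cut here
        next_target += TARGET_PORTION_G

print(cut_positions_mm)  # [99, 199, 333] for the values above
```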
[0077] While the disclosure has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the disclosure is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed disclosure, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
[0078] The disclosure further provides the following numbered embodiments:
[0079] 1. A method of estimating weights of food objects, comprising: [0080] capturing three-dimensional (3D) image data of a food object by a 3D imaging device, utilizing, by a processor, the captured 3D image data as input in an artificial neural network software module, the artificial neural network software module previously being trained for similar or identical food species as said food object, where the training of the artificial neural network software module is based on collected 3D image data with associated weight data for said similar or identical food species, and based thereon outputting a weight correlated data estimate for said food object.
[0081] 2. The method according to embodiment 1, wherein the weight correlated data comprises a weight estimate.
[0082] 3. The method according to embodiment 1, wherein the weight correlated data comprises a density estimate.
[0083] 4. The method according to any of the preceding embodiments, wherein the food object is a portion from a larger food object, such that multiple such portions define the whole larger food object.
[0084] 5. The method according to any of the preceding embodiments, wherein the step of training the artificial neural network software module includes the steps of: [0085] cutting said similar or identical food species into smaller pieces, [0086] acquiring a weight of each of the smaller pieces, and [0087] associating the weight with the volume for each of the smaller pieces.
[0088] 6. The method according to embodiment 5, wherein each of said smaller pieces is associated with position data indicating the position of the smaller food pieces within said similar or identical food species.
[0089] 7. The method according to any of the preceding embodiments, wherein the volume of each of the smaller pieces is determined before said cutting is performed.
[0090] 8. The method according to any of the preceding embodiments, wherein the volume of each of the smaller pieces is captured after said cutting is performed.