METHOD AND A DEVICE FOR ESTIMATING WEIGHT OF FOOD OBJECTS

20220026259 · 2022-01-27

    Abstract

    A method of estimating weight of food objects includes training an artificial neural network software module, and using the trained artificial neural network software module to provide a weight correlated data estimate for a food object based on a three-dimensional image of the food object.

    Claims

    1.-21. (canceled)

    22. A method of estimating weight of food objects, comprising: providing a processor with an artificial neural network software module, capturing three-dimensional (3D) training image data and associated training weight data of a plurality of training food objects by use of a 3D imaging device and a scale, training the artificial neural network software module by use of the training image data and associated weight data, and with the use of the trained artificial neural network software module: capturing a three-dimensional (3D) image of a food object by a 3D imaging device, and using the trained artificial neural network software module and the captured 3D image to provide a weight correlated data estimate for said food object.

    23. The method according to claim 22, wherein the food object has a non-uniform density, and wherein the artificial neural network software module is trained to identify the non-uniform density and to provide the weight correlated data estimate based on the non-uniform density.

    24. The method according to claim 22, wherein the artificial neural network software module is trained to identify a characteristic shape in the 3D image, and determine the weight correlated data estimate based on the determined shape.

    25. The method according to claim 22, wherein the artificial neural network software module is trained to identify a characteristic surface texture in the 3D image, and determine the weight correlated data estimate based on the surface texture.

    26. The method according to claim 22, wherein one or more air-pockets are shadowed by said food object and thereby not visible in the 3D image, and wherein the artificial neural network software module is trained to compensate for the air-pockets based on the visible shape of said food object.

    27. The method according to claim 22, wherein the 3D-image is captured from above the food objects.

    28. The method according to claim 27, wherein the 3D-image is captured in a direction which is essentially perpendicular to a conveyor belt on which the food objects are supported.

    29. The method according to claim 22, wherein the image is captured by use of light from a laser light source, and wherein both the laser light source and the 3D imaging device are pointed downwards towards the food objects.

    30. The method according to claim 22, wherein the weight correlated data comprises a weight estimate.

    31. The method according to claim 22, wherein the weight correlated data comprises a density estimate for the food object.

    32. The method according to claim 22, comprising a step of portioning a larger food object to thereby define a plurality of food objects, and subsequently capturing the three-dimensional (3D) image of each food object by a 3D imaging device, and using the trained artificial neural network software module and the captured 3D image to provide a weight correlated data estimate for each food object.

    33. The method according to claim 22, comprising a step of capturing the three-dimensional (3D) image of a larger food object by a 3D imaging device, and using the trained artificial neural network software module and the captured 3D image to provide a weight correlated data estimate for the larger food object, and subsequently portioning the larger food object to thereby define a plurality of food objects.

    34. The method of claim 33, wherein the portioning of the larger food object is carried out with a weight consideration for each of the plurality of food objects, the weight consideration being based on the weight correlated data estimate for the larger food object.

    35. The method according to claim 22, wherein the step of training the artificial neural network software module includes the steps of: portioning a larger food object to thereby define a plurality of food objects, acquiring a weight and a 3D image of each of the plurality of food objects, and associating the weight with the 3D image for each of the plurality of food objects in the artificial neural network software module.

    36. The method according to claim 32, wherein each of said plurality of food objects is associated with position data indicating the position of the food objects within said larger food object.

    37. The method according to claim 32, wherein the 3D image of each of the food objects is captured before said portioning is performed.

    38. The method according to claim 32, wherein the 3D image of each food object is captured after said portioning is performed.

    39. A device for providing a weight correlated data estimate for a food object, the device comprising: a 3D imaging device configured to provide three-dimensional (3D) image data of the food object, and a processor configured with an artificial neural network software module configured to output the weight correlated data estimate for said food object based on the three-dimensional image data, the artificial neural network software module being a trained software module, where the training of the artificial neural network software module is based on collected 3D image data with associated weight data for food species similar or identical to said food object.

    40. The device according to claim 39, comprising a 3D imaging device positioned above the food object.

    41. The device according to claim 39, comprising only one 3D imaging device.

    42. The device according to claim 39, wherein the 3D imaging device is positioned such that an air-pocket may be shadowed by the food object, and wherein the artificial neural network is trained to identify the air-pocket and consider the air-pocket when determining a density and the weight correlated data estimate.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0030] Embodiments will be described, by way of example only, with reference to the drawings, in which

    [0031] FIG. 1 illustrates graphically a prior art method of obtaining a 3D image profile of a food object to be used as input for estimating the weight of the food object,

    [0032] FIG. 2 shows a flowchart of one embodiment of a method according to the present disclosure for estimating weights of food objects,

    [0033] FIG. 3 shows a flowchart of one embodiment of training the artificial neural network software module,

    [0034] FIGS. 4 and 5 illustrate graphically a training setup discussed in relation to the flowchart in FIG. 3,

    [0035] FIG. 6 is a flowchart of another embodiment for training the artificial neural network software module,

    [0036] FIG. 7 illustrates graphically a training setup discussed in relation to the flowchart in FIG. 6,

    [0037] FIG. 8 is a flowchart of yet another embodiment for training the artificial neural network software module,

    [0038] FIGS. 9 and 10 illustrate graphically a training setup discussed in relation to the flowchart in FIG. 8,

    [0039] FIG. 11 illustrates graphically an implementation of the method according to the present disclosure to weigh an object,

    [0040] FIG. 12 illustrates graphically an implementation of the method according to the present disclosure in cutting a food object into smaller pieces,

    [0041] FIGS. 13 and 14 show experimental results of the method according to the present disclosure.

    DESCRIPTION OF EMBODIMENTS

    [0042] FIG. 2 shows a flowchart of one embodiment of a method according to the present disclosure for estimating weights of food objects. The food objects may according to the present disclosure be understood as any type of food objects, such as, but not limited to, whole fish objects or pieces of fish objects, whole meat objects such as loin or pieces of meat such as pieces of loin, whole poultry objects or pieces of poultry objects, etc.

    [0043] In step (S1) 201, three dimensional (3D) image data of a food object is captured by a 3D imaging device. The 3D imaging device may comprise a digital camera, or a combination of a line laser pointed towards the food object and a camera, where the reflection of light from the surface of the food object is detected by the camera, and where, based thereon, a 3D profile of the food object is created.
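    The line-laser arrangement above can be sketched with a simple triangulation relation: the lateral shift of the laser line in the camera image is proportional to surface height. This is a minimal illustrative sketch only; the function name and calibration constants (`mm_per_pixel`, `camera_angle_deg`) are assumptions, not values from the disclosure.

```python
import math

# Hedged sketch of line-laser triangulation: the observed pixel shift of the
# laser line maps to a surface height via assumed calibration constants.
def height_from_shift(pixel_shift, mm_per_pixel=0.2, camera_angle_deg=30.0):
    """Convert the laser-line shift (pixels) to an approximate height (mm)."""
    return pixel_shift * mm_per_pixel / math.tan(math.radians(camera_angle_deg))

# Usage: a 10-pixel shift under these assumed constants gives roughly 3.46 mm.
h = height_from_shift(10)
```

    Scanning the object past the laser line and stacking these per-line heights yields the 3D profile referred to in the text.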

    [0044] In step (S2) 202, a processor utilizes the captured 3D image data as input in an artificial neural network software module. As will be discussed in more detail later, the artificial neural network software module has previously been trained for similar or identical food species as said food objects, based on collected 3D image data with associated weight data for said similar or identical food species.

    [0045] In step (S3) 203, a weight correlated data estimate is outputted for said food object. The term weight correlated data may be interpreted as an actual weight estimate in grams or kilograms, or as a density estimate.
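    Steps S1-S3 can be sketched as a small pipeline: a captured height map is fed to a previously trained model that outputs a weight correlated estimate. The one-parameter linear "model" below is only a stand-in for the trained artificial neural network software module, and all names are hypothetical.

```python
# Illustrative sketch of steps S1-S3: height map in, weight estimate out.
def integrate_volume(height_map, cell_area=1.0):
    """Approximate object volume by summing per-pixel heights (step S1 output)."""
    return sum(h * cell_area for row in height_map for h in row)

class TrainedWeightModel:
    """Stand-in for the trained ANN module of steps S2-S3."""
    def __init__(self, grams_per_volume_unit):
        self.k = grams_per_volume_unit  # parameter fixed during training

    def estimate_weight(self, height_map):
        return self.k * integrate_volume(height_map)

# Usage: a 2x3 height map and an assumed learned density of 1.05 g per unit volume.
model = TrainedWeightModel(grams_per_volume_unit=1.05)
weight = model.estimate_weight([[1.0, 2.0, 1.0], [0.5, 1.5, 0.5]])
```

    A real network would replace the fixed density with a learned mapping that can account for shape, surface texture and non-uniform density, as the claims describe.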

    [0046] FIG. 3 shows a flowchart of one embodiment of training the artificial neural network software module in estimating a weight of a food object.

    [0047] In step (S1′) 301, the training includes capturing three dimensional (3D) image data of a food object by a 3D imaging device, which can be any kind of imaging device, e.g. a camera, a line scanner, etc.

    [0048] In step (S2′) 302 the food object is weighed by any type of a weighing device, e.g. a stationary weighing device or a dynamic scale.

    [0049] In step (S3′) 303, the captured 3D image data and the weighing data are used as input data, i.e. training data, for an artificial neural network software module.

    [0050] Steps S1′ and S2′ are then repeated for thousands or hundreds of thousands of objects, and the data is stored.

    [0051] Step S3′ is the training step, which is repeated hundreds of thousands or millions of times based on the stored data. After the training, the software module can make highly accurate weight estimates.

    [0052] The method steps in the flowchart in FIG. 3 are illustrated graphically in FIGS. 4 and 5. FIG. 4 shows a food object 401 being conveyed by a conveyor 402 past a 3D imaging device 404, 405, which may comprise a line scanner and a camera. The camera captures the light emitted by the line scanner and reflected from the object, and based thereon a processor generates the 3D profile image. The object 401 is subsequently weighed by a weighing device 403, and these data are, as discussed, used for training the artificial neural network software module.

    [0053] FIG. 5 depicts the same scenario as FIG. 4, but with the order reversed, i.e. the object is first weighed and the 3D profile image is captured subsequently.

    [0054] FIG. 6 depicts in a flowchart another embodiment of training the artificial neural network software module in estimating a weight and/or weight density distribution of an object.

    [0055] In step (S1″) 601, the training includes acquiring three dimensional (3D) image data of a food object by a 3D imaging device, which can be any kind of imaging device, e.g. a camera, a line scanner, etc.

    [0056] In step (S2″) 602, the food object is cut into smaller pieces, e.g. pieces of the same thickness.

    [0057] In step (S3″) 603, the smaller pieces are weighed by any type of a weighing device, e.g. a stationary weighing device or a dynamic scale.

    [0058] In step (S4″) 604, the captured 3D image data and the weighing data are used as input data, i.e. training data, for an artificial neural network software module.

    [0059] Steps S1″ to S3″ are repeated for hundreds or thousands of objects, and the data is stored. Step S4″ is the training step, which is repeated hundreds of thousands or millions of times based on the stored data. After the training, the software module can produce a highly accurate density distribution for such food objects, and thereby a highly accurate weight estimate.

    [0060] The flowchart in FIG. 6 is illustrated graphically in FIG. 7, showing a food object 701 being conveyed by a conveyor 702 past a 3D imaging device 704, 705, which may comprise a line scanner and a camera. The camera captures the light emitted by the line scanner and reflected from the object, and based thereon a processor generates the 3D profile image. The object 701 is subsequently cut into pieces 710, e.g. by a rotating cutting blade 711 using the 3D image as input data, where the cutting may include cutting the food object into pieces of identical thickness. Other criteria may also be implemented.

    [0061] Conveyor 709 conveys the individual pieces to a scale 703 where each piece 712 is weighed. Accordingly, the input data into the artificial neural network includes the 3D image of each individual piece and the associated weight. Additional input data may be position data indicating the position of the individual piece within the object 701.
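    One way such per-piece training records might be assembled is sketched below: each record pairs the piece's 3D image with its measured weight, plus optional position data locating the piece within the original object 701. The record layout and names are illustrative assumptions, not prescribed by the disclosure.

```python
# Hedged sketch of a per-piece training record (FIG. 7): piece image, measured
# weight from scale 703, and normalized position within the original object.
def make_training_record(piece_image, weight_grams, piece_index, total_pieces):
    return {
        "image": piece_image,                    # 3D profile of the piece
        "weight_g": weight_grams,                # measured weight of the piece
        "position": piece_index / total_pieces,  # 0.0 = first cut, approaching 1.0 = last
    }

# Usage: the third of five pieces cut from one object.
rec = make_training_record([[0.5, 0.7]], 120.0, piece_index=2, total_pieces=5)
```

    Including the position field lets a network associate different densities with different regions of the object, as the description later suggests for fish fillets.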

    [0062] FIG. 8 depicts by a flowchart another embodiment of training the artificial neural network software module in estimating a weight and/or weight density distribution of an object.

    [0063] In step (S1″′) 801, a food object is cut into smaller pieces of e.g. the same thickness.

    [0064] In step (S2″′) 802, three dimensional (3D) image data of each of the smaller pieces is captured by a 3D imaging device, which can be any kind of imaging device, e.g. a camera, a line scanner, etc.

    [0065] In step (S3″′) 803, the smaller pieces are weighed by any type of a weighing device, e.g. a stationary weighing device or a dynamic scale.

    [0066] In step (S4″′) 804, the captured 3D image data and the weighing data are used as input data, training data, for an artificial neural network software module.

    [0067] Steps S2″′ and S3″′ may just as well be reversed, i.e. S3″′ may be performed prior to step S2″′.

    [0068] Steps S1″′ to S3″′ are repeated for hundreds or thousands of objects and stored. Step S4″′ is the training step, which is repeated hundreds of thousands or millions of times based on the stored data.

    [0069] The flowchart in FIG. 8 is illustrated graphically in FIGS. 9 and 10, showing a food object 901 being cut into pieces 910, e.g. by a rotating cutting blade 911, where the pieces may have identical thickness (other criteria may just as well be implemented).

    [0070] In FIG. 9, the cut pieces pass a 3D imaging device 904, 905 positioned above conveyor 909, where the imaging device comprises a line scanner and a camera. The camera captures the light emitted by the line scanner and reflected from the pieces, and based thereon a processor generates the 3D profile image of the individual pieces. Conveyor 909, e.g. the same conveyor as 902, then conveys each individual piece to a scale 903 where it is weighed.

    [0071] The same scenario is shown in FIG. 10, except that the weighing 903 takes place before the 3D profile image of the individual pieces is generated.

    [0072] FIG. 11 illustrates graphically an implementation of the method according to the present disclosure in estimating the weight of an incoming food object 1101. As shown here, the object is conveyed by a conveyor 1102, and passes the 3D imaging device 1104, 1105, where the resulting 3D profile image is used as an input 1106 in the trained artificial neural network software module 1107 operated via a computer device 1108, where the training has been performed in line with the method discussed previously. The output 1109 is the weight estimate or the density estimate of the food object.

    [0073] FIG. 12 illustrates graphically an embodiment similar to the embodiment shown in FIG. 11, where the incoming food object 1101 is conveyed by the conveyor 1102 and passes the 3D imaging device 1104, 1105. The resulting 3D profile image is used as an input 1106 in the trained artificial neural network software module 1107 operated via the computer device 1108, where the training has been performed as discussed previously.

    [0074] In this embodiment, the output 1209 is then used to operate a cutting device 1211 to cut the food object into a plurality of pieces 1212, which may e.g. be portions of fixed weight. In this process, parameters such as differences in density along the food object are taken into account.
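    How an estimated weight distribution might drive fixed-weight portioning can be sketched as follows: given an estimated weight per unit length along the object (reflecting density differences, e.g. tail versus head), cut positions are placed where the accumulated weight reaches the target. The greedy scheme and all names are illustrative assumptions, not the patented control method.

```python
# Hedged sketch of fixed-weight portioning driven by a per-slice weight profile.
def cut_positions(weight_per_slice, target_g):
    """weight_per_slice: estimated weight of each unit-length slice, head to tail.
    Returns the slice indices after which to cut, so each completed portion
    weighs at least target_g (the remainder forms the final portion)."""
    cuts, acc = [], 0.0
    for i, w in enumerate(weight_per_slice):
        acc += w
        if acc >= target_g and i < len(weight_per_slice) - 1:
            cuts.append(i)   # place a cut after slice i
            acc = 0.0
    return cuts

# Usage: a denser tail region makes portions shorter towards the tail.
profile = [10, 10, 10, 15, 15, 20, 20]   # estimated grams per slice
cuts = cut_positions(profile, target_g=30)   # cuts after slices 2 and 4
```

    A production system would also weigh portion-size tolerances and give-away against cut count, but the sketch shows why a density-aware weight profile, rather than volume alone, is needed for fixed-weight portions.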

    [0075] FIG. 13 shows experimental histogram results of around 10,000 food object samples used as training data in training the artificial neural network software module, which may also be referred to as "the model". The distribution indicated by the dotted line shows results based on weight estimates of the fish fillets using only 3D image data (prior art), where the reference is the actual weight of the fish fillets. The distribution indicated by the solid line shows the weight estimates using the trained artificial neural network software module, again with the actual weight as reference. The results make clear that the artificial neural network software module gives significantly better weight predictions than are obtained in its absence.

    [0076] The table in FIG. 14 shows the mean and the standard deviation of the 10,000 test samples both for the artificial neural network software module, referred to as the CNN model, and in the absence of the module, referred to as "Baseline". It can be seen that the artificial neural network software module's standard deviation represents a reduction of approximately 44% compared to weighing using only the 3D profile. As is well known to a person skilled in the art, an artificial neural network software module is in a way a black box, due to the very large number of parameters that are fitted to map the relationship between the input and the output. It is therefore hard to determine which features in the input image have a large influence on the output weight. However, there are some indications of what the model is able to extract from the image. One factor, when the input data relates to fish fillets, might be a generalized fillet shape, which would reduce the effect of missing areas of information in the scan. Another factor might be that the model has generalized different densities to different areas of the fillet, e.g. a higher density in the tail region compared to the head region.

    [0077] While the disclosure has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the disclosure is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed disclosure, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

    [0078] The disclosure further provides the following numbered embodiments:

    [0079] 1. A method of estimating weights of food objects, comprising: [0080] capturing three dimensional (3D) image data of a food object by a 3D imaging device, utilizing, by a processor, the captured 3D image data as input in an artificial neural network software module, the artificial neural network software module previously being trained for similar or identical food species as said food objects, where the training of the artificial neural network software module is based on collected 3D image data with associated weight data for said similar or identical food species, and based thereon outputting a weight correlated data estimate for said food object.

    [0081] 2. The method according to embodiment 1, wherein the weight correlated data comprises a weight estimate.

    [0082] 3. The method according to embodiment 1, wherein the weight correlated data comprises a density estimate.

    [0083] 4. The method according to any of the preceding embodiments, wherein the food object is a portion from a larger food object such that multiple such portions define the whole larger food object.

    [0084] 5. The method according to any of the preceding embodiments, wherein the step of training the artificial neural network software module includes the steps of: [0085] cutting said similar or identical food species into smaller pieces, [0086] acquiring the weight of each of the smaller pieces, and [0087] associating the weight with the volume for each of the smaller pieces.

    [0088] 6. The method according to embodiment 5, wherein each of said smaller pieces is associated with position data indicating the position of the smaller pieces within said similar or identical food species.

    [0089] 7. The method according to any of the preceding embodiments, wherein the volume of each of the smaller pieces is determined before said cutting is performed.

    [0090] 8. The method according to any of the preceding embodiments, wherein the volume of each of the smaller pieces is captured after said cutting is performed.