Device and method for the classification of a food item of an organic or irregular structure

09977988 · 2018-05-22

Abstract

An apparatus for classifying a food item of organic or irregular structure includes an image capture unit, a data input unit, and a data output unit. An evaluation unit is connected to said image capture unit, to said data input unit and to said data output unit. The image capture unit captures the food item as optical data and provides the optical data in transmittable form for transmission to the evaluation unit. The evaluation unit extracts feature values of the food item from the optical data. The feature values are combined to form a feature value tuple for the food item. The feature value tuple is automatically assignable to a feature value tuple range. The feature value tuple range is formed by one or more feature value tuples. The feature value tuple range is assigned a class. The data input unit is used to perform an assignment of the class to the feature value tuple range.

Claims

1. An apparatus for classifying a food item of organic or irregular structure, comprising: an image capture unit; a data input unit; and a data output unit; an evaluation unit connected to said image capture unit, to said data input unit and to said data output unit; said image capture unit capturing the food item as optical data and providing the optical data in transmittable form for transmission to the evaluation unit, said evaluation unit for extracting feature values of the food item from the optical data, the feature values for being combined to form a feature value tuple for the food item, the feature value tuple of the food item being automatically assignable to a feature value tuple range, the feature value tuple range being formed by one or more feature value tuples, the feature value tuple range for being assigned a class; said data input unit being used for performing an assignment of the class to the feature value tuple range; said evaluation unit being used for providing a categorization for the feature value tuple range into a core range and a marginal range, the core range having a higher probability of correct assignment of the feature value tuple of the food item to the feature value tuple range than in the marginal range; the categorization of the feature value tuple range being stipulated by a confidence threshold value, the confidence threshold value for stipulating a magnitude for the core range, and said data input unit setting the confidence threshold value under user control using a graphical slider or by inputting a specific numerical value; said data output unit outputting a recognition rate and a core range assignment rate, as dependent magnitudes of the confidence threshold value, the automatic assignment of the feature value tuple of the food item to the feature value tuple range being provided in a manner separated according to the core range and the marginal range, and the result of the automatic assignment of the feature value tuple of the food item to the feature value tuple range being output specifying the assigned class, as an assignment result, with an assignment to the core range, as a core range assignment, or with an assignment to the marginal range, as a marginal range assignment.

2. The apparatus as claimed in claim 1, wherein the core range assignment is used to provide an automatic classification.

3. The apparatus as claimed in claim 1, wherein the marginal range assignment is used to output a classification as not or not reliably able to be performed.

4. The apparatus as claimed in claim 1, wherein the automatic assignment of the feature value tuple of the food item to the feature value tuple range is monitored and correctable under user control.

5. The apparatus as claimed in claim 1, wherein, when there is a plurality of assignment results for different food items, it is possible to provide an output of the assignment result in organized fashion on the basis of a measure of confidence.

6. The apparatus as claimed in claim 1, wherein the optical data and an associated assignment result are archived.

7. The apparatus as claimed in claim 1, wherein said image capture unit is used to capture the food item as optical data on a transport system and the assignment result is provided in real time and the assignment result is used to control external units.

8. A method for classifying a food item, of organic or irregular structure, with an image capture unit, an evaluation unit, a data input unit and a data output unit, the evaluation unit being connected to the image capture unit, to the data input unit and to the data output unit, the method comprising: a) capturing the food item as optical data with the image capture unit; b) transmitting the optical data to the evaluation unit; c) extracting feature values of the food item from the optical data with the evaluation unit; d) combining the feature values of the food item for forming a feature value tuple for the food item via the evaluation unit; e) assigning a class to a feature value tuple range with the data input unit, the feature value tuple range being formed from one or more feature value tuples; f) categorizing the feature value tuple range into a core range and a marginal range by inputting a confidence threshold value, determining a magnitude of the core range by user-controlled setting of the confidence threshold value with the data input unit using a graphical slider or by inputting a specific numerical value; g) outputting a recognition rate and a core range assignment rate as dependent magnitudes of the confidence threshold value with the data output unit; h) automatically assigning the feature value tuple of the food item to the feature value tuple range, making the assignment either to the core range or to the marginal range; and i) outputting the result of the assignment of the feature value tuple of the food item to the feature value tuple range as an assignment result, specifying the assigned class with the data output unit, outputting the assignment result with an assignment to the core range, as a core range assignment, or with an assignment to the marginal range, as a marginal range assignment.

9. The method as claimed in claim 8, wherein the core range assignment prompts an automatic classification to take place.

10. The method as claimed in claim 8, wherein the marginal range assignment prompts a classification to be output as not or not reliably able to be performed.

11. The method as claimed in claim 8, further comprising, after method step i), user-controlled monitoring and performing optional correction of the automatic assignment of the feature value tuple of the food item to the feature value tuple range.

12. The method as claimed in claim 8, wherein when there is a plurality of assignment results for different food items, the assignment results are output in organized fashion, on the basis of a measure of confidence.

13. The method as claimed in claim 8, further comprising archiving the optical data and the associated assignment result.

14. The apparatus as claimed in claim 1, wherein said data output unit outputs the recognition and core range assignment rates on the basis of a graph in which the recognition rate is represented as a dependent magnitude of the core range assignment rate, and vice versa.

15. The apparatus as claimed in claim 1, wherein said data output unit outputs the recognition and core range assignment rates for assigning a respective confidence threshold value as an input table or selectable magnitude.

16. The method as claimed in claim 8, further comprising outputting the recognition and core range assignment rates on the basis of a graph in which the recognition rate is represented as a dependent magnitude of the core range assignment rate, and vice versa.

17. The method as claimed in claim 8, further comprising outputting the recognition and core range assignment rates for assigning a respective confidence threshold value as an input table or selectable magnitude.

Description

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING

(1) The invention is explained in more detail below as an exemplary embodiment of an apparatus for classifying animal carcass items with reference to:

(2) FIG. 1 a basic illustration with a photographic camera,

(3) FIG. 2 a basic illustration with a photographic and depth camera,

(4) FIG. 3 a graphical representation of core range/recognition rate on the basis of the measure of confidence,

(5) FIG. 4 a graphical representation of recognition rate on the basis of the core range assignment rate.

DESCRIPTION OF THE INVENTION

(6) An apparatus according to the invention for classifying a food item 1, in the present case as an animal carcass item, is presented as part of what is known as an identification point in the exemplary embodiment shown here. In this connection, the identification point is a station at which item-related data of the food item 1 are ascertained and provided for further processing of the food item 1.

(7) Accordingly, the identification point additionally has associated control elements for the actuation of further devices distributing or carrying out further processing on the food item 1.

(8) According to the invention, the apparatus for classifying the food item 1 has an image capture unit 2, an evaluation unit 3, a data input unit 4 and a data output unit 5.

(9) In this case, the evaluation unit 3 is connected to the image capture unit 2 by a connection 6.1, to the data input unit 4 and to the data output unit 5 by a connection 6.2.

(10) Furthermore, the image capture unit 2 shown in FIG. 1 is formed by a photographic camera 2.1, embodied as a color camera in the present case, with a photographic camera capture range 7.

(11) The data input unit 4 and the data output unit 5 are additionally formed by a respective display and are combined in one unit.

(12) The image capture unit 2 can be used to capture the food item 1 as optical data.

(13) In the present embodiment shown in FIG. 1, the optical data are discrete image points from the food item 1 that are captured by the photographic camera.

(14) Furthermore, the optical data are provided by the image capture unit 2 so as to be able to be transmitted to the evaluation unit 3.

(15) According to the invention, the evaluation unit 3 can be used to extract feature values from the optical data.

(16) By way of example, the extractable feature values are color values for the food item 1, histogram features from various color channels or edge features of the food item 1.

(17) According to the invention, the evaluation unit 3 is additionally capable of combining the extracted feature values of the food item 1 to form a feature value tuple for the food item 1.

(18) In the present exemplary embodiment, the feature values are combined using vector formation, wherein a feature value tuple is a feature value vector and hence the food item 1 is represented by this specific feature value vector.
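The formation of a feature value vector from extracted feature values can be sketched as follows. This is an illustrative sketch only: the patent does not disclose concrete feature extractors, so the chosen features (mean color per channel and a coarse brightness histogram, standing in for the color, histogram and edge features named above) and the function name `extract_feature_tuple` are assumptions.

```python
import numpy as np

def extract_feature_tuple(image):
    """Combine simple feature values of a food item image into one
    feature value vector (the 'feature value tuple' of the text).

    `image` is assumed to be an H x W x 3 uint8 color image. The
    features computed here are illustrative stand-ins only."""
    image = np.asarray(image, dtype=np.float64)
    mean_color = image.mean(axis=(0, 1))          # 3 mean color values
    gray = image.mean(axis=2)                     # crude brightness map
    hist, _ = np.histogram(gray, bins=8, range=(0, 256), density=True)
    # vector formation: the food item is represented by this one vector
    return np.concatenate([mean_color, hist])
```

Every captured food item is thereby mapped to a single point in an n-dimensional feature space, which is the representation the subsequent range stipulation operates on.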

(19) According to the invention, the data input unit 4 is used to assign a class, in the present case an animal carcass class, to a feature value tuple range. Such a feature value tuple range has been formed, according to the invention, by means of a mathematical algorithm, subsequently called a range stipulation algorithm, on the basis of at least one feature value tuple. This means that a feature value tuple range has been defined by the range stipulation algorithm, for example on the basis of a single representative feature value tuple and hence on the basis of a single food item 1 or on the basis of a plurality of representative feature value tuples and hence on the basis of a plurality of food items.

(20) In this connection, the underlying range stipulation algorithm defines the interfaces of the respective feature value tuple range with respect to the interfaces of further feature value tuple ranges.

(21) In other words, the range stipulation algorithm uses the interfaces to stipulate an envelope that defines and bounds the feature value tuple range around the at least one feature value tuple.

(22) In the present exemplary embodiment, the feature value tuple range is present in an n-dimensional feature space, wherein the feature value tuple of the food item 1 is entered in this feature space as a feature value vector.

(23) The food item 1 is therefore represented in the feature space by the entered feature value vector.

(24) Furthermore, the feature space contains further feature value tuple ranges that are defined by means of the range stipulation algorithm on the basis of representative numbers of feature value tuples of further food items in other classes.
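The patent leaves the range stipulation algorithm itself undisclosed. As a minimal stand-in, each class's feature value tuple range can be summarized by the centroid of its representative feature value tuples, with the boundaries between ranges implied by a nearest-centroid rule; the class `RangeStipulation` and its method names are hypothetical.

```python
import numpy as np

class RangeStipulation:
    """Illustrative stand-in for the range stipulation algorithm.

    Each class's feature value tuple range is represented by the
    centroid of its representative feature value tuples; the envelope
    between neighboring ranges is implied by the nearest-centroid
    rule rather than computed explicitly."""

    def __init__(self):
        self.centroids = {}  # class label -> centroid vector

    def fit(self, tuples_by_class):
        """Form one feature value tuple range per class from a
        representative number of feature value tuples."""
        for label, tuples in tuples_by_class.items():
            self.centroids[label] = np.mean(tuples, axis=0)

    def assign(self, feature_tuple):
        """Return the class of the range the tuple falls into."""
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(feature_tuple - self.centroids[c]))
```

A range defined from a single representative tuple, as the text allows, simply has that tuple as its centroid.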

(25) In the present case, the class is input by a user of the apparatus as an item number with an associated class name using the data input unit 4.

(26) According to the invention, it does not matter in this case whether the assignment of the class to the feature value tuple range is performed before the assignment of the respective feature value tuple or only following the assignment of the feature value tuple. For practical use, however, it will usually make sense to make the feature value tuple range nameable straight away by allocating a class name or item number.

(27) One essential feature is that the evaluation unit 3 can be used to provide categorization of the feature value tuple range into a core range and into a marginal range.

(28) In this case, the core range is the section of the feature value tuple range in which, when there are a plurality of feature value tuples for the same type of food item, there is a higher density of feature value tuples than in the marginal range, or in which the interval between the assigned feature value tuples and further feature value tuples from other feature value tuple ranges is relatively great.

(29) According to the invention, the feature value tuple range is categorized by the evaluation unit 3 on the basis of a confidence threshold value.

(30) This confidence threshold value is a stipulated value for what is known as a measure of confidence, the measure of confidence being a measure of the trustworthiness of an automatic assignment result provided by the apparatus.

(31) In the present exemplary embodiment, the measure of confidence is formed by a further mathematical algorithm, subsequently called a measure-of-confidence algorithm.

(32) Accordingly, the confidence threshold value in the present case indicates a specific value for the measure of certainty of a correct automatic assignment of the feature value tuple of the food item 1 to the corresponding feature value tuple range.
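Since the measure-of-confidence algorithm is likewise not disclosed, the following sketch uses one plausible margin-based measure: how much closer the tuple lies to its best-matching class centroid than to the runner-up. The representation of ranges by centroids, the assumption of at least two classes, and the function names are all illustrative assumptions.

```python
import numpy as np

def confidence(feature_tuple, centroids):
    """Illustrative measure of confidence in [0, 1]: the relative
    margin between the distances to the nearest and second-nearest
    class centroids (assumes at least two classes)."""
    d = sorted(np.linalg.norm(feature_tuple - c) for c in centroids.values())
    if d[1] == 0:
        return 0.0
    return 1.0 - d[0] / d[1]

def categorize(feature_tuple, centroids, confidence_threshold):
    """Core range if the measure of confidence reaches the user-set
    confidence threshold value, marginal range otherwise."""
    if confidence(feature_tuple, centroids) >= confidence_threshold:
        return "core"
    return "marginal"
```

A tuple equidistant from two class centroids gets confidence 0 and always lands in the marginal range; a tuple deep inside one range approaches confidence 1.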

(33) According to the invention, the confidence threshold value and hence the measure of confidence can be set under user control by means of the data input unit 4.

(34) In this case, the setting of the confidence threshold value using the data input unit 4 is effected in the present case, as shown in FIG. 4, indirectly using a selectable operating point from a graphically represented function of a recognition rate on the basis of a core range assignment rate.

(35) According to the invention, the recognition rate in this case indicates the probability of a quantity of correct assignments of feature value tuples to the corresponding feature value tuple range, based on a total quantity of performed assignments.

(36) By contrast, the core range assignment rate is a measure of the magnitude of the core range, based on the feature value tuple range.

(37) As FIG. 3 shows, the recognition rate and the core range assignment rate are, according to the invention, inversely related to one another; this means that a small measure of confidence prompts a low recognition rate and at the same time a high core range assignment rate.

(38) If the confidence threshold value and hence the measure of confidence are chosen to be low, the result of the automatic assignment of the feature value tuple to a feature value tuple range that is performed by the apparatus according to the invention can be regarded as less trustworthy, since the core range, which is stipulated to be large, with the resultant high core range assignment rate means that the probability of incorrect assignments is comparatively high. At the same time, the apparatus would perform a large quantity of automatic assignments in this case.

(39) If, by contrast, a high confidence threshold value and hence a large measure of confidence are set, this results in a high recognition rate and in a low core range assignment rate.

(40) In the case of a configuration in which core range assignments are carried out as automatic assignments, this case would, in other words, involve the quantity of assignments performed automatically by the apparatus falling, but at the same time the trustworthiness of the automatically performed assignments rising, since there is only a correspondingly low core range assignment rate.

(41) This results in the technological advantage of the invention that an owner or a user of the apparatus can use the confidence threshold value to stipulate with what potential certainty of correct automatic assignments and hence how autonomously the apparatus is meant to operate. In practical handling, the stipulation can be made on the basis of a recognition rate, in which case the core range assignment rate is output as a dependent magnitude, or can be made on the basis of a core range assignment rate, in which case the recognition rate is output as a dependent magnitude.
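The two dependent magnitudes of the confidence threshold value can be computed on a labelled validation set roughly as follows; the pair representation of the results and the fallback value for an empty core range are assumptions for the sketch.

```python
def rates_for_threshold(results, confidence_threshold):
    """Compute recognition rate and core range assignment rate for a
    chosen confidence threshold value.

    `results` is a list of (measure_of_confidence, was_correct)
    pairs from a labelled validation run (illustrative layout).

    - core range assignment rate: share of items whose confidence
      reaches the threshold and would be assigned automatically;
    - recognition rate: share of correct assignments among exactly
      those core-range items."""
    core = [(c, ok) for c, ok in results if c >= confidence_threshold]
    core_rate = len(core) / len(results)
    recognition = (sum(ok for _, ok in core) / len(core)) if core else 1.0
    return recognition, core_rate
```

Sweeping the threshold over such a result list yields exactly the trade-off curve of FIG. 4: raising the threshold shrinks the core range assignment rate while the recognition rate among the remaining automatic assignments rises.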

(42) In addition, according to the invention, the evaluation unit 3 is capable of assigning the feature value tuple of the food item 1 to the feature value tuple range automatically. In the present case, the assignment is made separately according to core or marginal range, so that the feature value tuple of the food item 1 is assigned either to the core range or to the marginal range by the evaluation unit 3.

(43) According to the invention, the assignment of the feature value tuple of the food item 1 to the feature value tuple range concludes the classification process.

(44) According to the invention, the result of an assignment, performed by the apparatus, of the feature value tuple of the food item 1 to the respective feature value tuple range is output by the data output unit 5 as an assignment result.

(45) As a particular advantage of the invention, the assignment result is output either as an assignment to the core range or as an assignment to the marginal range, with, in a particularly advantageous variant embodiment of the invention, assignment to the core range, as core range assignment, being performed automatically by the apparatus and with assignment to the marginal range, as marginal range assignment, being output by the apparatus as not or alternatively not reliably able to be performed.

(46) In this case, a marginal range assignment results in the respective food item 1 being checked once again, preferably manually, and accordingly classified manually.

(47) In this connection, FIG. 4 shows the function of the recognition rate over the core range assignment rate on the basis of the chosen confidence threshold value.

(48) As can be seen from FIG. 4, the core range assignment rate is between 0% and 100%, depending on the chosen confidence threshold value, with a high core range assignment rate admittedly prompting a large number of automatic assignments but prompting only a low recognition rate, and hence prompting the assignment results to have only a small measure of confidence.

(49) At the same time, FIG. 4 reveals that when the core range assignment rate is falling there are fewer automatic assignments performed by the apparatus, but the recognition rate is comparatively high.

(50) Hence, the few automatic assignments performed have a large measure of confidence.

(51) In addition, FIG. 4 shows that the recognition rate will not fall to 0%, since even a core range assignment rate of 100%, albeit to some extent just at random, prompts the performance of the correct automatic assignment by the apparatus.

(52) The apparatus according to the invention particularly advantageously allows different modes of operation, which are illustrated below in the manner of a method.

(53) A first possible mode of operation is a training mode, which provides for initial startup and training of the apparatus.

(54) In the present case, this training mode relates to a preferred embodiment of the invention in which what is known as training with annotation takes place, which means that a feature value tuple range defined according to the invention is always assigned an appropriate label, in the present case a class, particularly by virtue of input by the user of the apparatus. The assignment of the class by the user can be performed either before or in parallel with the definition of the respective feature value tuple range or after definition thereof.

(55) The training mode is additionally based on the apparatus not yet having performed automatic assignment of a feature value tuple for a food item 1 to a feature value tuple range up to this time and there not yet being such a feature value tuple range up to this time.

(56) Following initial startup of the apparatus, the training mode involves the food item 1 to be classified being captured as optical data by the image capture unit 2.

(57) The optical data are then transmitted from the image capture unit 2 to the evaluation unit 3.

(58) Following reception of the optical data, the evaluation unit 3 extracts specific feature values for the food item 1 therefrom, such as color or edge values.

(59) Next, as already described above, the extracted feature values of the food item 1 are combined by the evaluation unit 3 to form a feature value tuple for the food item 1.

(60) On the basis of a representative number of feature value tuples of food items, the range stipulation algorithm is additionally used to form an associated feature value tuple range in the n-dimensional feature space with a definition of the interfaces of the feature value tuple range with respect to further feature value tuple ranges, and also the measure-of-confidence algorithm is used to form the basis for dividing the feature value tuple range into a core range and a marginal range.

(61) In the present exemplary embodiment, the feature value tuple range formed is provided with a specific class, in this case an animal carcass item class, by the user of the apparatus.

(62) In this connection, the text below considers only the specific class assigned to the feature value tuple range of the food item 1, the basis taken being able to be that all the feature value tuple ranges defined in the feature space are also each provided with a class.

(63) In the further course of the training mode, further food items are captured by the image capture unit 2 and a respective feature value tuple is formed from them, in the manner described above.

(64) Depending on the form of the newly formed feature value tuples, they are either assigned to the already existent feature value tuple range or assignment is output as not able to be performed.

(65) In the case of food items that differ from the type in question, the apparatus is also able to form new feature value tuple ranges from each of the feature value tuples that cannot be assigned. Owing to the complexity of the animal carcass items to be classified in the present case, however, no provision is made for such independent redefinition of feature value tuple ranges in the exemplary embodiment.

(66) In the present case, the training mode is performed in parallel with manual classification of the food item 1, the manual classification being carried out by a user of the apparatus.

(67) In this connection, although the food item 1 is classified by the apparatus or a classification is output as not able to be performed, the actual, real classification process is performed by the user of the apparatus manually, by assigning the feature value tuple of the food item 1 to the correct feature value tuple range.

(68) The training mode affords the particular advantage that, following the automatic assignment by the apparatus, the assignment result can be compared with that of the manual assignment and in this way the correctness of the automatic assignment result can be checked.

(69) This means that it is possible for an assignment that is erroneous or that is output as not able to be performed to prompt transmission of the correct assignment result by the user, in real time or at a later time, from the manual assignment to the apparatus, so that said apparatus can perform adaptation of the feature value tuple range on the basis of the corrected result and can therefore be trained.
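The training feedback described here can be sketched as a running-mean update of the affected feature value tuple range, again under the simplifying assumption that a range is summarized by a class centroid; the update rule and all names are illustrative, not the patent's disclosed method.

```python
import numpy as np

def train_with_correction(centroids, counts, feature_tuple, correct_class):
    """Adapt the feature value tuple ranges with a user-supplied
    correct class: pull that class's centroid toward the new tuple
    (running mean), or open a new range for an unseen class.

    `counts` tracks how many tuples each class has contributed."""
    x = np.asarray(feature_tuple, dtype=float)
    if correct_class not in centroids:
        centroids[correct_class] = x.copy()       # new range from one tuple
        counts[correct_class] = 1
    else:
        counts[correct_class] += 1                # incremental mean update
        centroids[correct_class] += (x - centroids[correct_class]) / counts[correct_class]
    return centroids
```

Applying such updates either immediately after each manual assignment or in a batch at the end of an assignment cycle corresponds to the two training variants described in the text.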

(70) During the training mode, the manual assignment with subsequent transmission of the data back to the apparatus can be performed either at the same time as the automatic assignment or at the end of a longer-lasting assignment cycle as training at a later time.

(71) The result of the training mode is that the apparatus contains a sufficiently large data record of assignments in order to be able to ensure user-specific trustworthiness of the assignment results.

(72) A second possible mode of operation is formed in the present case by a test mode, in which the apparatus, in a user-monitored mode, admittedly performs automatic assignment of the feature value tuple of the food item 1, but the automatic assignments mean that there is not yet any actuation of devices that process the food item 1 further. In the test mode, there is, in the present case, the opportunity for the user to be able to check the automatic assignment result in real time and if need be to take corrective action.

(73) Only after the assignment result has been confirmed by the user are the assignment results used for further applications.

(74) By way of example, the confirmation by the user can be provided by virtue of the apparatus using the data output unit 5 to display the current assignment result and, if there is no reaction from the user, the assignment result being recorded as correct.

(75) If, by contrast, there is an incorrect assignment result or the assignment has been output by the apparatus as not or not reliably able to be performed, then the user has the opportunity, within a stipulated period of time, to provide or correct the assignment result by means of input on the data input unit 4.

(76) The third mode of operation is formed by the autonomous mode of operation, in which the apparatus independently performs automatic assignment of the feature value tuple of the food item 1 to the corresponding feature value tuple range in real operation and hence autonomously performs classification of the food item 1, or outputs a classification as not or not reliably able to be performed, and actuates downstream devices for further processing of the food item 1.

(77) In the case of classification of the food item 1 being performed automatically, the apparatus can in this case control a transport system 10, for example, such that the food item 1 is conveyed to an appropriate cutting or packaging device.

(78) When an assignment cannot be performed or cannot be performed correctly, it is alternatively possible for the transport system 10 to be actuated by the apparatus such that the food item 1 is supplied to a manual identification point for the purpose of further checking. The autonomous mode of operation is the preferred mode in regular operation.

(79) Furthermore, the apparatus can be operated in a checking mode, also called a review mode, as a further operating state. In the present case, the review mode is a particularly advantageous supplementary function of the apparatus and is performed following an assignment cycle that has already been performed, either subsequent to the training mode or subsequent to the autonomous mode of operation, for example at the end of a shift, the review mode involving the user of the apparatus performing a check on the assignment results provided.

(80) In the review mode, the user is presented both with the assignment results and, in parallel, with the graphic representations of the assigned food items, for example as an image gallery or in an assignment table.

(81) In one embodiment, the assignment results are output in this case in a manner organized according to ascending measure of confidence.

(82) The organized output of the assignment results then affords the particular advantage that the assignment results having a small measure of confidence and hence the less trustworthy assignment results are output first, these having a higher probability of misassignments, which can also be called misclassifications, or in other words a lower recognition rate.
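The organized output for the review mode amounts to a simple sort by ascending measure of confidence; the triple layout of an assignment result below is an illustrative assumption.

```python
def review_order(assignment_results):
    """Order assignment results for the review mode: ascending measure
    of confidence, so the least trustworthy assignments (those most
    likely to be misclassifications) are presented to the user first.

    Each result is assumed to be a
    (food_item_id, assigned_class, measure_of_confidence) triple."""
    return sorted(assignment_results, key=lambda r: r[2])
```

The user can then stop checking once the confidence values reach a level at which the remaining assignment results can be assumed to be correct.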

(83) After a check on the quantity of assignment results with a small measure of confidence, there then follow only assignment results having a large measure of confidence, which means that it can be assumed that these assignment results are potentially correct.

(84) Hence, not all the assignment results need to be checked again within the review mode, which means that the involvement of personnel and time for checking can be kept particularly low.

(85) As a result of the review mode, the number of misclassifications and classifications that are output as not able to be performed can additionally be output by the data output unit 5, so that the user of the apparatus can decide whether the apparatus can ensure a sufficiently high probability of correct assignments or whether further training data are also necessary. The user is therefore simultaneously provided with the information concerning what recognition rate has what core range assignment rate and how the setting can be optimized in accordance with user preferences.

(86) In this connection, the apparatus according to the invention provides, in a preferred variant, the option for the captured optical data of the food item 1 and the associated assignment result to be archivable.

(87) This advantageously allows, for example in the case of later questions of liability, the assignment results and the probability of correct automatic assignments to be reproducible on the basis of the respectively chosen confidence threshold value.

(88) A particularly advantageous development of the invention as shown in FIG. 2 provides for the image capture unit 2 to be formed both from the photographic camera 2.1 and from a depth camera 2.2 with a depth camera capture range 8.

(89) According to the invention, the depth camera 2.2 is arranged such that the photographic camera capture range 7 and the depth camera capture range 8 overlap in a common capture range 9, the section of the food item 1 that is relevant to the classification being captured in the common capture range 9.

(90) In this case, the additionally provided depth camera 2.2 affords the particular technological advantage that besides the color values it is also possible for depth values from the food item 1, as occur for an abdominal cavity of a half animal carcass, for example, to be captured.

(91) Therefore, additional feature values of the food item 1 can be captured and, on the basis thereof, a more comprehensive feature value tuple for the food item 1 can be provided.

(92) As shown in FIGS. 1 and 2, a further preferred embodiment of the invention provides for the image capture unit 2 to be able to be used to capture the food item 1 as optical data on a transport system 10 and for the respective assignment result to be able to be provided in real time.

(93) In a further embodiment, the image capture unit 2, on the one hand, and the data input unit 4 and also the data output unit 5 are arranged so as to be physically separate from one another. This allows the identification point to be monitored and controlled by a central control station, which may also be on different business premises. In a variation, the data input unit 4 and also the data output unit 5 are designed to be able to be changed over in terms of equipment, so that monitoring and control can be changed over, for example, between direct (local) on the transport system 10 or at a central control station (remote) or multiple locations simultaneously.

(94) In a related embodiment, the identification point is operated in a plurality of parallel lines. This means that a plurality of evaluation units 3 and data input units 4 and data output units 5 that are respectively associated with image capture units 2 are present, so that a plurality of food items can be captured in parallel. In this case, the exemplary embodiment involves the plurality of evaluation units 3, data input units 4 and data output units 5 being combined in terms of equipment, so that just one operator can monitor a plurality of parallel captures and assignments from a central control station, which allows the use of personnel to be reduced further and effectiveness to be significantly increased. This makes use of the particular advantage of the invention that only the food items that are associated with the marginal range are transferred to the operator for a visual/human decision about the assignment. The setting of the confidence threshold value can be used to keep the number of assignment decisions accumulating across a plurality of lines, which need to be made by the operators, to a measure that is expedient in terms of work physiology and business management.

REFERENCE SYMBOLS USED

(95) 1 Food item 2 Image capture unit 2.1 Photographic camera 2.2 Depth camera 3 Evaluation unit 4 Data input unit 5 Data output unit 6 Connection 7 Photographic camera capture range 8 Depth camera capture range 9 Common capture range of photographic camera/depth camera 10 Transport system