Recognition method and recognition system for unambiguously recognizing an object

20220044080 · 2022-02-10

    Abstract

    The present invention relates to a computer-implemented recognition method (100) for unambiguously recognizing an object. The recognition method (100) comprises a first determining step (101) for determining, by means of a first optical sensor (201) at a first point in time, reference information by capturing a number of symbols applied to a reference object, a training step (103) for training a machine learner on the basis of the reference information and a provided ground truth which assigns respective reference information to a first class or a further class, a second determining step (105) for determining, by means of a second optical sensor (205) at a second point in time, sample information by capturing a number of symbols applied to a sample object, an assigning step (107) for assigning the sample information to the first class or the further class by the machine learner, and an outputting step (109) for outputting a validation signal in case the machine learner assigns the sample information to the first class.

    Furthermore, the present invention relates to a recognition system (200).

    Claims

    1-11. (canceled)

    12. A computer-implemented recognition method for unambiguously recognizing an object, comprising: determining, by means of a first optical sensor at a first point in time, reference information of at least one reference object by capturing a number of symbols applied to a respective reference object; training a machine learner on the basis of the reference information and a provided ground truth which assigns respective reference information to a first class or a further class; determining, by means of a second optical sensor at a second point in time, sample information by capturing a number of symbols applied to the object to be recognized; assigning the sample information to the first class or to the further class by the machine learner; and outputting a validation signal in case the machine learner assigns the sample information to the first class.

    13. The recognition method according to claim 12, wherein the first sensor and the second sensor are identical.

    14. The recognition method according to claim 12, wherein the second sensor is formed as an integral part of a mobile computing unit.

    15. The recognition method according to claim 12, wherein the ground truth is dynamically updated and the machine learner is dynamically retrained.

    16. The recognition method according to claim 12, wherein the recognition method comprises a preprocessing step for preprocessing the reference information and/or the sample information before it is supplied to the machine learner, and wherein the preprocessing step comprises at least one process of the following list of processes: distinguishing between symbol information and background information by means of a symbol recognition algorithm; recognizing individual symbols using a symbol recognition algorithm; converting image information into spectral information by means of a Fourier transformation.

    17. The recognition method according to claim 12, wherein the reference information is determined by an entity classified as trustworthy in a manufacturing process of the respective reference object and the sample information is determined by an entity not classified as trustworthy outside a manufacturing process of the object to be recognized.

    18. The recognition method according to claim 12, wherein the recognition method comprises a verification step in which respective first sample information provided by a specific user is evaluated by means of a further machine learner, wherein the further machine learner is trained on the basis of second sample information already provided by the user in the past and assigns the first sample information to the first class or the further class.

    19. The recognition method according to claim 12, wherein the ground truth assigns reference information showing a deviation of respective, in particular applied or printed reference symbols from an ideal symbol, which is greater than a validation threshold, to the class “not valid”, and assigns reference information showing a deviation of respective, in particular printed reference symbols from an ideal symbol, which is less than or equal to the validation threshold, to the first class.

    20. The recognition method according to claim 12, wherein the machine learner is trained on the basis of reference information comprising specific features corresponding to deviations of at least one of the applied symbols from at least one ideal symbol.

    21. The recognition method according to claim 20, wherein the specific features are selected from: deviations in symbols applied to a respective reference object from respective ideal symbols, in particular differences in dots, omissions, smudges, chipping, embossing, abrasion, in particular in the case of printed, stamped, punched, lasered, engraved or embossed characters, graphic symbols and/or codes, and/or positional deviations of symbols applied to a respective reference object from respective ideal symbols, in particular labels, shrink films and/or imprints in relation to a reference point on the reference object; and/or positional deviations of symbols applied to a respective reference object from markings or from object components, in particular from corners, edges, closures and/or seams of the reference object; and/or deviations in reflections, light and/or shadow cast, in particular a shadow cast of a deformable packaging as reference object; and/or deviations of corners, edges, seams, welds, embossing, curvatures, folds, soiling, repulsions and/or deformations of a respective reference object; and/or color deviations of a respective reference object; and/or deviations of contents of a transparent or translucent object as reference object; and/or deviations in a printing substrate structure, label, film and/or product wrapping of a respective reference object.

    22. A recognition system for unambiguously recognizing an object, comprising: a first optical sensor configured to determine, at a first point in time, reference information of at least one reference object by capturing a number of symbols applied to a respective reference object; a training module configured to train a machine learner on the basis of the reference information and a provided ground truth which assigns respective reference information to a first class or a further class; a second optical sensor configured to determine, at a second point in time, sample information by capturing an object to be recognized, wherein the object to be recognized has a number of sample symbols, in particular printed sample symbols; a classification module configured to assign the sample information to the first class or the further class by means of the machine learner; and an output unit for outputting a validation signal in case the machine learner assigns the sample information to the first class.
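    For illustration only (not part of the claims), the preprocessing named in claim 16, converting image information into spectral information by means of a Fourier transformation, could be sketched as follows. The function name and the synthetic test pattern are assumptions of this sketch, not prescribed by the claims:

```python
import numpy as np

def to_spectral(image: np.ndarray) -> np.ndarray:
    """Convert grayscale image information into spectral information
    via a 2-D Fourier transform (cf. claim 16). Illustrative sketch only."""
    spectrum = np.fft.fft2(image)          # 2-D discrete Fourier transform
    spectrum = np.fft.fftshift(spectrum)   # move zero frequency to the center
    return np.log1p(np.abs(spectrum))      # log-magnitude spectrum as features

# Example: an 8x8 test pattern with horizontal stripes, standing in for
# captured symbol information.
img = np.zeros((8, 8))
img[::2, :] = 1.0
features = to_spectral(img)
print(features.shape)  # (8, 8)
```

    Feeding such a log-magnitude spectrum to the machine learner instead of raw pixels can make the classification less sensitive to small translations of the captured symbols.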

    Description

    [0056] FIG. 1 shows a possible embodiment of the recognition method according to the invention; and

    [0057] FIG. 2 shows a possible embodiment of the presented recognition system.

    [0058] FIG. 1 illustrates a computer-implemented recognition method 100 for unambiguously recognizing an object.

    [0059] The recognition method 100 comprises a first determining step 101 for determining, by means of a first optical sensor at a first point in time, reference information by capturing a number of symbols applied to a reference object.

    [0060] The first determining step 101 can be performed, for example, by a manufacturer of a reference object during a manufacturing process, for example, using a stationary image capture system. In particular, the reference object can be captured several times during the first determining step, for example, from different recording angles and/or under different lighting conditions.

    [0061] Alternatively, the first determining step can be performed by a user of a reference object at the first point in time, for example, using a smartphone or an image capture sensor of a smartphone. Instructions can be issued to the user to guide the user in creating different images, for example, from different recording angles and/or under different lighting conditions. Optionally, a user may be directed to determine a video, i.e. a number of contiguous images, of the reference object as reference information. To guide the user, instructions can be issued to the user on a smartphone, for example, acoustically and/or visually.

    [0062] Furthermore, the recognition method 100 comprises a training step 103 for training a machine learner with reference information and a provided ground truth which assigns respective reference information to a first class, for example, a “valid” class, or to a further class, for example, a “not valid” class.

    [0063] In particular, the training step 103 may be carried out on a computing unit, such as a server, that is communicatively connected to a sensor for performing the first determining step 101. Accordingly, the computing unit receives sensor data, in particular image data, from the first optical sensor and provides it to the machine learner.

    [0064] Using a ground truth, the machine learner is trained by means of the sensor data determined in the first determining step 101 as acquired reference information, so that there is a trained machine learner after the training step 103. In particular, reference information determined in the first determining step 101 can be split into two parts, which are assigned by means of the ground truth to the first class or to the further class, so that the machine learner can be trained using only the reference information captured by the sensor. Alternatively or additionally, the machine learner can be trained on the basis of specified reference information, for example, reference information artificially generated or generated by manual manipulations of reference objects.
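    Assuming, purely for illustration, that the machine learner is a simple nearest-centroid classifier over image-derived feature vectors, the training step 103 and the subsequent assignment could be sketched as follows. The synthetic data, feature dimension and classifier choice are assumptions of this sketch, not prescribed by the method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "reference information": one feature vector per reference object.
# Ground truth: label 0 = first class ("valid"), 1 = further class ("not valid").
X = np.vstack([rng.normal(0.0, 0.3, (50, 16)),    # valid reference objects
               rng.normal(1.0, 0.3, (50, 16))])   # manipulated reference objects
y = np.array([0] * 50 + [1] * 50)

# "Training step 103": compute one centroid per class from the ground truth.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def assign(sample: np.ndarray) -> int:
    """Assigning step 107: return 0 (first class) or 1 (further class)."""
    return int(np.argmin(np.linalg.norm(centroids - sample, axis=1)))

print(assign(np.zeros(16)))  # → 0, i.e. first class ("valid")
```

    In practice the machine learner would typically be a neural network trained on the captured image data, but the split of reference information into two ground-truth-labelled parts follows the same pattern.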

    [0065] The ground truth may be provided by a user or manufacturer and may comprise examples or define a classification of when reference information is to be assigned to the first class or the further class.

    [0066] Furthermore, the recognition method 100 comprises a second determining step 105 for determining, by means of a second optical sensor at a second point in time, sample information by capturing a number of symbols applied to an object to be recognized, i.e., a sample object.

    [0067] In the second determining step 105, the second optical sensor is used to determine sample information from the object to be recognized. For this purpose, the object to be recognized can be captured by a user using an image capture sensor of a smartphone, for example.

    [0068] Furthermore, the recognition method 100 comprises an assigning step 107 for assigning the sample information to the first class or to the further class by the machine learner.

    [0069] Once the machine learner is trained, it can carry out an assignment of sample information to the first class or the further class based on its trained logic.

    [0070] Furthermore, the recognition method 100 comprises an outputting step 109 for outputting a validation signal in case the machine learner assigns the sample information to the first class.

    [0071] To inform a user of a result of the assigning step 107 or to enable further systems, such as a third-party system, to automatically process the result of the assigning step 107, the result can be output by means of an output unit, such as a display or a communication interface for transmitting result data to a third-party system.
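    The outputting step 109 can be sketched as a small function that emits a validation signal when the machine learner has assigned the sample information to the first class; the signal names here are illustrative assumptions, not part of the method:

```python
def output_step(assigned_class: int) -> str:
    """Outputting step 109: emit a validation signal for the first class
    (label 0), otherwise an invalidation signal. Names are illustrative."""
    return "VALID" if assigned_class == 0 else "NOT_VALID"

# The result can be shown to a user on a display or transmitted via a
# communication interface to a third-party system, as described in [0071].
print(output_step(0))  # prints "VALID"
```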

    [0072] FIG. 2 illustrates a recognition system 200.

    [0073] The recognition system 200 comprises a first optical sensor 201 configured to determine, at a first point in time, reference information by capturing a number of symbols applied to a reference object, a training module 203 configured to train a machine learner on the basis of the reference information and a provided ground truth which assigns the respective reference information to a first class or a further class, a second optical sensor 205 configured to determine sample information by capturing an object to be recognized, i.e. a sample object, at a second point in time, wherein the sample object has a number of symbols, in particular printed symbols, a classification module 207 configured to assign the sample information to the first class or the further class by means of the machine learner, and an output unit 209 for outputting a validation signal in case the machine learner assigns the sample information to the first class. In a further embodiment, an invalidation signal is output in case the machine learner assigns the sample information to the further class.

    [0074] The classification module 207 can be configured, for example, as a processor or subprocessor of a computer system, in particular a smartphone.

    [0075] The first optical sensor 201 can be, for example, an image capture sensor of a smartphone or a camera in a production line.

    [0076] The second optical sensor 205 can be, for example, an image capture sensor of a smartphone.