OBJECT AUTHENTICATION USING DIGITAL BLUEPRINTS AND PHYSICAL FINGERPRINTS

20220398348 · 2022-12-15

    Abstract

    A method of object authentication based on digital blueprints and physical fingerprints, comprising the steps of acquiring a set of training blueprints and fingerprints, training, object enrollment and object authentication. The method uses a mapper realized as an encoder-decoder pair and a set of multi-metric scores originating from the decomposition of mutual information, applied to the outputs of both the encoder and the decoder and producing a feature vector for a one-class classifier. The method is trained only on original physical objects and does not require any fakes for reliable authentication.

    Claims

    1. A method for authentication of physical objects, in particular of physical batch objects, adapted to be produced based on a set of blueprints {t.sub.i}.sub.i=1.sup.N comprising, at a training stage, the steps of: acquiring a set of training signals representing fingerprints {x.sub.i}.sub.i=1.sup.N of said physical objects; providing a mapper adapted to perform an operation on said fingerprints {x.sub.i}.sub.i=1.sup.N and a classifier adapted to be applied to at least one similarity score; training said mapper and/or said classifier on said set of training signals {x.sub.i}.sub.i=1.sup.N to obtain a learned mapper and/or a learned classifier; enrolling objects to be protected; and at an authentication stage, the steps of: acquiring a probe signal y from a physical object to be authenticated; applying the mapper and classifier, at least one of which is learned, to the probe signal y; producing a decision about authenticity of the physical object represented by the probe signal y based on output of the classifier, wherein said mapper is adapted to produce an estimate {tilde over (t)}.sub.i of blueprint t.sub.i based on fingerprint x.sub.i and said classifier is a one-class classifier, and the method further comprises, at said training stage, the steps of: acquiring said set of blueprints {t.sub.i}.sub.i=1.sup.N comprising at least one blueprint t.sub.i, providing a set of similarity scores for said blueprints; performing a joint training of said mapper and one-class classifier on said set of training signals {x.sub.i}.sub.i=1.sup.N and said set of blueprints {t.sub.i}.sub.i=1.sup.N to obtain a jointly learned mapper and/or a jointly learned one-class classifier; and at said authentication stage, the method further comprises the steps of: applying jointly the mapper and one-class classifier, at least one of which is learned, to the probe signal y; and said output of the one-class classifier allowing a decision about authenticity of the object represented by the probe signal y, being produced based on fingerprints x.sub.i and/or blueprints t.sub.i.

    2. The method according to claim 1, wherein: said fingerprints {x.sub.i}.sub.i=1.sup.N and blueprints {t.sub.i}.sub.i=1.sup.N represent paired data, said mapper is realized by a hand-crafted mapper or by a learnable mapper, and said output of the one-class classifier allowing a decision about authenticity of the object represented by the probe signal y, being produced based on blueprints t.sub.i.

    3. The method according to claim 1, further comprising, at said training stage, the step of providing a set of similarity scores for said fingerprints, and wherein: said fingerprints {x.sub.i}.sub.i=1.sup.N and blueprints {t.sub.i}.sub.i=1.sup.N represent paired data, said mapper is realized by a hand-crafted mapper or by a learnable mapper, and said output of the one-class classifier allowing a decision about authenticity of the object represented by the probe signal y, being produced based on fingerprints x.sub.i and blueprints t.sub.i.

    4. The method according to claim 1, wherein said mapper is realized by an encoder adapted to produce an estimate {tilde over (t)}.sub.i of blueprint t.sub.i based on fingerprint x.sub.i and the method further comprises, at said training stage, the steps of: providing, next to said fingerprints {x.sub.i}.sub.i=1.sup.N and blueprints {t.sub.i}.sub.i=1.sup.N representing a set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N, blueprints {t.sub.j}.sub.j=1.sup.J and fingerprints {x.sub.j}.sub.j=1.sup.J representing unpaired data; providing a decoder adapted to produce an estimate {circumflex over (x)}.sub.i of fingerprint x.sub.i based on blueprint t.sub.i, a set of similarity scores for said fingerprints, fingerprint discriminators and blueprint discriminators; performing a joint training of said encoder, decoder, fingerprint discriminators and blueprint discriminators and one-class classifier on said set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N and/or said unpaired blueprints {t.sub.j}.sub.j=1.sup.J and fingerprints {x.sub.j}.sub.j=1.sup.J to obtain a jointly learned encoder, decoder, fingerprint and blueprint discriminators and one-class classifier, with said encoder and decoder being trained in a direct way x.fwdarw.{tilde over (t)}.fwdarw.{circumflex over (x)} from fingerprints x.sub.i of said set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N; and at said authentication stage, the steps of: applying jointly the learned encoder, decoder, fingerprint discriminators and blueprint discriminators and one-class classifier to the probe signal y; said output of the one-class classifier, allowing a decision about authenticity of the object represented by the probe signal y, being produced based on blueprints t.sub.i.

    5. The method of claim 1, wherein said mapper is realized by an encoder adapted to produce an estimate {circumflex over (t)}.sub.i of blueprint t.sub.i based on fingerprint {tilde over (x)}.sub.i and the method further comprises, at said training stage, the steps of providing, next to said fingerprints {x.sub.i}.sub.i=1.sup.N and blueprints {t.sub.i}.sub.i=1.sup.N representing a set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N, blueprints {t.sub.j}.sub.j=1.sup.J and fingerprints {x.sub.j}.sub.j=1.sup.J representing unpaired data, providing a decoder adapted to produce an estimate {tilde over (x)}.sub.i of fingerprint x.sub.i based on blueprint t.sub.i, a set of similarity scores for said fingerprints, fingerprint discriminators and blueprint discriminators, performing a joint training of said encoder, decoder, fingerprint discriminators and blueprint discriminators and one-class classifier on said set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N and/or said unpaired blueprints {t.sub.j}.sub.j=1.sup.J and fingerprints {x.sub.j}.sub.j=1.sup.J to obtain a jointly learned encoder, decoder, fingerprint and blueprint discriminators and one-class classifier, with said encoder and decoder being trained in a reverse way t.fwdarw.{tilde over (x)}.fwdarw.{circumflex over (t)} from blueprints t.sub.i of said set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N, and at said authentication stage, the steps of: applying jointly the learned decoder, fingerprint discriminators and one-class classifier to the probe signal y, said output of the one-class classifier, allowing a decision about authenticity of the object represented by the probe signal y, being produced based on blueprints t.sub.i.

    6. The method according to claim 1, wherein said mapper is realized by an encoder adapted to produce an estimate {tilde over (t)}.sub.i of blueprint t.sub.i based on fingerprint x.sub.i and the method further comprises, at said training stage, the steps of: providing, next to said fingerprints {x.sub.i}.sub.i=1.sup.N and blueprints {t.sub.i}.sub.i=1.sup.N representing a set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N, blueprints {t.sub.j}.sub.j=1.sup.J and fingerprints {x.sub.j}.sub.j=1.sup.J representing unpaired data, providing a decoder adapted to produce an estimate {tilde over (x)}.sub.i of fingerprint x.sub.i based on blueprint t.sub.i, a set of similarity scores for said fingerprints, fingerprint discriminators and blueprint discriminators, performing a joint training of said encoder, decoder, fingerprint discriminators and blueprint discriminators and one-class classifier on said set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N and/or said unpaired blueprints {t.sub.j}.sub.j=1.sup.J and fingerprints {x.sub.j}.sub.j=1.sup.J to obtain a jointly learned encoder, decoder, fingerprint and blueprint discriminators and one-class classifier, with said encoder and decoder being trained in two ways combining a direct way x.fwdarw.{tilde over (t)}.fwdarw.{circumflex over (x)} from fingerprints x.sub.i of said set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N and a reverse way t.fwdarw.{tilde over (x)}.fwdarw.{circumflex over (t)} from blueprints t.sub.i of said set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N, and at said authentication stage, the steps of: applying jointly the learned encoder, decoder, fingerprint discriminators, and blueprint discriminators and one-class classifier to the probe signal y, said output of the one-class classifier, allowing a decision about authenticity of the object represented by the probe signal y, being produced based on fingerprints x.sub.i and/or blueprints t.sub.i.

    7. The method according to claim 1, wherein said authentication is performed only from the blueprint t.sub.i or only from the fingerprint x.sub.i or jointly from the blueprint-fingerprint pair (t.sub.i, x.sub.i) representing an object with the index i.

    8. The method according to claim 1, wherein said sets of similarity scores for said fingerprints and/or for said blueprints are each sets of multi-metric similarity scores, each comprising at least two metrics chosen from the group comprising the Euclidean l.sub.2-norm, the l.sub.1-norm or generally the l.sub.p-norm, Pearson correlation, Hamming distance, moment matching and embedded space distances.

    9. The method according to claim 1, wherein the output of said multi-metric similarity scores is concatenated into a feature vector and serves as input to said one-class classifier, which is implemented as a kernel one-class SVM, a deep classifier or a hand-crafted decision rule bounding said set of multi-metric similarity scores away from those of hypothetical fakes.
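    For illustration only (not part of the claims), the multi-metric feature vector of claim 8 and the hand-crafted decision rule of claim 9 can be sketched in plain Python. The specific metric set, the binarization threshold, the bounds and the toy data below are assumptions of the sketch, not values specified by the invention:

```python
import math

def multi_metric_scores(t_ref, t_est, thr=0.5):
    """Claim 8-style multi-metric vector: l2-norm, l1-norm, Pearson
    correlation and Hamming distance between a reference blueprint
    t_ref and its estimate t_est (both flat sequences of numbers)."""
    n = len(t_ref)
    l2 = math.sqrt(sum((a - b) ** 2 for a, b in zip(t_ref, t_est)))
    l1 = sum(abs(a - b) for a, b in zip(t_ref, t_est))
    mu_r, mu_e = sum(t_ref) / n, sum(t_est) / n
    cov = sum((a - mu_r) * (b - mu_e) for a, b in zip(t_ref, t_est))
    sd_r = math.sqrt(sum((a - mu_r) ** 2 for a in t_ref))
    sd_e = math.sqrt(sum((b - mu_e) ** 2 for b in t_est))
    pearson = cov / (sd_r * sd_e) if sd_r and sd_e else 0.0
    hamming = sum((a > thr) != (b > thr) for a, b in zip(t_ref, t_est)) / n
    return [l2, l1, pearson, hamming]

def hand_crafted_one_class(scores, bounds):
    """Claim 9's hand-crafted alternative to a kernel OC-SVM: accept
    only if every metric stays inside the bounds observed on originals."""
    return all(lo <= s <= hi for s, (lo, hi) in zip(scores, bounds))

# Toy binary blueprint, a faithful estimate and an uninformative one.
t = [1, 0, 1, 1, 0, 0, 1, 0]
good = [0.9, 0.1, 0.8, 1.0, 0.0, 0.2, 0.9, 0.1]
fake = [0.5] * 8
bounds = [(0.0, 1.0), (0.0, 2.0), (0.8, 1.0), (0.0, 0.1)]  # from originals
```

    In a real system the bounds (or the OC-SVM decision surface) would be learned from the multi-metric vectors of enrolled originals only, in line with the one-class training described in the abstract.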

    10. The method according to claim 4, wherein said discriminators are realized as criteria implementing density ratio estimation, MMD, Wasserstein or generalized divergences, or a U-net with a latent and output space used for global and local discrimination, the discriminators being applied to the entire data or only to parts thereof, with further aggregation of the scores of the local discriminators.

    11. The method according to claim 4, wherein said encoder and decoder are represented as deep neural networks and implemented based on a U-net architecture, several downsampling convolutional layers followed by several ResNet layers and several upsampling convolutional layers, or invertible networks such as normalizing flows or similar ones, and also by injecting randomization.

    12. The method according to claim 4, wherein said encoder, decoder and discriminators are trained from target samples x.sub.1, . . . , x.sub.K representing examples of printing and acquisition or fakes.

    13. The method according to claim 4, wherein said encoder, decoder and discriminators are trained from parameters defining printer settings encoded by the encoder and phone settings encoded by the encoder.

    14. The method according to claim 4, wherein said encoder and decoder are trained on paired blueprint-fingerprint data {t.sub.i,x.sub.i}.sub.i=1.sup.N with similarity scores for intra-modal encodings of fingerprint-to-blueprint and blueprint-to-fingerprint, and on unpaired data of blueprints {t.sub.i}.sub.i=1.sup.N and fingerprints with discriminators estimating the proximity of synthetic data to real data of corresponding modalities.

    15. The method according to claim 4, wherein said learned encoder and decoder are used to produce synthetic samples {{tilde over (x)}.sub.i}.sub.i=1.sup.N in a controlled proximity to the manifold of original fingerprints {x.sub.i}.sub.i=1.sup.N from said set of blueprints {t.sub.i}.sub.i=1.sup.N, the synthetic samples {{tilde over (x)}.sub.i}.sub.i=1.sup.N being used to train the classifier.

    16. The method according to claim 15, wherein said synthetic samples {{tilde over (x)}.sub.i}.sub.i=1.sup.N are within boundaries of the manifold of original fingerprints {x.sub.i}.sub.i=1.sup.N and said one-class classifier is trained on an augmented training set comprising both original fingerprints {x.sub.i}.sub.i=1.sup.N and synthetic samples {{tilde over (x)}.sub.i}.sub.i=1.sup.N, or the synthetic samples {{tilde over (x)}.sub.i}.sub.i=1.sup.N are in close proximity to the manifold of original fingerprints {x.sub.i}.sub.i=1.sup.N and the synthetic samples {{tilde over (x)}.sub.i}.sub.i=1.sup.N are considered as the worst case fakes to original fingerprints {x.sub.i}.sub.i=1.sup.N and the classifier is trained in a supervised way.

    17. The method according to claim 15, wherein said synthetic samples {{tilde over (x)}.sub.i}.sub.i=1.sup.N generated in close proximity to the manifold of original fingerprints {x.sub.i}.sub.i=1.sup.N are considered as synthetic fakes representing non-authentic samples and the encoder and decoder are trained to maximize the mutual information for the encoder path and encoder-decoder path encoding the original blueprints and fingerprints while performing the corresponding minimization of mutual information for the same encoder-decoder while operating on synthetic fakes.

    18. The method according to claim 1, wherein said blueprints {t.sub.i}.sub.i=1.sup.N are secured and kept secret by an adequate securing process at the stage of generating, providing or acquiring the blueprints {t.sub.i}.sub.i=1.sup.N, the securing process being chosen from the group comprising modulation by use of a secret key k, modulation by use of a secret mapping, and modulation by use of a space of secret carriers of a transform domain, such as to produce modulated blueprints {t.sub.i.sup.s}.sub.i=1.sup.N.

    19. The method according to claim 1, wherein said blueprints {t.sub.i}.sub.i=1.sup.N are provided by a party manufacturing and/or commercializing the objects to be authenticated and/or are acquired by a party supposed to provide for the authentication, in particular by examining the objects to be authenticated in a non-invasive manner.

    20. A use of a method according to claim 1 for an application chosen from the group comprising protection of documents, certificates, contracts, identity documents, packaging, secure labels, stickers, banknotes, checks, luxury goods such as watches, precious stones and metals, diamonds, electronics, chips, holograms, medicaments, cosmetics, food, spare and medical parts and components and providing the related object tracking and tracing, circulation and delivery monitoring and connectivity to the corresponding database records including blockchain.

    21. A use of a method according to claim 1 for an application chosen from the group comprising generation of synthetic samples of printed objects acquired under various imaging conditions and simulation of various distortions in imaging systems that include the noise in RAW images under various settings of imaging device, geometric aberrations and defocusing.

    22. A use of a method according to claim 1 for authentication of a person from a probe signal y with respect to a reference blueprint t.sub.i reproduced on an identification document of said person or stored in some electronic form in private or public domain and a fingerprint x.sub.i representing reference biometric data of said person.

    23. A use of a method according to claim 1 for an application chosen from the group comprising automatic quality estimation of newly produced objects and anomaly detection during manufacturing.

    24. A device adapted for the implementation of the method according to claim 1, wherein the device is chosen from the group comprising a mobile phone, a smart phone equipped with a camera, a digital photo apparatus, a barcode reader equipped with a camera, a scanning device, a portable microscope connected to any portable device equipped with communication capability.

    25. A tangible computer-readable medium storing instructions that, when executed by a computer, cause it to implement the method according to claim 1.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0081] The attached figures exemplarily and schematically illustrate the principles as well as several embodiments of the present invention.

    [0082] FIG. 1 presents a generic architecture of an object authentication method comprising use of a digital blueprint t.sub.i used to produce physical batch objects, an imaging system allowing acquisition of an image from a given physical object, a counterfeiting process producing a counterfeited object and imaging of this counterfeited object.

    [0083] FIG. 2 illustrates the authentication process based on two possible approaches, when either the fingerprint x.sub.i (FIG. 2A) or the blueprint t.sub.i (FIG. 2B) of the original object is available at the test stage. It is also assumed that the fake samples are available for training in both cases.

    [0084] FIG. 3 schematically illustrates a generalized authentication architecture according to the present invention of a physical object based on a blueprint and an acquired image, referred to in the following also as a probe. The system is trained without fake samples.

    [0085] FIG. 4 schematically illustrates a generalized architecture for authentication according to the present invention of a physical object based on a probe, the object's blueprint and an enrolled fingerprint of said physical object.

    [0086] FIG. 5 exemplifies samples of the authentication test based on anti-copy patterns. The original blueprint is represented by a randomized or encoded pattern as shown in FIG. 5A, and the scanned original print representing the fingerprint x.sub.i and the produced fakes are exemplified in FIGS. 5B to 5F, respectively.

    [0087] FIG. 6 schematically illustrates, for a fixed non-trainable mapper, the inability of the classical metrics used in the authentication architectures presented in FIGS. 3 and 4 to separate originals from fakes.

    [0088] FIG. 7 further extends the experimental study of the setup presented in FIG. 6 to the case where the physical reference based on the fingerprint x.sub.i was used instead of the blueprint t.sub.i for a fixed mapper.

    [0089] FIG. 8 illustrates the decision boundary of the OC-classifier based on the RBF-SVM trained only on the original data for the setup considered in FIG. 6.

    [0090] FIG. 9 schematically illustrates a generalized principle of training in a direct path system for physical object authentication according to the present invention based on N pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N, and J unpaired data samples of blueprints {t.sub.j}.sub.j=1.sup.J and fingerprints {x.sub.j}.sub.j=1.sup.J.

    [0091] FIG. 10 schematically illustrates a generalized principle of a testing or authentication stage in the direct path system for authentication of a physical object i according to the present invention based on the probe y and blueprint t.sub.i of the object under the authentication.

    [0092] FIG. 11 schematically illustrates a reverse path authentication system according to the present invention comprising an encoder-decoder pair trained based on both paired and unpaired data.

    [0093] FIG. 12 schematically illustrates a generalized principle of a testing or authentication stage in the reverse system according to the present invention.

    [0094] FIG. 13 schematically illustrates a generalized principle of training in a two-way approach for physical object authentication according to the present invention based on N pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N, and J unpaired data samples of {t.sub.j}.sub.j=1.sup.J and {x.sub.j}.sub.j=1.sup.J.

    [0095] FIG. 14 schematically illustrates a testing or authentication stage based on the two-way trained system according to FIG. 13.

    [0096] FIG. 15 schematically illustrates a reverse system training based on unpaired blueprint t and several fingerprints x.sub.1, . . . , x.sub.K defining a targeted fingerprint {tilde over (x)} which should represent the input blueprint t on the output of the decoder (210).

    [0097] FIG. 16 schematically illustrates a reverse system training based on a blueprint t and a set of parameters determining the appearance of a targeted fingerprint {tilde over (x)} according to printer settings (264) and phone settings (265).

    [0098] FIG. 17 presents a 3D visualization of class separability for the reverse system according to the present invention in a multi-metric space.

    [0099] FIG. 18 shows a t-SNE computed from a concatenation of four multi-metrics.

    [0100] FIG. 19 presents an example of synthetic generation of fingerprints from given blueprints for various parameters controlling the proximity to the original data.

    [0101] FIG. 20 illustrates the generation of fakes of type 1 from digital blueprints.

    [0102] FIG. 21 shows the same results for fakes of type 3, which have a different appearance compared to fakes of type 1.

    [0103] FIGS. 22A to 22C illustrate various augmentation strategies for the synthetic sample generation by the system according to the present invention.

    [0104] FIG. 23 schematically illustrates the training step of a system according to the present invention on original and fake samples.

    DETAILED DESCRIPTION OF THE INVENTION

    [0105] In the following, the invention shall be described in detail with reference to the above mentioned figures.

    [0106] The present invention relates to a method of physical object authentication based on blueprints and/or fingerprints. As mentioned before, the following description will, in general, concentrate on the method according to the present invention when used for the above-mentioned authentication problem under a lack of training samples representing the class of fakes, and will only highlight and exemplify the differences and extensions to classification in the presence of synthetic fakes. However, the method according to the present invention may also comprise a training step using training samples representing a class of fakes, such that this conventional use case is, of course, also possible within the proposed method, though not described in detail.

    [0107] According to the present invention, the authentication method typically comprises a mapper, which is preferably realized by an encoder, optionally a decoder, multi-metrics and a classifier, which are trained in a way described in the following to achieve accurate and efficient authentication. Accordingly, the method according to the present invention comprises two main stages. A first stage includes joint training of the mapper, respectively of the encoder-decoder pair, according to the specified objectives to produce a vector of multi-metric features used for the classifier, which is trained on these features. The training stage also includes enrollment of new objects. A second stage includes an authentication step for the enrolled objects, represented by their blueprints t.sub.i and/or fingerprints x.sub.i. Said authentication step forms a testing stage comprising production of a decision about the authenticity of an object to be authenticated, represented by a probe signal y acquired from this object using some imaging device, with respect to a reference blueprint t.sub.i, a reference fingerprint x.sub.i and/or jointly a blueprint-fingerprint pair (t.sub.i,x.sub.i) representing an enrolled authentic physical object.
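    The testing stage described above can be sketched as follows. This is a minimal pure-Python illustration in which `mapper`, `scores` and `classifier` are placeholders for the learned components; the toy identity mapper, mean-absolute-error score and threshold rule are assumptions of the sketch, not the invention's actual implementation:

```python
def authenticate(probe_y, blueprint_t, mapper, scores, classifier, fingerprint_x=None):
    """Testing stage: map the probe y to a blueprint estimate, measure
    multi-metric similarity against the enrolled reference(s), and let
    the (one-class) classifier decide on authenticity."""
    t_est = mapper(probe_y)                    # encoder: y -> estimated blueprint
    features = scores(blueprint_t, t_est)      # similarity feature vector
    if fingerprint_x is not None:              # optional enrolled fingerprint reference
        features = features + scores(fingerprint_x, probe_y)
    return classifier(features)                # True = accepted as authentic

# Toy instantiation: identity mapper, mean absolute error, threshold rule.
mapper = lambda y: y
scores = lambda a, b: [sum(abs(u - v) for u, v in zip(a, b)) / len(a)]
classifier = lambda f: all(s < 0.15 for s in f)
```

    The same skeleton covers authentication from the blueprint only, from the fingerprint only, or from the blueprint-fingerprint pair, matching the three cases listed at the end of the paragraph.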

    [0108] The blueprint t.sub.i may, for example, be provided by the party manufacturing and/or commercializing the objects to be authenticated or, alternatively, may also be acquired by the party supposed to provide for or to perform the authentication, possibly by examining the objects to be authenticated, preferably in a non-invasive manner, e.g. by determining/measuring the parameters/features/dimensions of original objects to be authenticated without disassembling these objects. The fingerprint x.sub.i, as already mentioned in the introduction, typically represents individual marking means which are voluntarily introduced and/or are inadvertently present in the objects to be authenticated.

    [0109] FIG. 1 illustrates a general setup representing the operational scenario under consideration in the present invention. The generic architecture shown corresponds to an object authentication method comprising use of a digital blueprint or template t.sub.i which is used to produce physical (batch) objects o.sub.i,m based on some production system P.sub.m, m=1, . . . , M, an imaging system characterized by the parameters i.sub.a, a=1, . . . , A and allowing acquisition of an image x.sub.i,m,a from a given physical object o.sub.i,m, a counterfeiting process with the parameters c, c=1, . . . , C producing a counterfeited object o.sub.i,m.sup.(c) and imaging of this counterfeited object resulting in the image f.sub.i,m,a.sup.(c) of said fake/counterfeited object o.sub.i,m.sup.(c). The blueprint t.sub.i is linked with the fingerprints x.sub.i,m,a that are parameterized by the parameters m of the manufacturing/printing equipment and the parameters a of the acquisition equipment. The parameters m of the manufacturing/printing equipment at least partly reflect the above-mentioned individual marking means which are voluntarily introduced and/or are inadvertently present in the objects to be authenticated, whilst the parameters a of the acquisition equipment reflect modulation of these individual marking means, and thus of the link between the blueprint t.sub.i and the fingerprints x.sub.i,m,a, during acquisition of information from the objects to be authenticated. The fakes are represented by f.sub.i,m,a.sup.(c), where c denotes the parameters of the applied faking technology.

    [0110] In a general case, the authentication of a physical object can be performed based on the verification of correspondence between the features of original objects and those of fake ones. Assuming a special case where the original objects are represented by the fingerprints {x.sub.i}.sub.i=1.sup.N and the fakes by the fingerprints {f.sub.i}.sub.i=1.sup.N for a particular application of interest with fixed parameters, one can train a supervised classifier to address the authentication problem. It is important to note that: (a) the class of originals and the class of fake objects are represented by the corresponding sets and there is no direct correspondence between a pair x.sub.i and f.sub.i, i.e. the classifier does not use the information from which fingerprint x.sub.i the fake f.sub.i is produced, and (b) the classification is not based on a verification of proximity of the probe y to a particular claimed object with the index i represented by its fingerprint x.sub.i and blueprint t.sub.i. At the authentication stage, the trained classifier should decide whether the probe y is closer to the class of originals or to the class of fakes. This sort of authentication might be regarded as a generic forensic verification as considered in [63].
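    For the fully informed scenario described above, such a supervised classifier can be sketched as a toy nearest-centroid rule on fingerprint features. This is purely illustrative (the paragraph does not prescribe a particular classifier) and, consistent with note (a), it uses no pairing between x.sub.i and f.sub.i:

```python
def train_centroids(originals, fakes):
    """Supervised baseline for the fully informed case: one centroid per
    class, computed from unpaired sets of original and fake fingerprints."""
    dim = len(originals[0])
    c_x = [sum(v[d] for v in originals) / len(originals) for d in range(dim)]
    c_f = [sum(v[d] for v in fakes) / len(fakes) for d in range(dim)]
    return c_x, c_f

def is_authentic(probe_y, c_x, c_f):
    """Decide whether the probe y is closer to the class of originals
    than to the class of fakes (squared Euclidean distance)."""
    sq = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return sq(probe_y, c_x) < sq(probe_y, c_f)

# Toy 3-dimensional fingerprint feature vectors.
c_x, c_f = train_centroids([[1, 1, 0], [0.9, 1, 0.1]], [[0, 0, 1], [0.1, 0, 0.9]])
```

    Note that, exactly as point (b) states, the decision compares the probe against class statistics, not against the enrolled reference of a particular object i.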

    [0111] Furthermore, in case the defender that trains the classifier knows the correspondence between the triplets {x.sub.i,x′.sub.i,f.sub.i}.sub.i=1.sup.N, the authentication process can be organized as shown in FIG. 2A. FIG. 2A illustrates the authentication based on the fingerprint approach. The training stage of the authentication system is based on the available triplets {x.sub.i,x′.sub.i,f.sub.i}.sub.i=1.sup.N of the pair of original fingerprints x.sub.i, x′.sub.i acquired from the authentic object with the index i and corresponding fingerprints f.sub.i extracted from the fake objects imitating the object with the same index. Once the system is trained, the authentication can be applied to new fingerprints of enrolled objects that were not available at the training stage. The training is based on the minimization of a distance d(a.sub.x.sub.i,a′.sub.x.sub.i) between the embeddings a.sub.x.sub.i=g.sub.θ.sub.x(x.sub.i) and a′.sub.x.sub.i=g.sub.θ.sub.x(x′.sub.i) representing the mapping of fingerprint x.sub.i and its second view fingerprint x′.sub.i via the embedding operator g.sub.θ.sub.x(.Math.) (100). Simultaneously, the system maximizes the distance to the projected representation of the corresponding fake f.sub.i, i.e., d(a.sub.x.sub.i,a.sub.f.sub.i). The trained embedding is used for the training of the classifier. Alternatively, the minimization/maximization of distances can be extended to the maximization/minimization of the corresponding mutual information, as will be explained in more detail in the following description. This system resembles a triplet loss architecture [86] that is well known, for example, in biometrics applications and content retrieval systems. In addition, the training can incorporate not only pairwise distances between the pairs of fingerprints representing originals and fakes but also more general cases, where the distance between a particular fingerprint and several examples of fakes is used, or between a particular fingerprint and several fakes corresponding to other fingerprints close to x.sub.i. Examples of training strategies for such systems can be based on the multi-class N-pair loss [87] or the NT-Xent loss [88], also extended to the supervised case [89].
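    The triplet-style objective above (minimize d(a.sub.x.sub.i,a′.sub.x.sub.i) while maximizing d(a.sub.x.sub.i,a.sub.f.sub.i)) can be written in its standard margin form as the following sketch; the squared-Euclidean distance and the margin value are assumptions of the illustration, and the embeddings are plain lists standing in for the outputs of g.sub.θ.sub.x:

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin-based triplet objective: pull the two views of the same
    original fingerprint together and push the fake embedding away;
    the loss becomes zero once the fake is at least `margin` farther
    from the anchor (in squared distance) than the positive view."""
    d = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return max(0.0, d(anchor, positive) - d(anchor, negative) + margin)

# Anchor and positive coincide, fake is far: the constraint is satisfied.
satisfied = triplet_loss([0, 0], [0, 0], [2, 0])
# Fake as close as the positive view: the margin is violated.
violated = triplet_loss([0, 0], [1, 0], [1, 0])
```

    In practice this loss would be minimized over the parameters θ.sub.x of the embedding operator by stochastic gradient descent, with one triplet per enrolled object and batch.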

    [0112] Once the embedding operator g.sub.θ.sub.x(.Math.) is trained, which is denoted in FIG. 2A as g*.sub.θ.sub.x(.Math.), one can project the original fingerprints and fakes via g*.sub.θ.sub.x(.Math.) along with the reference x.sub.i and use the operator ψ(a.sub.x.sub.i,a.sub.y) (101) to concatenate the embeddings or to compute some distance between them. The output of said operator (101) is fed to the classifier (102), which can be trained in the supervised way as in the previous case. Alternatively, considering the operator ψ(a.sub.x.sub.i,a.sub.y) as a distance, one can apply a decision in (102) to this distance directly, without a need to train a classifier. Finally, one can also envision a scheme in which both the embedder and the classifier are trained jointly in an end-to-end way.

    [0113] FIG. 2B extends the previous setup to a case where the authentication is based on the blueprint t.sub.i instead of the fingerprint. In particular, FIG. 2B shows an architecture similar to that of FIG. 2A, wherein the blueprint t.sub.i is used for the authentication instead of the fingerprint x.sub.i. At the training stage, it is assumed that the triplets {x.sub.i,t.sub.i,f.sub.i}.sub.i=1.sup.N are available. The fingerprints are mapped via the embedding operator g.sub.θ.sub.x(.Math.) (100) and the blueprint t.sub.i is projected via the mapper g.sub.θ.sub.t(.Math.) (103). The training and testing are similar to the above described case of FIG. 2A. One particular case of the considered system is when the mapper g.sub.θ.sub.t(.Math.) is an identity operator. In this case, the operator g.sub.θ.sub.x(.Math.) is trained to produce the best possible estimate of blueprint t.sub.i from fingerprint x.sub.i according to the metric d(a.sub.t.sub.i,a.sub.x.sub.i), while maximizing the distance d(a.sub.t.sub.i,a.sub.f.sub.i) to the fake, with a.sub.t.sub.i=t.sub.i. Otherwise, the system is similar, but the template t.sub.i has its own embedding operator.

    [0114] Such an authentication process is intuitively simple, but at the same time it suffers from a number of practical restrictions and faces several issues: [0115] 1) The fakes are rarely available at the training stage, and training on the original fingerprints only, while interpreting them as opposites to fakes, might lead in some cases to a wrong embedding space and to vulnerability to fakes that are closer to the originals than the supposed original opponents; [0116] 2) even if some fakes of original objects were available at the training time, the actual fakes produced by the counterfeiter at test time might differ considerably from those used for the training; [0117] 3) the considered authentication is solely based on the availability of the fingerprint x.sub.i for each physical object, which might not be the case in real applications due to the high cost of fingerprint enrollment for large scale applications, infrastructure and database management or other technical constraints; [0118] 4) the scheme presented in FIG. 2B is based on the authentication from the blueprint and partially resolves the above issues, but in this case the available fingerprints are not taken into account at the authentication stage even if such fingerprints were available, which surely is not optimal; [0119] 5) in both cases, the presence of paired blueprint-fingerprint examples and fakes is required, although it is very impractical and costly to provide these.

    [0120] Therefore, both systems considered in FIG. 2 represent a possible solution to the authentication problem when the corresponding fakes are available at the training stage. Since this case is mostly of theoretical interest due to the lack of fakes, it can nevertheless be of interest for estimating the achievable system performance in a fully informed scenario. Thus its practical usage is limited.

    [0121] To resolve these challenges, the method of physical object authentication according to the present invention is based on the fact that the presence of paired fake examples {x.sub.i,f.sub.i}.sub.i=1.sup.N or even unpaired examples {x.sub.i}.sub.i=1.sup.N and {f.sub.i}.sub.i=1.sup.N is not required at the training stage. This makes the proposed method better suited to practical requirements and more robust to the potentially broad variability and classes of fakes and attacks. Furthermore, the authentication should be adaptable to the real cases where only blueprints {t.sub.i}.sub.i=1.sup.N are available for the authentication, which does not require complex enrollment of fingerprints from each physical object under protection, or where both paired blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N are available, if such an option exists in some medium and small scale applications, or where even unpaired sets of blueprints {t.sub.i}.sub.i=1.sup.N and some number of fingerprints {x.sub.j}.sub.j=1.sup.J are available at the training. Therefore, an authentication is considered that can incorporate all these situations as particular cases without a need to change the authentication architecture for each particular case. This makes the proposed architecture universal and scalable to various practical use cases.

    [0122] To this effect, in the first place, an authentication method according to the present invention based only on the blueprint t.sub.i will be described in the following; in the second place, it will then be extended to the above cases, along with the introduction of the corresponding architecture and by addressing each case in detail.

    [0123] FIG. 3 schematically illustrates a generalized authentication architecture according to the present invention of a physical object with the index i based on an acquired image y, referred to in the following also as a probe. The training is based on the pairs {t.sub.i,x.sub.i}.sub.i=1.sup.N and, in contrast to the system presented in FIG. 2, no fakes are used for the training. The authentication only requires the reference blueprint t.sub.i to authenticate the probe y. Two setups will be considered: one where the mapper (200) is a hand-crafted mapper, i.e., data independent and untrainable, and one where it is a learnable mapper, i.e., learned as a function of imposed constraints. Assuming the hand-crafted case, at the training stage, a hand-crafted mapper (200) produces an estimate {tilde over (t)}.sub.i of the blueprint from the fingerprint x.sub.i, and a multi-metric scoring module (230) computes various metrics such as l.sub.p distances, correlation, Hamming distance, moment matching or some embedded space distances, etc. between t.sub.i and {tilde over (t)}.sub.i. The multi-metric scores are combined into a feature vector that is used as an input to the OC-classifier (250), and the classifier is trained accordingly. The mapper (200) represents hand-crafted operations ranging from the identity mapping to binarization using various deblurring, denoising, thresholding strategies, morphological processing operations, etc. The fingerprints x.sub.i and probe y can be in a format of raw data converted to RGB, YUV or any color format, or just a concatenation of raw color channels such as, for example, RG1G2B or any color sampling matrices of CCD or CMOS sensors. The OC-classifier (250) can be any classifier considered in the following description.
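A minimal numpy sketch of such a multi-metric scoring module (230) is given below; the particular metric set (L1, L2, Pearson correlation, Hamming distance after binarization) is an illustrative assumption rather than an exhaustive choice.

```python
import numpy as np

def multi_metric_scores(t, t_est):
    """Toy multi-metric scoring module (230): compare a binary blueprint t
    with an estimated blueprint t_est and return a feature vector that can
    be fed to the OC-classifier (250)."""
    t = t.ravel().astype(float)
    t_est = t_est.ravel().astype(float)
    l1 = np.abs(t - t_est).mean()                    # normalized L1 distance
    l2 = np.sqrt(((t - t_est) ** 2).mean())          # normalized L2 distance
    pearson = np.corrcoef(t, t_est)[0, 1]            # Pearson correlation
    hamming = np.mean((t > 0.5) != (t_est > 0.5))    # Hamming after binarization
    return np.array([l1, l2, pearson, hamming])
```

A perfect estimate yields the feature vector [0, 0, 1, 0]; any mismatch moves the scores away from this point, which is what the one-class classifier exploits.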

    [0124] It should be pointed out that the mapper (200) bears some similarity to the mapper (100) in FIG. 2B when the mapper (103) is an identity operation. At the same time, the difference resides in the goal of training: the operator (100) of the method of FIG. 2B is trained in a contrastive way with respect to the fakes, while the operator (200) of the method of FIG. 3 is trained to ensure an estimate of the blueprint from the fingerprint according to the defined metrics (230).

    [0125] In the case of a learnable mapper (200), one can aim to minimize the multi-metric scores (230) at the training stage and then use the trained mapper to train the classifier (250). The testing stage is the same as described above. The role and importance of mapper (200) training will be further demonstrated in FIGS. 6 and 7, where the mapper (200) will be chosen in a hand-crafted way as the best estimator of the blueprints from the fingerprints.

    [0126] FIG. 4 schematically illustrates a generalized architecture for authentication according to the present invention of a physical object with the index i based on the probe y, the object's blueprint t.sub.i and the enrolled fingerprint x.sub.i of said physical object, using a hand-crafted or learnable mapper (200), a multi-metric scoring module (230) computing various metrics such as l.sub.p distances, correlation, Hamming distance, moment matching, or some embedded space distances, etc. between the blueprint t.sub.i and the estimate {tilde over (t)}.sub.i obtained from the fingerprint x.sub.i and between the estimates {tilde over (t)}.sub.i and {tilde over (t)}′.sub.i, a multi-metric module (220) computing similar metrics between the fingerprint x.sub.i and the second view fingerprint x′.sub.i, and the OC-classifier (250). The mapper (200) and the OC-classifier (250) are implemented in the same way as in the authentication architecture of FIG. 3. The main difference between the authentication considered in FIG. 4 and that in FIG. 3 consists in the additional usage of the fingerprint x.sub.i of the original object along with the blueprint t.sub.i at the authentication step. At the training stage, the pair of fingerprints x.sub.i and x′.sub.i acquired from the object, along with the blueprint t.sub.i, is used to produce the scores between the estimates of blueprints d.sub.k.sup.t({tilde over (t)}.sub.i,{tilde over (t)}′.sub.i), k=1, . . . , K and between the fingerprints d.sub.l.sup.x(x.sub.i,x′.sub.i), l=1, . . . , L. The scores are combined and the one-class classifier (250) is trained similarly to the method shown in FIG. 3. At the test stage, given the defined mapper (200) and the trained classifier, for a probe y, which might originate from the original fingerprint x.sub.i or from some unknown fake f, the described multi-metric scores are computed and the classifier produces the decision on authenticity based on said multi-metric scores.

    [0127] To emphasize the importance of the mapper (200) and the role of multi-metrics in the methods according to FIGS. 3 and 4, several results on real data shall be presented. For this purpose, a case of printed security features is considered where the blueprint t.sub.i represents a random copy detection pattern of some specified density of black symbols, such as shown in FIG. 5. The produced copy detection codes were printed on the industrial printer HP Indigo 5500 DS with the symbol size 5×5 at the resolution 812 dpi. Four types of fakes were produced using two copy machines: Fakes 1 white: RICOH MP C307 on white paper, Fakes 2 gray: RICOH MP C307 on gray paper, Fakes 3 white: Samsung CLX-6220FX on white paper and Fakes 4 gray: Samsung CLX-6220FX on gray paper. The verification is done using an iPhone XS with the native camera application. The scanned original print representing the fingerprint x.sub.i and the produced fakes f are exemplified in FIGS. 5B to 5F, respectively. The settings of the copy machines were chosen with the embedded pre-processing to reduce the printed dot gain and to obtain a perceptually similar appearance of the fakes to the originals.

    [0128] The authentication with respect to the reference blueprint t.sub.i is shown in FIG. 6 and with respect to the reference fingerprint x.sub.i in FIG. 7.

    [0129] FIG. 6 schematically illustrates the inability of classical metrics used for the authentication architectures presented in FIGS. 3 and 4 with the hand-crafted mapper (200) to reliably distinguish the originals from fakes based on the blueprint reference t.sub.i. The experimental results are based on the samples shown in FIG. 5. Otsu's binarization [76] was used as a typical example of a mapper (200) from the fingerprint to the blueprint. The results presented in this figure clearly demonstrate that none of the considered metrics is capable of reliably differentiating the originals from the fakes based on measuring the proximity to the reference blueprint. Moreover, the results confirm that a basic mapper based on Otsu's binarization, as a representative of the natural hand-crafted mappers for the estimation of binary blueprints from the printed fingerprints, as well as the different manually tuned thresholds used in most state-of-the-art methods, do not achieve perfect distinguishability between the considered classes of originals and fakes. The presented results clearly demonstrate that the sole usage of multi-metrics with the hand-crafted mapper (200) is not sufficient to obtain a good separability between the classes. That is why the mapper (200) should be properly trained in a way explained below.
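For reference, a hand-crafted mapper (200) based on Otsu's binarization [76] can be sketched as follows; this is a minimal numpy illustration, with the convention that dark pixels map to symbol value 1 assumed here for concreteness.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method [76]: choose the threshold maximizing the
    between-class variance of the grayscale histogram."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                      # class-0 weight for each cut
    w1 = 1.0 - w0
    m = np.cumsum(p * centers)             # cumulative first moment
    mu_total = m[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        mu0 = m / w0
        mu1 = (mu_total - m) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
    between[~np.isfinite(between)] = 0.0   # empty classes at the extremes
    return centers[np.argmax(between)]

def handcrafted_mapper(fingerprint):
    """Hand-crafted mapper (200): estimate a binary blueprint from a
    grayscale fingerprint (dark printed symbols -> 1)."""
    return (fingerprint < otsu_threshold(fingerprint)).astype(np.uint8)
```

On clean, well-separated bimodal data such a mapper recovers the blueprint almost exactly; the figures show that on real prints and fakes it is not sufficient on its own.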

    [0130] FIG. 7 further extends the experimental study of the setup presented in FIG. 6 to the case where the physical reference based on the fingerprint x.sub.i is used instead of the blueprint t.sub.i. In contrast to the common belief that the data processing inequality should ensure higher mutual information between the nearest vectors forming the Markov chain, the obtained results presented in FIG. 7 do not provide such a confirmation, which is in accordance with the results presented in FIG. 6. Among the different reasons, one can clearly see in FIG. 7 the main misconception in interpreting the used metrics as mutual information. More particularly, the used metrics reflect only the second order statistical relationship between the observations, while the mutual information reflects the relation on the level of distributions, which includes all moments. For this reason, one of the main differences of an authentication method according to the present invention with respect to prior art methods resides in that the correct estimation of mutual information can provide a considerable enhancement of the authentication accuracy. FIG. 7 underlines that none of the considered classical prior art metrics is capable of reliably authenticating the object. Furthermore, similarly to the results presented in FIG. 6, the use of a hand-crafted mapper (200) in the form of the Otsu binarization is not sufficient and the mapper (200) should be trained.

    [0131] Several multi-metrics were used, and the 2D plots demonstrate the achieved separability between the class of the original objects and the four types of considered fakes in these metrics. Even under simple copy machine fakes, no pair of the considered metrics is capable of producing a reliable separability between the classes for the authentication based on either the reference blueprint or the fingerprint. In this particular case, it was found experimentally that the pair of the Pearson correlation between the blueprint t.sub.i and probe y and the Hamming distance between the blueprint t.sub.i and the binary quantized probe T.sub.Otsu(y), with the mapper (200) implemented via the Otsu threshold selection, produced the best, yet still imperfect, separability between the classes among all considered cases. That is why this pair was chosen to exemplify the decision boundaries of the OC-classifier (250) trained on the pair of these metrics as shown in FIG. 8. FIG. 8 illustrates the decision boundary of the OC-classifier based on the RBF-SVM [77] trained only on the original data. As a metric space for the multi-metric scores, the Pearson correlation between the fingerprint and blueprint and the Hamming distance between the blueprint and binarized fingerprint presented in FIG. 6 were used, for which one obtains the best possible separation between the classes of originals and fakes. One can clearly observe the overlap between the class of the original objects and the classes of the fakes in the space of decision boundaries of the RBF-SVM one-class classifier. Therefore, at least for the considered example, the obtained results demonstrate the inability of the fixed mapper to provide reliable classification. The results shown in FIG. 8, as well as in FIGS. 6 and 7, explain the need to construct a trainable mapper (200) allowing to reliably distinguish originals from fakes.
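The classifier used above is an RBF one-class SVM [77]; as a self-contained stand-in, the following sketch fits a Gaussian envelope to the multi-metric scores of originals only and thresholds the Mahalanobis distance. The class name, the regularizer and the quantile threshold rule are illustrative assumptions, not the actual classifier of the experiments.

```python
import numpy as np

class GaussianOCC:
    """Simplified one-class classifier: fit mean/covariance on score vectors
    of originals only; accept a probe if its squared Mahalanobis distance is
    below a quantile of the training distances. A stand-in for the RBF
    one-class SVM trained only on original data."""
    def fit(self, scores, quantile=0.99):
        self.mu = scores.mean(axis=0)
        cov = np.cov(scores, rowvar=False) + 1e-6 * np.eye(scores.shape[1])
        self.prec = np.linalg.inv(cov)
        self.thr = np.quantile(self._dist(scores), quantile)
        return self

    def _dist(self, scores):
        z = scores - self.mu
        return np.einsum("ij,jk,ik->i", z, self.prec, z)

    def predict(self, scores):
        """True = accepted as original, False = rejected as fake."""
        return self._dist(scores) <= self.thr
```

Trained only on originals, such a classifier rejects score vectors that fall far from the cloud of original multi-metric scores, mirroring the one-class setting of FIG. 8.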

    [0132] To resolve the inability of the above considered classification criteria to reliably distinguish the fakes from the originals, the present invention discloses a method where the mapping between the fingerprints and blueprints is based on a trainable model. Moreover, instead of a simple minimization of multi-metrics between the estimated and reference blueprints to train the mapper, a generalized approach will be considered in which the above strategy is just a particular case. Along this way, it is assumed that the blueprints and fingerprints are statistically dependent and governed by a joint distribution p(t, x). This assumption is easily justified in practice due to the processing chain presented in FIG. 1. This chain demonstrates that the fingerprints x.sub.i are produced from the blueprints t.sub.i via a complex process of manufacturing and imaging, the exact and accurate models of which are unknown as such. Instead of trying to build such a model, in contrast to prior art models of printing [27, 28] or models of imaging sensors [90], which is a very complex task requiring a lot of know-how in manufacturing and imaging as well as adaptation to each particular use case and device, a machine learning approach is used in which knowledge of the physical model is not needed. In general, one can distinguish three system designs according to the present invention to achieve the above goal, depending on the order of processing: (a) a so-called direct path system, (b) a so-called reverse path system and (c) a so-called two-way path system that combines the direct and reverse path systems. Each system is based on the mapping of the fingerprint/blueprint to the blueprint/fingerprint, respectively.
To ensure that the mapping is correct and corresponds to the joint prior p(t, x), represented in practice by the training examples {t.sub.i,x.sub.i}.sub.i=1.sup.N, and to the marginals, represented by the unpaired examples {t.sub.i}.sub.i=1.sup.N and {x.sub.i}.sub.i=1.sup.N, it is assumed that the mapping should satisfy two constraints. The first constraint ensures that the produced estimates of blueprint/fingerprint correspond to their paired versions, if such are available, or otherwise follow the corresponding marginal distributions, or satisfy both conditions simultaneously. The second constraint ensures that the produced estimates contain enough information to reconstruct the corresponding original counterparts from them.

    Direct Path System

    [0133] The direct path authentication system is shown in FIG. 9 which, in particular, schematically illustrates a generalized principle of training in the direct approach for physical object authentication according to the present invention based on N pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N, and J unpaired data samples of blueprints {t.sub.j}.sub.j=1.sup.J and fingerprints {x.sub.j}.sub.j=1.sup.J. The paired and unpaired data are symbolized in FIG. 9 as being grouped by corresponding rectangles at the input of the system. The direct system comprises an encoder-decoder pair trained generally on both paired and unpaired data. The encoder (200) produces an estimate {tilde over (t)} of the blueprint from the fingerprint x, and the decoder (210) maps it back to the estimate {circumflex over (x)} of the fingerprint to ensure the fidelity of the whole model. It should be pointed out that the printing model and the imaging setup are unknown as such and the training is based on the training data only. Furthermore, fakes are not taken into account in the training of the OC-classifier (250). At the training stage 1, the training process is performed according to the loss function defined by the decomposition of the terms of mutual information, as described in the following description, to train the encoder-decoder pair. The estimation of the blueprint {tilde over (t)} is performed according to the metrics defined in the module (400) based on the mutual information term I.sub.E(X; T). This module contains a set of multi-metric similarity distances (230) and discriminators (231) obtained from the variational decomposition of the mutual information as shown in the following description. Additionally, one can also use existing mutual information estimators such as those based on the Donsker-Varadhan bound, e.g. the mutual information neural estimator (MINE) [78].
The variational decomposition presented in [79] is used, since it allows one to obtain an intuitive interpretability of the obtained estimators and a technically tractable implementation. The similarity distance module (230) is based on the paired metrics, while the discriminator (231) integrates an estimation over samples from two distributions that are generally unpaired. Examples of paired distances include but are not limited to the L2-norm, L1-norm, Lp-norm, inner product, cosine distance, Hamming distance, Pearson correlation, etc. The discriminator estimates the proximity of the two distributions represented by the samples {{tilde over (t)}.sub.j}.sub.j=1.sup.J and {t.sub.j}.sub.j=1.sup.J and can be implemented based on: (a) class probability estimation based on density ratio estimation [80, 81], (b) divergence minimization [82] and (c) ratio matching [83, 84], or alternatively based on moment matching implemented with kernels and known as the maximum mean discrepancy [85].
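Among the listed discriminator options, the kernel-based maximum mean discrepancy is the simplest to sketch. The following minimal numpy estimator, with an assumed fixed RBF kernel bandwidth, compares two unpaired sample sets such as the estimated and real blueprints.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    """RBF (Gaussian) kernel matrix between two sets of flattened samples."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(samples_p, samples_q, sigma=1.0):
    """Squared maximum mean discrepancy between two sample sets:
    a small value suggests the unpaired samples (e.g. estimated and real
    blueprints) follow close distributions."""
    kpp = rbf_kernel(samples_p, samples_p, sigma)
    kqq = rbf_kernel(samples_q, samples_q, sigma)
    kpq = rbf_kernel(samples_p, samples_q, sigma)
    return kpp.mean() + kqq.mean() - 2.0 * kpq.mean()
```

In practice the bandwidth sigma would be tuned (e.g. by the median heuristic); here it is left as a fixed assumption for clarity.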

    [0134] The reconstruction of the fingerprint is performed according to the metrics defined in the module (500), which consists of similar paired similarity metrics (221) and unpaired discriminators (220) defining the approximation to the mutual information term I.sub.E,D(T; X). The implementation of the metrics is similar to the above.

    [0135] At stage 2, the outputs of all paired and unpaired metrics are concatenated into a feature vector that serves as an input to the OC-classifier (250). It should be pointed out that the complexity of training the OC-classifier based on the proposed feature vector is considerably lower compared to training in methods based on the direct classification of samples in the high-dimensional space of input data. The training of classifiers in the input space is highly complex, even when using the dual form of representation in systems based on the OC-SVM. Therefore, by considering the input of the OC-classifier (250) as a concatenated vector of multi-metric scores, one can considerably reduce the complexity of the classifier training in a domain where the classes are better separable.

    [0136] The direct system is based on the encoder (200) and the decoder (210) that are trained jointly based on the maximization of the two mutual information terms I.sub.E(X; T) and I.sub.E,D(T; X). The first term I.sub.E(X; T), represented by (400), denotes the mutual information between the fingerprint and the blueprint considered via the encoder (200) and decomposed as [79]:


    I.sub.E(X;T):=−D.sub.KL(p.sub.Data(t)∥p.sub.E(t);E)−H.sub.E(T|X),  (2)

    where D.sub.KL(p.sub.Data(t)∥p.sub.E(t); E)=D.sup.t{tilde over (t)}({tilde over (t)}) denotes the Kullback-Leibler divergence (KLD) between the blueprint data distribution p.sub.Data(t) and the encoded one p.sub.E(t). We will refer to the KLD as a discriminator between the two distributions p.sub.Data(t) and p.sub.E(t). The discriminator estimates the proximity of the two distributions represented by the samples {{tilde over (t)}.sub.j}.sub.j=1.sup.J and {t.sub.j}.sub.j=1.sup.J generated from these distributions and can be implemented based on: (a) class probability estimation based on density ratio estimation [80], (b) divergence minimization [82] and (c) ratio matching [83], or alternatively based on moment matching implemented with kernels and known as the maximum mean discrepancy [101].

    [0137] The term H.sub.E(T|X)=−E.sub.p(t,x)[log q.sub.E(t|x)] denotes the conditional entropy, where E.sub.p(t,x)[.Math.] denotes the mathematical expectation with respect to the distribution p(t, x). We define q.sub.E(t|x)∝e.sup.−λ.sup.t{tilde over (t)}.sup.d.sup.t.sup.(t,{tilde over (t)}) with λ.sub.t{tilde over (t)} a normalization parameter, d.sup.t(t, {tilde over (t)}) a particular metric of similarity between t and {tilde over (t)}, and {tilde over (t)}=f.sub.E(x) representing the deterministic part of the encoder mapping. Therefore, H.sub.E(T|X)∝λ.sub.t{tilde over (t)}d.sup.t(t, {tilde over (t)}). The corresponding blocks of (400) are denoted as (230) and (231). The subindices of (230) and (231) correspond to the different selections of multi-metric scores.

    [0138] The second term I.sub.E,D(T; X) represented by (500) denotes the mutual information between the encoded blueprint and targeted fingerprint considered via the encoder (200) and decoder (210) and decomposed as:


    I.sub.E,D(T;X):=−D.sub.KL(p.sub.Data(x)∥p.sub.D(x);E,D)−H.sub.E,D(X|T),  (3)

    [0139] where D.sub.KL(p.sub.Data(x)∥p.sub.D(x); E, D)=D.sup.x{circumflex over (x)}({circumflex over (x)}) denotes the Kullback-Leibler divergence between the fingerprint distribution p.sub.Data(x) and its reconstructed counterpart p.sub.D(x). The second term in the above decomposition is H.sub.E,D(X|T)=−E.sub.p.sub.Data.sub.(x)[E.sub.q.sub.E.sup.(t|x)[log p.sub.D(x|t)]]. Assuming p.sub.D(x|t)∝e.sup.−λ.sup.x{circumflex over (x)}.sup.d.sup.x.sup.(x,{circumflex over (x)}), with λ.sub.x{circumflex over (x)} denoting the normalization parameter and d.sup.x(x,{circumflex over (x)}) denoting some distance between x and {circumflex over (x)}=g.sub.D({tilde over (t)}) with {tilde over (t)}=f.sub.E(x), one can re-write H.sub.E,D(X|T)∝λ.sub.x{circumflex over (x)}d.sup.x(x, {circumflex over (x)}). The corresponding blocks of (500) are denoted as (221) and (220).

    [0140] The direct path architecture training problem is based on the maximization problem:

    (Ê,{circumflex over (D)})=arg max.sub.E,D I.sub.E(X;T)+λI.sub.E,D(T;X),  (4)

    that consists in finding the parameters of the encoder and decoder (Ê, {circumflex over (D)}) with λ denoting the Lagrangian multiplier controlling the trade-off between the two terms.

    [0141] The maximization problem (4) is reduced to a minimization problem using (2) and (3):

    (Ê,{circumflex over (D)})=arg min.sub.E,D Λ.sup.x(E,D),  (5)

    where Λ.sup.x(E,D)=[D.sup.t{tilde over (t)}({tilde over (t)})+λ.sub.t{tilde over (t)}d.sup.t(t, {tilde over (t)})]+λ[D.sup.x{circumflex over (x)}({circumflex over (x)})+λ.sub.x{circumflex over (x)}d.sup.x(x,{circumflex over (x)})].
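As an illustrative sketch, the scalar loss (5) can be assembled from the two paired distances and two discriminator scores; the concrete distance choices (mean absolute difference for blueprints, mean squared error for fingerprints) and the discriminator callables are placeholder assumptions, not the patent's fixed selection.

```python
import numpy as np

def direct_path_loss(t, t_est, x, x_rec, disc_t, disc_x,
                     lam=1.0, lam_tt=1.0, lam_xx=1.0):
    """Loss (5): Lambda^x(E,D) = [D^{t t~}(t~) + lam_tt * d^t(t, t~)]
                           + lam * [D^{x x^}(x^) + lam_xx * d^x(x, x^)].

    disc_t / disc_x are callables returning a scalar discriminator score
    for the estimated blueprints / reconstructed fingerprints
    (placeholders for e.g. a density-ratio or MMD discriminator)."""
    d_t = np.abs(t - t_est).mean()        # paired blueprint distance d^t
    d_x = ((x - x_rec) ** 2).mean()       # paired fingerprint distance d^x
    return (disc_t(t_est) + lam_tt * d_t) + lam * (disc_x(x_rec) + lam_xx * d_x)
```

With perfect estimates and silent discriminators the loss is zero; any estimation, reconstruction or distribution mismatch contributes a positive term, which is what the alternating training of the encoder-decoder pair and discriminators minimizes.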

    [0142] The discriminators (231) and (221) are fixed during the above training. Once the parameters of the encoder and decoder are estimated at this stage, the discriminators are updated and the next epoch of training is repeated.

    [0143] Once the encoder-decoder pair and the discriminators are trained, the multi-metric scores are used for the OC-classifier (250) training, which finalizes the training of the direct path.

    [0144] At the testing stage of the direct path, such as shown in FIG. 10, the system authenticates the probe y, representing the fingerprint of an object to be authenticated, with respect to the blueprint t.sub.i. For this purpose the probe y is encoded and decoded and the produced multi-metric scores are validated via the OC-classifier. FIG. 10 schematically illustrates a generalized principle of a testing or authentication stage in the direct path for authentication of a physical object i according to the present invention based on the probe y and the blueprint t.sub.i of the object under authentication. The authentication uses as an input the probe y and the reference blueprint t.sub.i and produces the output about the authenticity of y with respect to t.sub.i. The authentication architecture comprises the pre-trained encoder (200) and decoder (210), the pre-training of said encoder and decoder being symbolized in FIG. 10 with the star sign, the multi-metric scoring (230) and (231) for the reconstructed template {tilde over (t)}.sub.i and the multi-metric scoring (220) and (221) for the reconstructed image {circumflex over (x)}, and the OC-classifier (250) producing a decision based on the concatenated multi-metric feature vector. All components of the considered architecture are implemented as described in the context of FIG. 9.

    [0145] It should be pointed out that the presented direct system is a generalization of the system presented in FIG. 3, where the mapper (200) corresponds to the encoder (200) in FIG. 9. At the same time, the training of the encoder in FIG. 9 is regularized by the need to ensure a reliable reconstruction of the fingerprint from the estimated blueprint, whereas such a constraint is absent in the system shown in FIG. 3. In addition, the system explained in FIG. 9 is trained based on the above decomposition of the mutual information, which includes both the similarity metrics and the discriminators, whereas the system explained in FIG. 3 uses just one term for the paired data whilst the unpaired data are not included in the training of FIG. 3.

    Reverse Path System

    [0146] FIG. 11 schematically illustrates a reverse system according to the present invention comprising an encoder-decoder pair trained on both paired and unpaired data. The order of processing is reversed with respect to the direct system. The decoder (210) produces an estimate {tilde over (x)} of the fingerprint from the blueprint t, and the encoder (200) maps it back to the estimate {circumflex over (t)} of the blueprint to ensure the fidelity of the whole model. The OC-classifier (250) produces a decision based on the feature vector. The training of the reverse system is similar to that of the direct one. At stage 1, the training process is performed according to the loss function defined by the decomposition of the terms of mutual information to train the decoder-encoder pair. The estimation of the fingerprint {tilde over (x)} is performed according to the metrics defined in the module (600) that serves as an approximation to the mutual information term I.sub.D(X; T). This module contains a set of multi-metric similarity distances (220) and discriminators (222). The reconstruction of the blueprint {circumflex over (t)} is performed according to the metrics defined in the module (700) that comprises similar paired similarity metrics (230) and unpaired discriminators (232) defining the approximation to the mutual information term I.sub.D,E(T; X). The implementation of the metrics is similar to that described above. At stage 2, the outputs of all paired and unpaired metrics are concatenated into a feature vector that serves as an input to the OC-classifier (250).

    [0147] The reverse system is based on the encoder (200) and the decoder (210) that are trained jointly based on the maximization of the two mutual information terms I.sub.D(X; T) and I.sub.D,E(T; X). It represents a reversed version of the direct system. The first term I.sub.D(X; T), represented by (600), denotes the mutual information between the fingerprint and the blueprint considered via the decoder (210) and decomposed as [79]:


    I.sub.D(X;T):=−D.sub.KL(p.sub.Data(x)∥p.sub.D(x);D)−H.sub.D(X|T),  (6)

    where D.sub.KL(p.sub.Data(x)∥p.sub.D(x); D)=D.sup.x{tilde over (x)}({tilde over (x)}) denotes the Kullback-Leibler divergence between the fingerprint data distribution p.sub.Data(x) and the decoded one p.sub.D(x). The term H.sub.D(X|T)=−E.sub.p(t,x)[log p.sub.D(x|t)] denotes the conditional entropy, where E.sub.p(t,x)[.Math.] denotes the mathematical expectation with respect to the distribution p(t, x). We define p.sub.D(x|t)∝e.sup.−λ.sup.x{tilde over (x)}.sup.d.sup.x.sup.(x,{tilde over (x)}) with λ.sub.x{tilde over (x)} being a normalization parameter, d.sup.x(x, {tilde over (x)}) denoting a particular metric of similarity between x and {tilde over (x)}, and {tilde over (x)}=g.sub.D(t). Therefore, H.sub.D(X|T)∝λ.sub.x{tilde over (x)}d.sup.x(x,{tilde over (x)}).

    [0148] The second term I.sub.D,E (T; X) represented by (700) denotes the mutual information between the decoded fingerprint and targeted blueprint considered via the decoder (210) and encoder (200) and decomposed as:


    I.sub.D,E(T;X):=−D.sub.KL(p.sub.Data(t)∥p.sub.E(t);E,D)−H.sub.D,E(T|X),  (7)

    where D.sub.KL(p.sub.Data(t)∥p.sub.E(t); E, D)=D.sup.t{circumflex over (t)}({circumflex over (t)}) denotes the Kullback-Leibler divergence between the blueprint distribution p.sub.Data(t) and its reconstructed counterpart p.sub.E(t), with {circumflex over (t)}=f.sub.E(g.sub.D(t)).

    [0149] The second term H.sub.D,E(T|X)=−E.sub.p.sub.Data.sup.(t)[E.sub.p.sub.D.sup.(x|t)[log q.sub.E(t|x)]] denotes the conditional entropy and corresponds to the estimation of the blueprint from the synthetic fingerprint produced by the decoder. Assuming q.sub.E(t|x)∝e.sup.−λ.sup.t{circumflex over (t)}.sup.d.sup.t.sup.(t,{circumflex over (t)}), with λ.sub.t{circumflex over (t)} denoting the normalization parameter and d.sup.t(t, {circumflex over (t)}) denoting some distance between t and {circumflex over (t)}=f.sub.E({tilde over (x)}), with {tilde over (x)}=g.sub.D(t), we re-write H.sub.D,E(T|X)∝λ.sub.t{circumflex over (t)}d.sup.t(t,{circumflex over (t)}).

    [0150] The reverse path training problem is based on the maximization problem:

    (Ê,{circumflex over (D)})=arg max.sub.E,D I.sub.D(X;T)+λI.sub.D,E(T;X).  (8)

    [0151] The maximization problem (8) is reduced to a minimization problem using (6) and (7):

    (Ê,{circumflex over (D)})=arg min.sub.E,D Λ.sup.t(E,D),  (9)

    where Λ.sup.t(E,D)=[D.sup.x{tilde over (x)}({tilde over (x)})+λ.sub.x{tilde over (x)}d.sup.x(x,{tilde over (x)})]+λ[D.sup.t{circumflex over (t)}({circumflex over (t)})+λ.sub.t{circumflex over (t)}d.sup.t(t, {circumflex over (t)})], with λ denoting the Lagrangian multiplier controlling the trade-off between the two terms.

    [0152] Once the encoder-decoder and discriminators are trained, the multi-metric scores are used for the OC-classifier (250) training, which finalizes the training of the reverse path.

    [0153] At the testing stage of the reverse path, such as shown in FIG. 12, the system authenticates the probe y representing the fingerprint of an object to be authenticated with respect to the blueprint t.sub.i. For this purpose the blueprint t.sub.i is decoded into the estimated fingerprint {tilde over (x)}.sub.i, which is compared with the probe y via the produced multi-metric scores that are validated via the OC-classifier. FIG. 12 schematically illustrates a generalized principle of a testing or authentication stage in the reverse system according to the present invention. The authentication uses as an input the probe y and the reference blueprint t.sub.i and produces the output about the authenticity of probe y with respect to blueprint t.sub.i, similarly to the direct system. The authentication architecture comprises the pre-trained encoder (200) and decoder (210), pre-training of said decoder being symbolized in FIG. 12 with the star sign, the multi-metric scoring (220) and (222) for the reconstructed fingerprint {tilde over (x)}.sub.i and the OC-classifier producing a decision based on the concatenated multi-metric feature vector. All components of the considered architecture are implemented as described in the context of FIG. 9.
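The reverse-path authentication just described can be sketched end to end: decode the blueprint into an estimated fingerprint, compute a multi-metric score vector against the probe y, and validate it with a one-class rule trained on authentic probes only. The box-shaped one-class classifier and the synthetic data below are illustrative assumptions, not the patent's OC-classifier:

```python
import numpy as np

def pearson(a, b):
    # Sample Pearson correlation between two signals
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def multi_metric_scores(y, x_tilde):
    """Feature vector of similarity scores between probe y and decoded fingerprint."""
    return np.array([
        np.abs(y - x_tilde).mean(),            # L1 distance
        np.sqrt(((y - x_tilde) ** 2).mean()),  # L2 (RMSE)
        pearson(y, x_tilde),                   # correlation
    ])

class BoxOneClassClassifier:
    """Toy OC-classifier: accept iff every score lies within the per-dimension
    range observed on authentic training samples (plus a small margin)."""
    def fit(self, features, margin=0.05):
        self.lo = features.min(axis=0) - margin
        self.hi = features.max(axis=0) + margin
        return self
    def predict(self, f):
        return bool(np.all((f >= self.lo) & (f <= self.hi)))

rng = np.random.default_rng(0)
x_tilde = rng.random(64)  # stands in for the decoded fingerprint g_D(t_i)
# Authentic probes = decoded fingerprint plus small acquisition noise
train = np.stack([multi_metric_scores(x_tilde + rng.normal(0, 0.02, 64), x_tilde)
                  for _ in range(50)])
clf = BoxOneClassClassifier().fit(train)
authentic = clf.predict(multi_metric_scores(x_tilde + rng.normal(0, 0.02, 64), x_tilde))
fake = clf.predict(multi_metric_scores(rng.random(64), x_tilde))  # unrelated probe
```

With this setup the authentic probe falls inside the learned score box while the unrelated probe falls outside on every metric.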

    Two-Way System

    [0154] The two-way system is based on the direct and reverse paths and is shown in FIG. 13, which, in particular, schematically illustrates a generalized principle of training in a two-way approach for physical object authentication according to the present invention based on N pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N, and J unpaired data samples {t.sub.j}.sub.j=1.sup.J and {x.sub.j}.sub.j=1.sup.J. The paired and unpaired data are symbolized in FIG. 13 as being grouped by corresponding rectangles. The training is based on the decomposition of 6 mutual information terms to cover the multi-metric scores based on the similarity metrics and discriminators computed between: (a) x and {tilde over (x)}, (b) x and {circumflex over (x)}, (c) {tilde over (x)} and {circumflex over (x)}, (d) t and {tilde over (t)}, (e) t and {circumflex over (t)} and (f) {tilde over (t)} and {circumflex over (t)}. All these metrics are computed in block (2000). The training of the two-way system can be performed as an alternation procedure between the direct path represented by the chain x.fwdarw.{tilde over (t)}.fwdarw.{circumflex over (x)} and the reverse path represented as t.fwdarw.{tilde over (x)}.fwdarw.{circumflex over (t)}, or both ways simultaneously with the corresponding metrics and discriminators. The block (2000) outputs a multidimensional feature vector combining all the above multi-metric scores. At the second stage, the OC-classifier (250) is trained using the above feature vector as an input. The training procedure thus consists of two stages. 
At the first stage, given the training pairs {t.sub.i,x.sub.i}.sub.i=1.sup.N and unpaired samples {t.sub.j}.sub.j=1.sup.J and {x.sub.j}.sub.j=1.sup.J, the encoder and decoder pair is trained jointly with the discriminators in the direct and reverse cycles. Alternatively, one can formulate the common loss:


    Λ.sup.Two-way(E,D)=Λ.sup.x(E,D)+βΛ.sup.t(E,D),  (10)

    where β denotes the Lagrangian multiplier, as a combination of the direct and reverse objectives. At the same time, the two-way system is not just a mere combination of the previous objectives: it also includes the cross terms between the two modalities. It can be formalized as a decomposition of 6 mutual information terms as pointed out above. Therefore, the mutual information includes four terms considered in the direct and reverse parts, i.e. I.sub.E(X; T), I.sub.E,D(T; X), I.sub.D(X; T) and I.sub.D,E(T; X), and two cross-terms between the direct and reverse parts denoted as I.sub.E,D,D(T; X) and I.sub.D,E,E(T; X). Each mutual information term is decomposed into two terms comprising the paired similarity metric and the discriminator term computed using multi-metric approximations, similarly to the direct and reverse systems.
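The common loss (10) itself is a one-line weighted combination of the direct and reverse objectives; a sketch with illustrative names:

```python
def two_way_loss(loss_x, loss_t, beta=1.0):
    """Two-way objective of eq. (10): Lambda^x(E,D) + beta * Lambda^t(E,D)."""
    return loss_x + beta * loss_t

# Example: combine precomputed direct and reverse objective values
total = two_way_loss(0.12, 0.08, beta=0.5)
```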

    [0155] The authentication stage represents the testing and is shown in FIG. 14. Given the trained encoder-decoder pair, the discriminators, the chosen similarity metrics and the OC-classifier according to the description provided above, the authentication is based on the authentication of probe y versus the reference blueprint t.sub.i and fingerprint x.sub.i. The multi-metrics are computed in (2000) and the feature vector comprising the scores produced by these metrics is fed to the OC-classifier (250). It is important to note that the considered system represents an important countermeasure against both hand-crafted (HC) and machine learning (ML) based attacks by producing a trade-off between the decision produced from the reference blueprint t.sub.i and the reference fingerprint x.sub.i. This trade-off is based on the following considerations.

    [0156] Assuming that a fake originates from HC and ML attacks targeting the estimation of t.sub.i from x.sub.i with the further accurate reproduction of the estimated blueprint, thus creating a fake object, the defender is interested in introducing some distortions during the reproduction of blueprint t.sub.i into a physical object x.sub.i. These distortions will prevent the HC and ML attacks from an accurate estimation of {tilde over (t)}.sub.i. However, at the same time, this creates a considerable difference between the blueprint t.sub.i and its fingerprint x.sub.i even for the authentic objects, which makes the distinction between the originals and fakes challenging. That is why the usage of a sole direct-system based authentication might be insufficient in view of the low accuracy of prediction of {tilde over (t)} from y. At the same time, a sole reverse-system based authentication is based on the generation of {tilde over (x)}.sub.i from t.sub.i and its comparison with the probe y. If the level of distortions is high and the distortions are produced at random, the accuracy of prediction might also be reduced. That is why the two-way system has several additional options, where x.sub.i is compared with y directly and {tilde over (x)}.sub.i is compared with {circumflex over (x)}.sub.i and with x.sub.i. The same is valid for the blueprint part. The advantage of the two-way system is that all possible combinations are present in the multi-metric feature vector and the OC-classifier can automatically choose the metric or dimension representing the most informative component for a particular case of interest.

    [0157] To demonstrate the advantages of such a multi-metric system in practice, a simple setup of the direct path:


    Λ.sup.x(E,D)=d.sub.l.sub.1(t,{tilde over (t)})+λd.sub.l.sub.1(x,{circumflex over (x)})

    was trained, i.e. based only on similarity metrics and no discriminators. Once the encoder-decoder pair was trained, a feature vector combining several multi-metrics was constructed.

    [0158] FIG. 17 presents a 3D visualization of class separability for the direct system in the multi-metric space of: d.sub.pearson(y, t.sub.i)—the Pearson metric computed between the probe and blueprint, d.sub.l.sub.2(y, {circumflex over (x)})—the Euclidean metric between the probe and the encoded-decoded probe via the trained system, and d.sub.Hamming(T.sub.Otsu(y), t.sub.i)—the Hamming distance between the probe binarized via Otsu's thresholding and the blueprint.
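The three metrics of FIG. 17 can be sketched as follows. The implementation below (Otsu's threshold via maximization of the between-class histogram variance, Pearson via the sample correlation coefficient) is an illustrative reconstruction, and the synthetic blueprint/probe data are made up for the example:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's threshold: pick the level maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                       # class-0 weight
    w1 = 1 - w0                                # class-1 weight
    cum_mu = np.cumsum(hist * centers)
    mu0 = cum_mu / np.where(w0 > 0, w0, 1)
    mu1 = (cum_mu[-1] - cum_mu) / np.where(w1 > 0, w1, 1)
    var_between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(var_between)]

def metric_vector(y, x_hat, t):
    """The three scores of FIG. 17 (illustrative implementation)."""
    d_pearson = np.corrcoef(y.ravel(), t.ravel())[0, 1]
    d_l2 = np.sqrt(((y - x_hat) ** 2).sum())
    y_bin = (y > otsu_threshold(y)).astype(int)
    d_hamming = (y_bin != t).mean()
    return np.array([d_pearson, d_l2, d_hamming])

rng = np.random.default_rng(1)
t = rng.integers(0, 2, 256)                    # binary blueprint
y = 0.2 + 0.6 * t + rng.normal(0, 0.05, 256)   # authentic probe: bright where t = 1
x_hat = y + rng.normal(0, 0.01, 256)           # encoded-decoded probe
scores = metric_vector(y, x_hat, t)
```

For an authentic probe the Pearson score is high and the Hamming distance near zero; a fake shifts all three coordinates at once, which is what makes the classes separable in this space.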

    [0159] FIG. 18 shows a t-SNE computed from a concatenation of multi-metrics: the three from FIG. 17, d.sub.l.sub.1(y, t.sub.i), representing the L1-norm between the probe and the blueprint, and d.sub.Hamming(t.sub.i,{tilde over (t)}). In both cases, one can observe errorless separability between the classes of originals and fakes. As in the previous case of FIG. 17, this is achieved although no information about the fakes has been used and the encoder-decoder was trained only on the original data, all authentication being performed only with respect to the blueprint.

    [0160] Furthermore, the method according to the present invention can also be trained not only on the original paired data but also on unpaired data. To exemplify this possibility, we will consider the reverse path, whereas the direct path is applied symmetrically as demonstrated above. The methods representing the reverse path of the proposed method for the unpaired data are presented in FIGS. 15 and 16. The selection of the reverse path for demonstration purposes has two objectives. First, the option of unpaired training as such, with its straightforward application to the authentication problem, will be demonstrated. Secondly, it will be demonstrated that one can use either paired or unpaired reverse systems to generate synthetic samples of original fingerprints and fakes from the blueprints only. Such a generation of synthetic samples for any new blueprint creates a lot of practical advantages that will be discussed below in detail.

    [0161] FIG. 15 schematically illustrates a reverse system training based on an unpaired blueprint t and several fingerprints x.sub.1, . . . , x.sub.K defining a targeted fingerprint {tilde over (x)}, which should represent the input blueprint t at the output of the decoder (210). These fingerprints, i.e. images x.sub.1, . . . , x.sub.K, define the expected appearance of the blueprint t in the targeted fingerprint {tilde over (x)}. The encoder (200) regenerates the blueprint {circumflex over (t)} from the targeted fingerprint {tilde over (x)} and the corresponding metrics are applied to ensure the similarity of the generated {tilde over (x)} to the desired class and also the correctness of decoding of the blueprint {circumflex over (t)}. The features determining the appearance of the blueprint are extracted by a set of encoders (2001) that consist of several downsampling convolutional layers and several ResNet layers [73]. The outputs of these encoders are fused in a common vector in block (251). The decoder (210) and encoder (200) have latent representations {tilde over (z)}.sub.t and {circumflex over (z)}.sub.t. Both the encoder and decoder share a similar structure that can be implemented either in a form of U-net, where {tilde over (z)}.sub.t and {circumflex over (z)}.sub.t represent the bottleneck layers, or as internal encoder-decoder pairs. The internal encoders of (210) and (200) can be implemented as downsampling CNNs followed by several ResNet modules or FLOWs, while the internal decoders of (210) and (200) can be implemented as ResNet modules or FLOWs followed by upsampling CNN modules. 
The internal decoder of (210) also performs a function of fusion of both latent representations {tilde over (z)}.sub.t and {tilde over (z)}.sub.x, which can be implemented either by concatenation, by product, or by centering and normalization of {tilde over (z)}.sub.t followed by adding the mean of {tilde over (z)}.sub.x and scaling by the standard deviation of {tilde over (z)}.sub.x. Thus, the role of the internal decoder of (210) is to fuse the two representations and to generate a new synthetic fingerprint {tilde over (x)}. To validate the correctness of the generated instance {tilde over (x)}, the proposed system has several levels of control based on observable data and latent data. These levels of control should validate that the generated {tilde over (x)} contains the information about the particular t, resembles the statistics of general fingerprints, and at the same time carries the features of the particular group x.sub.i and resembles the statistics of general x. To ensure that {tilde over (x)} contains the correct information about the particular t that serves as the input to (210), the encoder (200) re-estimates {circumflex over (t)} from {tilde over (x)}. The fidelity of this generation in the observation space is ensured by the similarity metric (230) with respect to the input t and also by block (232) with respect to the statistics of general t, both combined in the module (700). At the same time, the fidelity of t in the latent space, i.e. of the representation {circumflex over (z)}.sub.t decoded from {tilde over (x)} with respect to {tilde over (z)}.sub.t, is ensured by the similarity metric (254) and, with respect to the latent representation of generic t extracted in (2004), in the discriminator (255). 
The fidelity of the generated synthetic {tilde over (x)} with respect to the general class of x in the observation space is validated by the discriminator (222), while in the latent space the discriminator (252) ensures the similarity of the latent representation extracted in (2003) from {tilde over (x)} to the one extracted from generic x in the encoder (2002). Finally, the fidelity to a targeted set of x.sub.i is ensured by the similarity metric (253), which requires that the integral vector {tilde over (z)}.sub.x is close to {circumflex over (z)}.sub.x, i.e., a newly generated instance {tilde over (x)} contains the desired appearance and statistics extracted from the set x.sub.i. It is important to point out that discriminator (250) ensures the maximization of differences in the latent representations {tilde over (z)}.sub.t and {tilde over (z)}.sub.x, i.e. that these vectors should not contain the same features.
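The centering-and-normalization fusion of the latent representations described above is, in essence, an AdaIN-style operation: normalize the blueprint latent, then re-inject the mean and standard deviation of the fingerprint latent. A sketch under that reading (function name and dimensions are illustrative):

```python
import numpy as np

def fuse_latents(z_t, z_x, eps=1e-8):
    """Center and normalize the blueprint latent z_t, then rescale it by the
    standard deviation of the fingerprint latent z_x and add the mean of z_x."""
    z_t_norm = (z_t - z_t.mean()) / (z_t.std() + eps)
    return z_t_norm * z_x.std() + z_x.mean()

rng = np.random.default_rng(2)
z_t = rng.normal(0.0, 1.0, 128)   # blueprint latent
z_x = rng.normal(3.0, 0.5, 128)   # fingerprint-appearance latent
fused = fuse_latents(z_t, z_x)
```

The fused vector keeps the structure of z_t while carrying the first- and second-order statistics of z_x, which is exactly what lets the appearance of the targeted fingerprint be controlled.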

    [0162] FIG. 16 schematically illustrates a reverse system training based on the encoding of the blueprint t and a set of parameters determining the appearance of a targeted fingerprint {tilde over (x)} according to printer settings (264) and phone settings (265). The scheme is similar to the one presented in FIG. 15. The main difference consists in that the feature vectors determining the appearance of the targeted fingerprint {tilde over (x)} are defined in a form of parameters, whereas in the scheme according to FIG. 15 the feature vectors are defined in a form of fingerprints x.sub.1, . . . , x.sub.K. In particular, in the scheme according to FIG. 16, the printer settings (264) are encoded by block (260), consisting of fully connected layers and optionally ResNet layers, producing a feature vector {tilde over (z)}.sub.pr, and the phone settings (265) are encoded in (262) into {tilde over (z)}.sub.ph. Blocks (261) and (263) denote the similarity metrics between the corresponding feature vectors ensuring the fidelity of data encoding, i.e. that a newly generated vector {tilde over (x)} contains the desired features of the printer and phone settings. Blocks (222) and (252) are optional depending on the availability of the corresponding examples for the desired printer and phone settings. If no such examples are present, these blocks are excluded.

    Generation of Synthetic Examples

    [0163] A system according to the present invention can also be used for the generation of synthetic samples of both original fingerprints and fakes. The availability of synthetic samples might be of great advantage to enhance the accuracy of authentication, as will be explained below. For example, the trained decoder (210) of the systems presented in FIGS. 12, 14, 15 and 16 can be used for this purpose for the mapping t.fwdarw.{tilde over (x)}. The system can be trained either on the original fingerprints only or also on a small amount of fakes. Typically, about 100 examples would greatly suffice to train such a system. The system can also be trained on different original samples taken by different models of imaging devices and in various imaging conditions to simulate the variability and impact of these factors on the intra-class variability. Furthermore, the variability of manufacturing can be easily simulated to investigate the factor of equipment aging, deviations between different types of equipment or even within the same class of equipment, or the impact of other various factors including the human factor. Finally, fake fingerprints can also be simulated from the original ones, even if the direct pairs are not available. Since this situation is quite rare in practice, we will focus on the case when the generation of synthetic examples is performed from the blueprints.

    [0164] At the same time, the parameters determining the training of the system, such as the above mentioned Lagrangian coefficients, can be chosen depending on the desired proximity of the produced synthetic samples to the reference ones. To exemplify these possibilities, we will use the two-way system represented by the direct path:


    Λ.sup.x(E,D)=d.sub.l.sub.1(t,{tilde over (t)})+λd.sub.l.sub.1(x,{circumflex over (x)})

    and by the reverse path:


    Λ.sup.t(E,D)=d.sub.l.sub.1(t,{circumflex over (t)})+λd.sub.l.sub.1(x,{tilde over (x)}).

    [0165] The encoder and decoder of this system are implemented as a U-NET architecture for demonstration purposes. This is a very simple system with fast convergence, and we train it on the originals and four types of fakes presented in FIG. 5 with three parameters λ=0.1, λ=1.0, λ=25.0. No knowledge about the printing and imaging equipment was used, and the system was trained from 120 samples of each group using basic augmentations such as flipping, slight rotations and non-linear contrast modifications. The trained decoder (210) is used to produce new synthetic fingerprints for blueprints unseen at the training stage.

    [0166] Several examples of generated synthetic originals are shown in FIG. 19, and of fake 1 and fake 3 in FIGS. 20 and 21, respectively. In particular, FIG. 19 presents an example of synthetic generation of fingerprints from given blueprints for various parameters controlling the proximity to the original data. The generated synthetic fingerprints are compared to original fingerprints enrolled from the physical objects, and FIG. 19 shows that the synthetically generated fingerprints closely resemble the original ones. FIG. 20 illustrates the generation of fakes of type 1 from digital blueprints. The direct system was trained on the blueprints and fakes of type 1. The synthetic samples of fakes are compared to the original fingerprints and their real fakes. FIG. 21 shows the same results for fakes of type 3, which have a different appearance as compared to fakes of type 1. It is demonstrated that the system can also very closely simulate fakes of this type, which can be applied to any new blueprint. The generation was performed for the test datasets, and the used physical references were not used at the training stage and are presented only for validation purposes. The generated samples exhibit very close visual similarity to the true physical data.

    Usage of Generation of Synthetic Examples

    [0167] The generated synthetic samples simulating both originals and fakes can be used in several ways for enhanced authentication. To demonstrate these possibilities, without loss of generality, we will assume that only original fingerprints are available, while collecting real fakes from physical objects represents a great challenge in view of the large variability of possible attacking strategies. At the same time, acquiring 100-200 images from the produced objects does not represent any significant time or cost investment in view of the existing quality control stages in many manufacturing processes. Furthermore, we will only consider the paired setup in view of the above, whereas the consideration of the unpaired system is also straightforward according to the above considerations.

    [0168] Therefore, given the training dataset {t.sub.i,x.sub.i}.sub.i=1.sup.N, the above described synthetic sample generation system is trained only on the originals with three parameters λ=1, λ=10, λ=25. New samples, unseen at the training stage, are passed via the trained decoders to map t.fwdarw.{tilde over (x)}. To visualize the effect of training, the latent space of a fully supervised classifier trained to classify the originals and four types of fakes was used. That is why the latent space of this classifier reflects the relative location of the manifolds of the considered classes. The synthetic samples for the three values λ=1, λ=10, λ=25 are passed via this classifier and the latent space representing the pre-final layer before the soft-max output is visualized in a form of t-SNE diagrams in FIGS. 22A to 22C, which illustrate various augmentation strategies for the synthetic sample generation by the system according to the present invention. All of these figures reproduce the t-SNE visualization of original samples, four types of considered fake samples and several types of synthetic samples generated: (a) within the manifold of original samples to simulate the intra-class variability of fingerprints due to the manufacturing and acquisition, (b) between the original samples and the four types of fake samples and (c) representing yet another stand-alone class of fake samples. The t-SNE is computed from the latent space vector of a supervised classifier trained on the original and four types of fake samples. Thus, three possible operational cases of how the synthetic samples can be used to enhance the authentication performance are described hereafter.

    [0169] The first operational case is shown in FIG. 22A. It corresponds to the situation when λ=1, i.e., the generated synthetic samples are close to the manifold of the originals. It should be pointed out that the fake samples have not been used at all for the training of this generator and were only used for visualization purposes to highlight the geometry of classes in the considered latent space. Therefore, the generated synthetic samples complement the class of original fingerprints and can be used to simulate the intra-class variability due to the various mentioned factors. Varying the parameter λ, one can generate many examples even for the same blueprints. Once the set of physical and synthetically generated originals is ready, the described authentication system can be trained on this joint set. In this way the OC-classifier will be informed to include all these samples into a common decision region. The overall effect of training is twofold. The informed encoder-decoder pair will better handle the distinguishability between the augmented original class and future fakes and deduce the corresponding feature vector. The informed OC-classifier will operate on this vector and minimize the probability of miss.

    [0170] The second operational case is shown in FIG. 22B. It is obtained for λ=10 and schematically demonstrates a case when the synthetically generated samples are located between the manifold of the originals and all considered fakes. Varying the above parameter, one can situate the synthetic samples very close to the originals. In this case, one can assume that the generated samples represent the “worst case” fakes with respect to the originals.

    [0171] Finally, the third operational case shown in FIG. 22C corresponds to the case when λ=25. This selection demonstrates an extreme case, when the generated samples are far away from the originals and can be considered as a standalone group of fakes. Obviously, varying λ, one can populate the manifold with a controlled level of proximity of the synthetic samples to the originals.

    [0172] The last two cases represent several possibilities of how the synthetic samples can be used for authentication enhancement.

    [0173] As the first option, one can use the physical original fingerprints and the generated “fakes” to train a supervised binary classifier. Assuming that the fakes are generated closely to the manifold of the originals and considered as the worst case fakes in the sense of proximity, the classifier decision boundary trained on these two classes will also reliably distinguish the other types of fakes that are at a large “distance” from this decision boundary. The experimental results validate that such a classifier can robustly reject all fakes even without knowing the origin of the physical fakes and without having seen them at the training stage.
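This first option can be illustrated with a toy sketch: a classifier trained only on originals versus near-manifold "worst case" fakes also rejects more distant fakes it never saw, because they fall on the fake side of the same decision boundary. The nearest-centroid rule and the Gaussian toy features below are assumptions for illustration, not the patent's classifier:

```python
import numpy as np

rng = np.random.default_rng(3)
originals = rng.normal(0.0, 0.3, (100, 2))    # feature vectors of authentic samples
worst_fakes = rng.normal(1.0, 0.3, (100, 2))  # synthetic "worst case" fakes (near manifold)
far_fakes = rng.normal(4.0, 0.3, (100, 2))    # unseen, more distant fakes

# Nearest-centroid boundary trained only on originals vs worst-case fakes
c_orig = originals.mean(axis=0)
c_fake = worst_fakes.mean(axis=0)

def is_original(v):
    return np.linalg.norm(v - c_orig) < np.linalg.norm(v - c_fake)

# Fakes farther than the worst case land on the fake side of the same boundary
rejected = sum(not is_original(v) for v in far_fakes)
```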

    [0174] As the second option, one can train the triplet loss system shown in FIG. 2 on the triplets of original fingerprints representing two views x.sub.i and x′.sub.i acquired from the physical objects and synthetic fakes represented as f.sub.i={tilde over (x)}.sub.i. Furthermore, to simultaneously include the intra-class variability for the originals, the class representing the second view x′.sub.i can be augmented as described in FIG. 22A. Additionally, the class of fakes can contain several fakes per item, and the triplet loss network can be trained, for example, based on the multi-class N-pair loss [87] or the NT-Xent loss [88].
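As an illustrative reconstruction of the NT-Xent loss [88] mentioned above (a numpy sketch; the batch construction and the temperature value are assumptions):

```python
import numpy as np

def nt_xent(z_a, z_b, tau=0.5):
    """NT-Xent loss for a batch of positive pairs (z_a[i], z_b[i]);
    every other sample in the doubled batch serves as a negative."""
    z = np.concatenate([z_a, z_b], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity space
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                    # a sample is not its own negative
    n = len(z_a)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # index of each positive
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return float(-log_prob.mean())

rng = np.random.default_rng(4)
anchors = rng.normal(size=(8, 16))
positives = anchors + rng.normal(0, 0.01, (8, 16))  # two views of the same items
mismatched = rng.normal(size=(8, 16))               # unrelated "pairs"
aligned = nt_xent(anchors, positives)
random_pairs = nt_xent(anchors, mismatched)
```

Well-aligned pairs yield a markedly lower loss than mismatched ones, which is what drives the two views of an original together and pushes the fakes away.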

    [0175] As another embodiment, one can use the synthetic examples and train the direct, reverse or two-way system in a contrastive way. An example of such a contrastive training of the direct system is schematically shown in FIG. 23, which schematically illustrates the training step of a system according to the present invention on original and fake samples. The fake samples can originate from physical acquisitions, if such are present, or from synthetically generated samples. At stage 1, the training of the authentication system based on contrastive learning with real or synthetically generated fakes is performed based on the maximization of the terms I.sub.E(X; T)(400) and I.sub.E,D(T; X)(500) while simultaneously minimizing the mutual information term I.sub.E(F; T)(401) between the estimates of fingerprints produced from the fakes and the mutual information term I.sub.E,D(T*; X)(501) between images generated from fakes and original fingerprints. Finally, at stage 2, the OC-classifier (250) is trained on the outputs of blocks (500) and (400), which form a feature vector used for the authentication. Obviously, if physical fakes are available, they can be included into the training as well without a need to change the architecture. At stage 1, the encoder-decoder pair and the corresponding discriminators are trained based on contrastive learning with real or physically augmented fakes. The training process assumes the presence of paired training examples of blueprints t, fingerprints x and fakes f. The unpaired situation is also included in this generalized setup. 
The training process is based on the maximization of the mutual information terms I.sub.E(X; T)(400) and I.sub.E,D(T; X)(500) according to the decomposition considered in the part on the direct system, while simultaneously minimizing I.sub.E(F; T) between the estimates of fingerprints produced from the fakes and I.sub.E,D(T*; X) between images generated from fakes and original fingerprints. All mutual information terms are decomposed as above into the corresponding conditional entropies and KLD terms.

    [0176] It is important to point out that the maximization and minimization of the above mutual information terms is performed on the shared encoder (200) and decoder (210). Therefore, both the encoder and decoder are trained to provide a reconstruction maximally close to the original class if the input appears to be an original, and otherwise to the class of fakes. Finally, at stage 2, the outputs of blocks (500) and (400) form a feature vector that is used for the training of the OC-classifier (250).

    Security and Fingerprint-Based Authentication

    [0177] In addition, it should be noted that the considered pairs (t.sub.i,x.sub.i) can come either from a physical distribution p(t, x) or from pair-wise assignments when each physical fingerprint x.sub.i is assigned to some blueprint t.sub.i. In this case, the assigned blueprint t.sub.i might be generated from some distribution and can represent some sort of random key. This additionally creates a form of security and protection against ML attacks when the attacker might have access only to the fingerprints x.sub.i acquired from the authentic objects while the blueprints can be kept secret. The attacker can generally proceed with the uncoupled training, if the distribution from which the blueprints are generated is known, as opposed to the defender, who trains the authentication system in a supervised way based on the available pairs (t.sub.i,x.sub.i). The supervised training will produce higher accuracy and lead to an information advantage of the defender over the attacker.

    [0178] In another embodiment of the method according to the present invention, one can consider a triple of blueprint, secret key and fingerprint as a combination of the above disclosed methods.

    [0179] In another embodiment of the method according to the present invention, the authentication system might have access to the fingerprints only. Such a situation is typical, for example, in biometrics or natural-randomness based PUF applications.

    [0180] In still another embodiment of the method according to the present invention, the authentication system might have access to the blueprint only. In any of the embodiments disclosed above, the blueprint may additionally be secured and kept secret by any kind of securing means adapted for this purpose, given that manufacturers usually don't wish to disclose in detail the templates used for production of their products. For example, at the stage of generating, providing or acquiring the blueprint t.sub.i, the latter may be modulated by use of a secret key k, by use of a secret mapping, by use of a space of secret carriers of a transform domain or the like, such as to produce a modulated blueprint t.sup.s.sub.i which isn't easily accessible to any potential attacker and addresses security issues on the side of the manufacturers of the objects to be authenticated.
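A minimal sketch of one such modulation: a key-derived XOR mask for a binary blueprint. The scheme, the key-to-mask derivation and all names below are illustrative assumptions, not the patent's prescribed securing means:

```python
import numpy as np

def modulate_blueprint(t, key_seed):
    """XOR a binary blueprint with a pseudo-random mask derived from a secret
    key, so the stored template t^s reveals nothing about t without the key.
    (Illustrative toy scheme; a real deployment would use a proper KDF/cipher.)"""
    mask = np.random.default_rng(key_seed).integers(0, 2, t.shape)
    return t ^ mask

t = np.random.default_rng(5).integers(0, 2, 64)      # binary blueprint
t_s = modulate_blueprint(t, key_seed=1234)           # modulated blueprint t^s
recovered = modulate_blueprint(t_s, key_seed=1234)   # XOR is self-inverse
```

The holder of the key recovers t exactly, while the modulated template alone is statistically independent of the blueprint.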

    Encoder and Decoder Architectures

    [0181] The encoder and decoder in the considered authentication problem can be implemented based on the same architectures. The encoder receives as an input the fingerprint and outputs the blueprint and the decoder receives as an input the blueprint and outputs the fingerprint. Therefore, to proceed with a general consideration we will assume that the input to the encoder/decoder is a and the output is b.

    [0182] The encoder/decoder structure can be deterministic, i.e. performing a one-to-one mapping of a to b, or stochastic, when for one input a the encoder/decoder might generate multiple b's. In the deterministic case, the encoder/decoder can be implemented based on, for example, a U-NET architecture [74], or several CNN downsampling layers followed by several ResNet layers acting as a transformer of distribution and several CNN upsampling layers [98], where the ResNet transformation layers can also be replaced by other transformation models such as normalizing FLOWs [102], neural network FLOWs [103] or similar architectures. All these structures can be summarized as architectures consisting of conv-layers, transformation layers and deconv-layers, with the particularities of how to implement each of them for a particular problem of interest. The training of the encoder/decoder is also based on an element of stochasticity introduced by perturbations of the input data based on sample-wise non-linear transformations, addition of noise, filtering, as well as geometrical transformations.
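The element of stochasticity mentioned above (noise, flips, non-linear contrast changes) can be sketched as a simple augmentation routine; the function and its parameter values are illustrative assumptions:

```python
import numpy as np

def stochastic_views(a, n_views=4, noise_sigma=0.05, seed=0):
    """Generate several perturbed views of an input image to inject
    stochasticity into encoder/decoder training: additive noise,
    random horizontal flip, and a non-linear contrast (gamma) change."""
    rng = np.random.default_rng(seed)
    views = []
    for _ in range(n_views):
        v = a + rng.normal(0, noise_sigma, a.shape)  # additive noise
        if rng.random() < 0.5:
            v = v[:, ::-1]                           # random horizontal flip
        gamma = rng.uniform(0.8, 1.25)
        v = np.clip(v, 0, 1) ** gamma                # non-linear contrast change
        views.append(v)
    return views

img = np.random.default_rng(6).random((8, 8))
views = stochastic_views(img)
```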

    [0183] The stochastic encoder/decoder structure is additionally complemented by a source of randomness that can be injected at the input, in the latent space or at the decoder by concatenation, addition, multiplication, etc. operations.

    [0184] The discriminators can be applied on the global level, i.e. to the blueprint-fingerprint as a whole, or on the local level, applied to parts of the blueprint-fingerprint with a corresponding fusion of the local discriminators' outputs.

    [0185] The proposed method can be used in a number of applications. Without pretending to be exhaustive, the following overview names just a few of them, it being assumed that similar setups are easily recognized by a person skilled in the art.

    [0186] The authentication of watches is an example of use of the method for authentication of physical objects described above. In particular, it typically represents an application of the method to authentication of batch objects, but may also represent an application of the method to authentication of individually created objects, as may be the case for certain types of luxury watches produced only in very limited numbers or even as a single piece, even though production is of course based on a blueprint. In general, the authentication of watches, and of any other type of high-value or luxury product, is based on the measurement of the proximity of the manufactured watch/product to the blueprint, where the blueprint is considered as a common design for the batch of watches/products of a given model, preferably in combination with individual features of each watch/product which are present in the fingerprint and which correspond to the physical unclonable features of an individual watch/product within said batch of watches/products. These features might be of natural and/or artificial origin, such as mentioned in the introduction of the present patent application in the context of describing four basic groups of prior art approaches. Authentication of watches may be performed from the front and/or back sides of the watch and/or of its components, for example as seen through the watch glass or through a skeleton back side of a watch, where the blueprint represents the artwork design given in any suitable encoding form and the fingerprint represents signals acquired from a physical object. The method can be applied to watches and/or their components regardless of the materials from which these are produced. Preferably, the authentication concerns the elements of the watch design for which blueprint-fingerprint pairs are available in any form suitable for easy and robust verification. 
The authentication can also jointly authenticate elements of the watch and of the watch packaging, or the watch and a corresponding watch certificate. If so required, some extra synchronisation based on the watch design might be added. Imaging of the watch may be realized by a mobile phone equipped with a camera, a portable microscope, or any other device adapted for this purpose. Furthermore, the imaging modalities of the acquisition device may be adapted to provide for acquisition of probe signals through the watch glass, without the necessity to disassemble the watch for performing the authentication.

    [0187] The authentication of packaging applies in any application requiring security features against counterfeiting and for brand protection, where the blueprint represents the design of the artwork encoded in any graphical format and the fingerprint represents signals acquired from the printed packaging. The packaging can be, but is not limited to, a primary packaging such as a syringe, a capsule, a bottle, etc., or a secondary packaging such as a box, a special shipment box or a container.

    [0188] The authentication of banknotes in any form of printing including the embedded security features.

    [0189] The authentication of elements of designs represented by various encoded modalities, such as 1D and 2D codes; elements of design including halftone patterns represented in any form of reproduction, in black-and-white or in color; security elements representing various special patterns difficult to clone or reproduce, ranging from simple random patterns to complex guilloche ones; or special security taggants.

    [0190] The authentication of printed text, logos and stamps reproduced on any documents such as contracts, certificates, forms, etc.

    [0191] The authentication of holograms in any form of reproduction.

    [0192] The authentication of payment cards in any part of their graphical and embedded elements of text, design, chips, etc.

    [0193] The authentication of identity documents includes, but is not limited to, identification documents such as ID cards, passports, visas, etc., where the blueprint can be represented by human biometrics stored in printed form or on a storage device and the fingerprint represents signals acquired from the person.

    [0194] At the same time, the above examples do not exclude that the proposed methods are applicable to many other kinds of products, including but not limited to the following: anti-counterfeiting labels or packaging, boxes, shipping invoices, tax stamps, postage stamps and various printed documents associated with the product for authentication and certification of its origin; medical prescriptions; medicines and pharmaceutical products including but not limited to cough drops, prescription drugs, antibiotics, vaccines, etc.; adulterated food, beverages, alcohol as well as coffee and chocolate; baby food and children toys; clothing, footwear and sportswear; health, skin care products, personal care and beauty aids items including perfume, cosmetics, shampoo, toothpaste, etc.; household cleaning goods; luxury goods including watches, clothing, footwear, jewellery, glasses, cigarettes and tobacco, products from leather including handbags, gloves, etc. and various objects of art; car, helicopter and airplane parts and electronic chipsets for computers, phones and consumer electronics; prepaid cards for communications or other services using similar protocols of credit recharging; computer software, video and audio tapes, CDs, DVDs and other means of multimedia data storage with music, movies and video games.

    [0195] The proposed authentication should also provide a secure link to the blockchain records.

    [0196] The invention should be considered as comprising all possible combinations of every feature described in the instant specification, appended claims, and/or drawing figures, which may be considered new, inventive and industrially applicable. In particular, other characteristics and embodiments of the invention are described in the appended claims.

    [0197] The following list enumerates all references which are cited in the above description: [0198] [1] Sviatoslav Voloshynovskiy, Oleksiy Koval, and Thierry Pun, “Secure item identification and authentication system based on unclonable features,” Patent, 22 Apr. 2014, U.S. Pat. No. 8,705,873. [0199] [2] Frederic Jordan, Martin Kutter, and Céline Di Venuto, “Means for using microstructure of materials surface as a unique identifier,” Patent, 15 Mar. 2007, WO2007/028799. [0200] [3] Kariakin Youry, “Authentication of articles,” Patent, 10 Jul. 1997, WO1997/024699. [0201] [4] Russell Paul Cowburn, “Methods and apparatuses for creating authenticatable printed articles and subsequently verifying them,” Patent, 22 Sep. 2005, WO2005/088517. [0202] [5] Chau-Wai Wong and Min Wu, “Counterfeit detection based on unclonable feature of paper using mobile camera,” IEEE Transactions on Information Forensics and Security, vol. 12, no. 8, pp. 1885-1899, 2017. [0203] [6] Rudolf Schraml, Luca Debiasi, and Andreas Uhl, “Real or fake: Mobile device drug packaging authentication,” in Proceedings of the 6th ACM Workshop on Information Hiding and Multimedia Security, New York, N.Y., USA, 2018, IH&MMSec '18, pp. 121-126, Association for Computing Machinery. [0204] [7] Paul Lapstun and Kia Silverbrook, “Object comprising coded data and randomly dispersed ink taggant,” Patent, 5 Mar. 2013, U.S. Pat. No. 8,387,889. [0205] [8] Riikka Arppe and Thomas Just Sørensen, “Physical unclonable functions generated through chemical methods for anti-counterfeiting,” Nature Reviews Chemistry, vol. 1, no. 4, 2017. [0206] [9] Miguel R. Carro-Temboury, Riikka Arppe, Tom Vosch, and Thomas Just Sørensen, “An optical authentication system based on imaging of excitation-selected lanthanide luminescence,” Science Advances, vol. 4, no. 1, 2018. 
[0207] [10] Ali Valehi, Abolfazl Razi, Bertrand Cambou, Weijie Yu, and Michael Kozicki, “A graph matching algorithm for user authentication in data networks using image-based physical unclonable functions,” in 2017 Computing Conference, 2017, pp. 863-870. [0208] [11] Sviatoslav Voloshynovskiy, Maurits Diephuis, and Taras Holotyak, “Mobile visual object identification: from SIFT-BoF-RANSAC to SketchPrint,” in Proceedings of SPIE Photonics West, Electronic Imaging, Media Forensics and Security V, San Francisco, USA, Jan. 13, 2015. [0209] [12] Maurits Diephuis, Micro-structure based physical object identification on mobile platforms, Ph.D. thesis, University of Geneva, 2017. [0210] [13] F. Beekhof, S. Voloshynovskiy, and F. Farhadzadeh, “Content authentication and identification under informed attacks,” in Proceedings of IEEE International Workshop on Information Forensics and Security, Tenerife, Spain, Dec. 2-5, 2012. [0211] [14] Maurits Diephuis, “A framework for robust forensic image identification,” M.S. thesis, University of Twente, 2010. [0212] [15] Maurits Diephuis, Svyatoslav Voloshynovskiy, Taras Holotyak, Nabil Stendardo, and Bruno Keel, “A framework for fast and secure packaging identification on mobile phones,” in Media Watermarking, Security, and Forensics 2014, Adnan M. Alattar, Nasir D. Memon, and Chad D. Heitzenrater, Eds. International Society for Optics and Photonics, 2014, vol. 9028, pp. 296-305, SPIE. [0213] [16] Justin Picard, “Digital authentication with copy-detection patterns,” in Optical Security and Counterfeit Deterrence Techniques V, Rudolf L. van Renesse, Ed. International Society for Optics and Photonics, 2004, vol. 5310, pp. 176-183, SPIE. [0214] [17] Justin Picard and Paul Landry, “Two dimensional barcode and method of authentication of such barcode,” Patent, 14 Mar. 2017, U.S. Pat. No. 9,594,993. 
[0215] [18] Iuliia Tkachenko and Christophe Destruel, “Exploitation of redundancy for pattern estimation of copy-sensitive two level QR code,” in 2018 IEEE International Workshop on Information Forensics and Security (WIFS), 2018, pp. 1-6. [0216] [19] Ken ichi Sakina, Youichi Azuma, and Kishi Hideaki, “Two-dimensional code authenticating device, two-dimensional code generating device, two-dimensional code authenticating method, and program,” Patent, 6 Sep. 2016, U.S. Pat. No. 9,436,852. [0217] [20] Zbigniew Sagan, Justin Picard, Alain Foucou, and Jean-Pierre Massicot, “Method and device superimposing two marks for securing documents against forgery with,” Patent, 27 May 2014, U.S. Pat. No. 8,736,910. [0218] [21] Svyatoslav Voloshynovskiy and Maurits Diephuis, “Method for object recognition and/or verification on portable devices,” Patent, 7 Oct. 2018, U.S. Pat. No. 10,019,646. [0219] [22] Thomas Dewaele, Maurits Diephuis, Taras Holotyak, and Sviatoslav Voloshynovskiy, “Forensic authentication of banknotes on mobile phones,” in Proceedings of SPIE Photonics West, Electronic Imaging, Media Forensics and Security V, San Francisco, USA, Jan. 14-18, 2016. [0220] [23] Volker Lohweg, Jan Leif Homann, Helene Dörksen, Roland Hildebrand, Eugen Gillich, Jörg Hofmann, and Johannes Georg Schaede, “Authentication of security documents and mobile device to carry out the authentication,” Patent, 17 Apr. 2018, U.S. Pat. No. 9,947,163. [0221] [24] Sergej Toedtli, Sascha Toedtli, and Yohan Thibault, “Method and apparatus for proving an authentication of an original item and method and apparatus for determining an authentication status of a suspect item,” Patent, 24 Jan. 2017, U.S. Pat. No. 9,552,543. [0222] [25] Guy Adams, Stephen Pollard, and Steven Simske, “A study of the interaction of paper substrates on printed forensic imaging,” in Proceedings of the 11th ACM Symposium on Document Engineering, New York, N.Y., USA, 2011, DocEng '11, pp. 263-266, Association for Computing Machinery. 
[0223] [26] Stephen B. Pollard, Steven J. Simske, and Guy B. Adams, “Model based print signature profile extraction for forensic analysis of individual text glyphs,” in 2010 IEEE International Workshop on Information Forensics and Security, 2010, pp. 1-6. [0224] [27] Yanling Ju, Dhruv Saxena, Tamar Kashti, Dror Kella, Doron Shaked, Mani Fischer, Robert Ulichney, and Jan P. Allebach, “Modeling large-area influence in digital halftoning for electrophotographic printers,” in Color Imaging XVII. Displaying, Processing, Hardcopy, and Applications, Reiner Eschbach, Gabriel G. Marcu, and Alessandro Rizzi, Eds. International Society for Optics and Photonics, 2012, vol. 8292, pp. 259-267, SPIE. [0225] [28] Renato Villan, Sviatoslav Voloshynovskiy, Oleksiy Koval, and Thierry Pun, “Multilevel 2-d bar codes: Toward high-capacity storage modules for multimedia security and management,” IEEE Transactions on Information Forensics and Security, vol. 1, no. 4, pp. 405-420, 2006. [0226] [29] Thomas M Cover and Joy A Thomas, Elements of information theory, John Wiley & Sons, 2012. [0227] [30] Olga Taran, Slavi Bonev, and Slava Voloshynovskiy, “Clonability of anti-counterfeiting printable graphical codes: a machine learning approach,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, United Kingdom, May 2019. [0228] [31] Rohit Yadav, Iuliia Tkachenko, Alain Trémeau, and Thierry Fournel, “Estimation of copy-sensitive codes using a neural approach,” in 7th ACM Workshop on Information Hiding and Multimedia Security, Paris, France, July 2019. [0229] [32] Burt Perry, Scott Carr, and Phil Patterson, “Digital watermarks as a security feature for identity documents,” in Optical Security and Counterfeit Deterrence Techniques III, Rudolf L. van Renesse and Willem A. Vliegenthart, Eds. International Society for Optics and Photonics, 2000, vol. 3973, pp. 80-87, SPIE. 
[0230] [33] Frederic Deguillaume, Sviatoslav Voloshynovskiy, and Thierry Pun, “Character and vector graphics watermark for structured electronic documents security,” Patent, 5 Jan. 2010, U.S. Pat. No. 7,644,281. [0231] [34] Pillai Praveen Thulasidharan and Madhu S. Nair, “QR code based blind digital image watermarking with attack detection code,” AEU-International Journal of Electronics and Communications, vol. 69, no. 7, pp. 1074-1084, 2015. [0232] [35] Guangmin Sun, Rui Wang, Shu Wang, Xiaomeng Wang, Dequn Zhao, and Andi Zhang, “High-definition digital color image watermark algorithm based on QR code and DWT,” in 2015 IEEE 10th Conference on Industrial Electronics and Applications (ICIEA), 2015, pp. 220-223. [0233] [36] Yang-Wai Chow, Willy Susilo, Joseph Tonien, and Wei Zong, “A QR code watermarking approach based on the DWT-DCT technique,” in Information Security and Privacy, Josef Pieprzyk and Suriadi Suriadi, Eds. 2017, pp. 314-331, Springer International Publishing. [0234] [37] Xiaofei Feng and Xingzhong Ji, “A blind watermarking method with strong robust based on 2d-barcode,” in 2009 International Conference on Information Technology and Computer Science, 2009, vol. 2, pp. 452-456. [0235] [38] Weijun Zhang and Xuetian Meng, “An improved digital watermarking technology based on QR code,” in 2015 4th International Conference on Computer Science and Network Technology (ICCSNT), 2015, vol. 01, pp. 1004-1007. [0236] [39] Sartid Vongpradhip and Suppat Rungraungsilp, “QR code using invisible watermarking in frequency domain,” in 2011 Ninth International Conference on ICT and Knowledge Engineering, 2012, pp. 47-52. [0237] [40] Li Li, Ruiling Wang, and Chinchen Chang, “A digital watermark algorithm for QR code,” International Journal of Intelligent Information Processing, vol. 2, no. 2, pp. 29-36, 2011. 
[0238] [41] Jantana Panyavaraporn, Paramate Horkaew, and Wannaree Wongtrairat, “QR code watermarking algorithm based on wavelet transform,” in 2013 13th International Symposium on Communications and Information Technologies (ISCIT), 2013, pp. 791-796. [0239] [42] Ming Sun, Jibo Si, and Shuhuai Zhang, “Research on embedding and extracting methods for digital watermarks applied to QR code images,” New Zealand Journal of Agricultural Research, vol. 50, no. 5, pp. 861-867, 2007. [0240] [43] Rongsheng Xie, Chaoqun Hong, Shunzhi Zhu, and Dapeng Tao, “Anti-counterfeiting digital watermarking algorithm for printed QR barcode,” Neurocomputing, vol. 167, pp. 625-635, 2015. [0241] [44] Pei-Yu Lin, Yi-Hui Chen, Eric Jui-Lin Lu, and Ping-Jung Chen, “Secret hiding mechanism using QR barcode,” in 2013 International Conference on Signal-Image Technology Internet-Based Systems, 2013, pp. 22-25. [0242] [45] Ari Moesriami Barmawi and Fazmah Arif Yulianto, “Watermarking QR code,” in 2015 2nd International Conference on Information Science and Security (ICISS), 2015, pp. 1-4. [0243] [46] Thach V. Bui, Nguyen K. Vu, Thong T.P. Nguyen, Isao Echizen, and Thuc D. Nguyen, “Robust message hiding for QR code,” in 2014 Tenth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, 2014, pp. 520-523. [0244] [47] Iuliia Tkachenko, William Puech, Christophe Destruel, Olivier Strauss, Jean-Marc Gaudin, and Christian Guichard, “Two-level QR code for private message sharing and document authentication,” IEEE Transactions on Information Forensics and Security, vol. 11, no. 3, pp. 571-583, 2016. [0245] [48] Iu. Tkachenko, W. Puech, O. Strauss, C. Destruel, and J.-M. Gaudin, “Printed document authentication using two level or code,” in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016, pp. 2149-2153. 
[0246] [49] Yuqiao Cheng, Zhengxin Fu, Bin Yu, and Gang Shen, “A new two-level QR code with visual cryptography scheme,” Multimedia Tools and Applications, vol. 77, no. 16, pp. 20629-20649, 2018. [0247] [50] H. Phuong Nguyen, Agnès Delahaies, Florent Retraint, D. Huy Nguyen, Marc Pic, and Frederic Morain-Nicolier, “A watermarking technique to secure printed QR codes using a statistical test,” in 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2017, pp. 288-292. [0248] [51] Hoai Phuong Nguyen, Florent Retraint, Frédéric Morain-Nicolier, and Agnès Delahaies, “A watermarking technique to secure printed matrix barcode application for anti-counterfeit packaging,” IEEE Access, vol. 7, pp. 131839-131850, 2019. [0249] [52] Tailing Yuan, Yili Wang, Kun Xu, Ralph R. Martin, and Shi-Min Hu, “Two-layer QR codes,” IEEE Transactions on Image Processing, vol. 28, no. 9, pp. 4413-4428, 2019. [0250] [53] Martin Kutter, Sviatoslav V. Voloshynovskiy, and Alexander Herrigel, “Watermark copy attack,” in Security and Watermarking of Multimedia Contents II, Ping Wah Wong and Edward J. Delp III, Eds. International Society for Optics and Photonics, 2000, vol. 3971, pp. 371-380, SPIE. [0251] [54] Frederic Jordan, Martin Kutter, and Nicolas Rudaz, “Method to apply an invisible mark on a media,” Patent, 21 Jun. 2011, U.S. Pat. No. 7,965,862. [0252] [55] Alastair Reed, Tomáš Filler, Kristyn Falkenstern, and Yang Bai, “Watermarking spot colors in packaging,” in Media Watermarking, Security, and Forensics 2015, March 2015, vol. 9409 of Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, p. 940906. [0253] [56] Svyatoslav Voloshynovskiy, “Method for active content fingerprinting,” Patent, 17 Oct. 2017, U.S. Pat. No. 9,794,067. [0254] [57] Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel, “Adversarial attacks on neural network policies,” 2017. 
[0255] [58] Nicholas Carlini and David Wagner, “Towards evaluating the robustness of neural networks,” in 2017 IEEE Symposium on Security and Privacy (SP), 2017, pp. 39-57. [0256] [59] Anh Thu Phan Ho, Bao An Hoang Mai, Wadih Sawaya, and Patrick Bas, “Document Authentication Using Graphical Codes: Impacts of the Channel Model,” in ACM Workshop on Information Hiding and Multimedia Security, Montpellier, France, June 2013, pp. ACM 978-1-4503-2081-8/13/06. [0257] [60] Christopher M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics), Springer-Verlag, Berlin, Heidelberg, 2006. [0258] [61] Kevin P. Murphy, Machine learning: a probabilistic perspective, MIT Press, Cambridge, Mass. [u.a.], 2013. [0259] [62] Ian J. Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, MIT Press, Cambridge, Mass., USA, 2016, url http://www.deeplearningbook.org. [0260] [63] Olga Taran, Slavi Bonev, Taras Holotyak, and Slava Voloshynovskiy, “Adversarial detection of counterfeited printable graphical codes: Towards “adversarial games” in physical world,” in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 2812-2816. [0261] [64] Ashlesh Sharma, Lakshminarayanan Subramanian, and Yiduth Srinivasan, “Authenticating physical objects using machine learning from microscopic variations,” Patent application, 2 Feb. 2017, U.S. patent application Ser. No. 15/302,866. [0262] [65] Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio, “Adversarial examples in the physical world,” CoRR, vol. abs/1607.02533, 2016. [0263] [66] Neil. A. Macmillan and C. Douglas. Creelman, Detection Theory: A user's guide, Lawrence Erlbaum Associates, Mahwah, N.J., London, 2005. [0264] [67] H. Vincent Poor, An Introduction to Signal Detection and Estimation, Springer-Verlag, Berlin, Heidelberg, 2013. [0265] [68] Diederik P. 
Kingma and Max Welling, “Auto-encoding variational bayes,” in 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, Apr. 14-16, 2014, Conference Track Proceedings, 2014. [0266] [69] Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian J. Goodfellow, “Adversarial autoencoders,” CoRR, vol. abs/1511.05644, 2015. [0267] [70] Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel Müller, and Marius Kloft, “Deep one-class classification,” in Proceedings of the 35th International Conference on Machine Learning, Jennifer Dy and Andreas Krause, Eds. 10-15 Jul. 2018, vol. 80 of Proceedings of Machine Learning Research, pp. 4393-4402, PMLR. [0268] [71] Mohammad Sabokrou, Mohammad Khalooei, Mahmood Fathy, and Ehsan Adeli, “Adversarially learned one-class classifier for novelty detection,” in CVPR. 2018, pp. 3379-3388, IEEE Computer Society. [0269] [72] Mohammadreza Salehi, Ainaz Eftekhar, Niousha Sadjadi, Mohammad Hossein Rohban, and Hamid R. Rabiee, “Puzzle-AE: Novelty detection in images through solving puzzles,” CoRR, vol. abs/2008.12959, 2020. [0270] [73] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778. [0271] [74] Olaf Ronneberger, Philipp Fischer, and Thomas Brox, “U-net: Convolutional networks for biomedical image segmentation,” Medical Image Computing and Computer-Assisted Intervention MICCAI 2015, May 2015. [0272] [75] Paul Bergmann, Kilian Batzner, Michael Fauser, David Sattlegger, and Carsten Steger, “The mvtec anomaly detection dataset: A comprehensive real-world dataset for unsupervised anomaly detection,” International Journal of Computer Vision, vol. 129, no. 4, pp. 1038-1059, 2021. [0273] [76] Nobuyuki Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 
9, no. 1, pp. 62-66, 1979. [0274] [77] Yunqiang Chen, Xiang Sean Zhou, and T.S. Huang, “One-class SVM for learning in image retrieval,” in Proceedings 2001 International Conference on Image Processing (Cat. No.01CH37205), 2001, vol. 1, pp. 34-37. [0275] [78] Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm, “Mutual information neural estimation,” in Proceedings of the 35th International Conference on Machine Learning, Jennifer Dy and Andreas Krause, Eds. 10-15 Jul. 2018, vol. 80 of Proceedings of Machine Learning Research, pp. 531-540, PMLR. [0276] [79] Slava Voloshynovskiy, Mouad Kondah, Shideh Rezaeifar, Olga Taran, Taras Holotyak, and Danilo Jimenez Rezende, “Information bottleneck through variational glasses,” in NeurIPS Workshop on Bayesian Deep Learning, Vancouver, Canada, December 2019. [0277] [80] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, “Generative adversarial nets,” in Proceedings of the 27th International Conference on Neural Information Processing Systems—Volume 2, Cambridge, Mass., USA, 2014, NIPS'14, pp. 2672-2680, MIT Press. [0278] [81] Masashi Sugiyama, Taiji Suzuki, and Takafumi Kanamori, Density Ratio Estimation in Machine Learning, Cambridge University Press, USA, 1st edition, 2012. [0279] [82] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka, “f-gan: Training generative neural samplers using variational divergence minimization,” 2016. [0280] [83] Masashi Sugiyama, Taiji Suzuki, and Takafumi Kanamori, “Density-ratio matching under the Bregman divergence: a unified framework of density-ratio estimation,” Annals of the Institute of Statistical Mathematics, vol. 64, no. 5, pp. 1009-1044, October 2012. [0281] [84] Martin Arjovsky, Soumith Chintala, and Léon Bottou, “Wasserstein Generative Adversarial Networks,” 2017, cite arxiv:1701.07875. 
[0282] [85] Kacper Chwialkowski, Aaditya Ramdas, Dino Sejdinovic, and Arthur Gretton, “Fast two-sample testing with analytic representations of probability measures,” in Proceedings of the 28th International Conference on Neural Information Processing Systems—Volume 2, Cambridge, Mass., USA, 2015, NIPS'15, pp. 1981-1989, MIT Press. [0283] [86] Kilian Q Weinberger, John Blitzer, and Lawrence Saul, “Distance metric learning for large margin nearest neighbor classification,” in Advances in Neural Information Processing Systems, Y. Weiss, B. Schölkopf, and J. Platt, Eds. 2006, vol. 18, MIT Press. [0284] [87] Kihyuk Sohn, “Improved deep metric learning with multi-class N-pair loss objective,” in Advances in Neural Information Processing Systems, D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, Eds. 2016, vol. 29, Curran Associates, Inc. [0285] [88] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton, “A simple framework for contrastive learning of visual representations,” in Proceedings of the 37th International Conference on Machine Learning, Hal Daumé III and Aarti Singh, Eds. 13-18 Jul. 2020, vol. 119 of Proceedings of Machine Learning Research, pp. 1597-1607, PMLR. [0286] [89] Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan, “Supervised contrastive learning,” in Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, Eds. 2020, vol. 33, pp. 18661-18673, Curran Associates, Inc. [0287] [90] Alessandro Foi, Mejdi Trimeche, Vladimir Katkovnik, and Karen Egiazarian, “Practical poissonian-gaussian noise modeling and fitting for single-image raw-data,” IEEE Transactions on Image Processing, vol. 17, no. 10, pp. 1737-1754, 2008. [0288] [91] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola, “A kernel two-sample test,” J. Mach. Learn. Res., vol. 13, pp. 
723-773, March 2012. [0289] [92] George Papamakarios, Eric T. Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan, “Normalizing flows for probabilistic modeling and inference,” CoRR, vol. abs/1912.02762, 2019. [0290] [93] Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios, “Neural spline flows,” in Advances in Neural Information Processing Systems. 2019, vol. 32, Curran Associates, Inc.