OBJECT AUTHENTICATION USING DIGITAL BLUEPRINTS AND PHYSICAL FINGERPRINTS
20220398348 · 2022-12-15
Inventors
- Svyatoslav VOLOSHYNOVSKIY (Vesénaz, CH)
- Olga Taran (Petit-Lancy, CH)
- Joakim TUTT (Petit-Lancy, CH)
- Taras HOLOTYAK (Genève, CH)
CPC classification
G06F18/214 (PHYSICS)
G06F18/217 (PHYSICS)
G06F18/2433 (PHYSICS)
G06F21/64 (PHYSICS)
G06V20/80 (PHYSICS)
International classification
G06F21/64 (PHYSICS)
Abstract
A method of object authentication based on digital blueprints and physical fingerprints, comprising the steps of acquiring a set of training blueprints and fingerprints, training, object enrollment and object authentication. The method uses a mapper realized as an encoder-decoder pair together with a set of multi-metric scores originating from the decomposition of mutual information, applied to the outputs of both the encoder and the decoder and producing a feature vector for a one-class classifier. The method is trained only on the original physical objects and does not use any fakes, yet provides reliable authentication.
Claims
1. A method for authentication of physical objects, in particular of physical batch objects, adapted to be produced based on a set of blueprints {t.sub.i}.sub.i=1.sup.N comprising, at a training stage, the steps of: acquiring a set of training signals representing fingerprints {x.sub.i}.sub.i=1.sup.N of said physical objects; providing a mapper adapted to perform an operation on said fingerprints {x.sub.i}.sub.i=1.sup.N and a classifier adapted to be applied to at least one similarity score; training said mapper and/or said classifier on said set of training signals {x.sub.i}.sub.i=1.sup.N to obtain a learned mapper and/or a learned classifier; enrolling objects to be protected; and at an authentication stage, the steps of: acquiring a probe signal y from a physical object to be authenticated; applying the mapper and classifier, at least one of which is learned, to the probe signal y; producing a decision about authenticity of the physical object represented by the probe signal y based on output of the classifier, wherein said mapper is adapted to produce an estimate {tilde over (t)}.sub.i of blueprint t.sub.i based on fingerprint x.sub.i and said classifier is a one-class classifier, and the method further comprises, at said training stage, the steps of: acquiring said set of blueprints {t.sub.i}.sub.i=1.sup.N comprising at least one blueprint t.sub.i, providing a set of similarity scores for said blueprints; performing a joint training of said mapper and one-class classifier on said set of training signals {x.sub.i}.sub.i=1.sup.N and said set of blueprints {t.sub.i}.sub.i=1.sup.N to obtain a jointly learned mapper and/or a jointly learned one-class classifier; and at said authentication stage, the method further comprises the steps of: applying jointly the mapper and one-class classifier, at least one of which is learned, to the probe signal y; and said output of the one-class classifier allowing a decision about authenticity of the object represented by the probe signal y, being produced based on fingerprints x.sub.i and/or blueprints t.sub.i.
2. The method according to claim 1, wherein: said fingerprints {x.sub.i}.sub.i=1.sup.N and blueprints {t.sub.i}.sub.i=1.sup.N represent paired data, said mapper is realized by a hand-crafted mapper or by a learnable mapper, and said output of the one-class classifier allowing a decision about authenticity of the object represented by the probe signal y, being produced based on blueprints t.sub.i.
3. The method according to claim 1, further comprising, at said training stage, the step of providing a set of similarity scores for said fingerprints, and wherein: said fingerprints {x.sub.i}.sub.i=1.sup.N and blueprints {t.sub.i}.sub.i=1.sup.N represent paired data, said mapper is realized by a hand-crafted mapper or by a learnable mapper, and said output of the one-class classifier allowing a decision about authenticity of the object represented by the probe signal y, being produced based on fingerprints x.sub.i and blueprints t.sub.i.
4. The method according to claim 1, wherein said mapper is realized by an encoder adapted to produce an estimate {tilde over (t)}.sub.i of blueprint t.sub.i based on fingerprint x.sub.i and the method further comprises, at said training stage, the steps of: providing, next to said fingerprints {x.sub.i}.sub.i=1.sup.N and blueprints {t.sub.i}.sub.i=1.sup.N representing a set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N, blueprints {t.sub.j}.sub.j=1.sup.J and fingerprints {x.sub.j}.sub.j=1.sup.J representing unpaired data; providing a decoder adapted to produce an estimate {circumflex over (x)}.sub.i of fingerprint x.sub.i based on blueprint t.sub.i, a set of similarity scores for said fingerprints, fingerprint discriminators and blueprint discriminators; performing a joint training of said encoder, decoder, fingerprint discriminators and blueprint discriminators and one-class classifier on said set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N and/or said unpaired blueprints {t.sub.j}.sub.j=1.sup.J and fingerprints {x.sub.j}.sub.j=1.sup.J to obtain a jointly learned encoder, decoder, fingerprint and blueprint discriminators and one-class classifier, with said encoder and decoder being trained in a direct way x.fwdarw.{tilde over (t)}.fwdarw.{circumflex over (x)} from fingerprints x.sub.i of said set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N; and at said authentication stage, the steps of: applying jointly the learned encoder, decoder, fingerprint discriminators and blueprint discriminators and one-class classifier to the probe signal y; said output of the one-class classifier, allowing a decision about authenticity of the object represented by the probe signal y, being produced based on blueprints t.sub.i.
5. The method of claim 1, wherein said mapper is realized by an encoder adapted to produce an estimate {circumflex over (t)}.sub.i of blueprint t.sub.i based on fingerprint {tilde over (x)}.sub.i and the method further comprises, at said training stage, the steps of providing, next to said fingerprints {x.sub.i}.sub.i=1.sup.N and blueprints {t.sub.i}.sub.i=1.sup.N representing a set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N, blueprints {t.sub.j}.sub.j=1.sup.J and fingerprints {x.sub.j}.sub.j=1.sup.J representing unpaired data, providing a decoder adapted to produce an estimate {tilde over (x)}.sub.i of fingerprint x.sub.i based on blueprint t.sub.i, a set of similarity scores for said fingerprints, fingerprint discriminators and blueprint discriminators, performing a joint training of said encoder, decoder, fingerprint discriminators and blueprint discriminators and one-class classifier on said set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N and/or said unpaired blueprints {t.sub.j}.sub.j=1.sup.J and fingerprints {x.sub.j}.sub.j=1.sup.J to obtain a jointly learned encoder, decoder, fingerprint and blueprint discriminators and one-class classifier, with said encoder and decoder being trained in a reverse way t.fwdarw.{tilde over (x)}.fwdarw.{circumflex over (t)} from blueprints t.sub.i of said set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N, and at said authentication stage, the steps of: applying jointly the learned decoder, fingerprint discriminators and one-class classifier to the probe signal y, said output of the one-class classifier, allowing a decision about authenticity of the object represented by the probe signal y, being produced based on blueprints t.sub.i.
6. The method according to claim 1, wherein said mapper is realized by an encoder adapted to produce an estimate {tilde over (t)}.sub.i of blueprint t.sub.i based on fingerprint x.sub.i and the method further comprises, at said training stage, the steps of: providing, next to said fingerprints {x.sub.i}.sub.i=1.sup.N and blueprints {t.sub.i}.sub.i=1.sup.N representing a set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N, blueprints {t.sub.j}.sub.j=1.sup.J and fingerprints {x.sub.j}.sub.j=1.sup.J representing unpaired data, providing a decoder adapted to produce an estimate {tilde over (x)}.sub.i of fingerprint x.sub.i based on blueprint t.sub.i, a set of similarity scores for said fingerprints, fingerprint discriminators and blueprint discriminators, performing a joint training of said encoder, decoder, fingerprint discriminators and blueprint discriminators and one-class classifier on said set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N and/or said unpaired blueprints {t.sub.j}.sub.j=1.sup.J and fingerprints {x.sub.j}.sub.j=1.sup.J to obtain a jointly learned encoder, decoder, fingerprint and blueprint discriminators and one-class classifier, with said encoder and decoder being trained in two ways combining a direct way x.fwdarw.{tilde over (t)}.fwdarw.{circumflex over (x)} from fingerprints x.sub.i of said set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N and a reverse way t.fwdarw.{tilde over (x)}.fwdarw.{circumflex over (t)} from blueprints t.sub.i of said set of pairs of blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N, and at said authentication stage, the steps of: applying jointly the learned encoder, decoder, fingerprint discriminators, and blueprint discriminators and one-class classifier to the probe signal y, said output of the one-class classifier, allowing a decision about authenticity of the object represented by the probe signal y, being produced based on
fingerprints x.sub.i and/or blueprints t.sub.i.
7. The method according to claim 1, wherein said authentication is performed only from the blueprint t.sub.i or only from the fingerprint x.sub.i or jointly from the blueprint-fingerprint pair (t.sub.i, x.sub.i) representing an object with the index i.
8. The method according to claim 1, wherein said sets of similarity scores for said fingerprints and/or for said blueprints are each sets of multi-metric similarity scores, each comprising at least two metrics chosen from the group comprising the Euclidean l.sub.2-norm, the l.sub.1-norm or generally the l.sub.p-norm, Pearson correlation, Hamming distance, moment matching and embedded space distances.
9. The method according to claim 1, wherein the output of said multi-metric similarity scores is concatenated into a feature vector and serves as an input to said one-class classifier, which is implemented as a kernel one-class SVM, a deep classifier or a hand-crafted decision rule bounding said set of multi-metric similarity scores away from those of hypothetical fakes.
10. The method according to claim 4, wherein said discriminators are realized as criteria implementing density ratio estimation, MMD, Wasserstein or generalized divergences, or a U-net with a latent and output space used for global and local discrimination, the discriminators being applied to the entire data or only to parts thereof, with further aggregation of the scores of the local discriminators.
11. The method according to claim 4, wherein said encoder and decoder are represented as deep neural networks and implemented based on a U-net architecture, on several downsampling convolutional layers followed by several ResNet layers and several upsampling convolutional layers, or on invertible networks such as normalizing flows or similar ones, and also by injecting randomization.
12. The method according to claim 4, wherein said encoder, decoder and discriminators are trained from target samples x.sub.1, . . . , x.sub.K representing examples of printing and acquisition or fakes.
13. The method according to claim 4, wherein said encoder, decoder and discriminators are trained from parameters defining printer settings encoded by the encoder and phone settings encoded by the encoder.
14. The method according to claim 4, wherein said encoder and decoder are trained on paired data of blueprint-fingerprint pairs {t.sub.i,x.sub.i}.sub.i=1.sup.N with similarity scores for intra-modal encodings of fingerprint-to-blueprint and blueprint-to-fingerprint, and on unpaired data of blueprints {t.sub.i}.sub.i=1.sup.N and fingerprints with discriminators estimating the proximity of synthetic data to real data of the corresponding modalities.
15. The method according to claim 4, wherein said learned encoder and decoder are used to produce synthetic samples {{tilde over (x)}.sub.i}.sub.i=1.sup.N in a controlled proximity to the manifold of original fingerprints {x.sub.i}.sub.i=1.sup.N from said set of blueprints {t.sub.i}.sub.i=1.sup.N, the synthetic samples {{tilde over (x)}.sub.i}.sub.i=1.sup.N being used to train the classifier.
16. The method according to claim 15, wherein said synthetic samples {{tilde over (x)}.sub.i}.sub.i=1.sup.N are within the boundaries of the manifold of original fingerprints {x.sub.i}.sub.i=1.sup.N and said one-class classifier is trained on an augmented training set comprising both original fingerprints {x.sub.i}.sub.i=1.sup.N and synthetic samples {{tilde over (x)}.sub.i}.sub.i=1.sup.N, or the synthetic samples {{tilde over (x)}.sub.i}.sub.i=1.sup.N are in close proximity to the manifold of original fingerprints {x.sub.i}.sub.i=1.sup.N, in which case the synthetic samples {{tilde over (x)}.sub.i}.sub.i=1.sup.N are considered as worst-case fakes relative to the original fingerprints {x.sub.i}.sub.i=1.sup.N and the classifier is trained in a supervised way.
17. The method according to claim 15, wherein said synthetic samples {{tilde over (x)}.sub.i}.sub.i=1.sup.N generated in close proximity to the manifold of original fingerprints {x.sub.i}.sub.i=1.sup.N are considered as synthetic fakes representing non-authentic samples, and the encoder and decoder are trained to maximize the mutual information for the encoder path and the encoder-decoder path encoding the original blueprints and fingerprints, while correspondingly minimizing the mutual information for the same encoder-decoder when operating on synthetic fakes.
18. The method according to claim 1, wherein said blueprints {t.sub.i}.sub.i=1.sup.N are secured and kept secret by an adequate securing process at the stage of generating, providing or acquiring the blueprints {t.sub.i}.sub.i=1.sup.N, the securing process being chosen from the group comprising modulation by use of a secret key k, modulation by use of a secret mapping, and modulation by use of a space of secret carriers of a transform domain, such as to produce modulated blueprints {t.sub.i.sup.s}.sub.i=1.sup.N.
19. The method according to claim 1, wherein said blueprints {t.sub.i}.sub.i=1.sup.N are provided by a party manufacturing and/or commercializing the objects to be authenticated and/or are acquired by a party supposed to provide for the authentication, in particular by examining the objects to be authenticated in a non-invasive manner.
20. A use of a method according to claim 1 for an application chosen from the group comprising protection of documents, certificates, contracts, identity documents, packaging, secure labels, stickers, banknotes, checks, luxury goods such as watches, precious stones and metals, diamonds, electronics, chips, holograms, medicaments, cosmetics, food, spare and medical parts and components and providing the related object tracking and tracing, circulation and delivery monitoring and connectivity to the corresponding database records including blockchain.
21. A use of a method according to claim 1 for an application chosen from the group comprising generation of synthetic samples of printed objects acquired under various imaging conditions and simulation of various distortions in imaging systems, including the noise in RAW images under various settings of the imaging device, geometric aberrations and defocusing.
22. A use of a method according to claim 1 for authentication of a person from a probe signal y with respect to a reference blueprint t.sub.i reproduced on an identification document of said person or stored in some electronic form in private or public domain and a fingerprint x.sub.i representing reference biometric data of said person.
23. A use of a method according to claim 1 for an application chosen from the group comprising automatic quality estimation of newly produced objects and anomaly detection during manufacturing.
24. A device adapted for the implementation of the method according to claim 1, wherein the device is chosen from the group comprising a mobile phone, a smart phone equipped with a camera, a digital photo apparatus, a barcode reader equipped with a camera, a scanning device, a portable microscope connected to any portable device equipped with communication capability.
25. A tangible computer-readable medium storing instructions that, when executed by a computer, cause it to implement the method according to claim 1.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0081] The attached figures exemplarily and schematically illustrate the principles as well as several embodiments of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0105] In the following, the invention shall be described in detail with reference to the above-mentioned figures.
[0106] The present invention relates to a method of physical object authentication based on blueprints and/or fingerprints. As already mentioned, the following description will, in general, concentrate on the method according to the present invention when used for the above-mentioned authentication problem under a lack of training samples representing a class of fakes, and will only highlight and exemplify the differences and extensions to classification in the presence of synthetic fakes. However, the method according to the present invention may also comprise a training step using training samples representing a class of fakes, such that this conventional use case is, of course, also possible within the proposed method, though it is not described in detail.
[0107] According to the present invention, the authentication method typically comprises a mapper which is preferably realized by an encoder, optionally a decoder, multi-metrics and a classifier, which are trained in a way described in the following to achieve accurate and efficient authentication. Accordingly, the method according to the present invention comprises two main stages: A first stage includes joint training of the mapper, respectively of the encoder-decoder pair, according to the specified objectives to produce a vector of multi-metric features used for the classifier, which is trained on these features. The training stage also includes enrollment of new objects. A second stage includes an authentication step of the enrolled objects, represented by their blueprints t.sub.i and/or fingerprints x.sub.i. Said authentication step forms a testing stage comprising production of a decision about the authenticity of an object to be authenticated and represented by a probe signal y acquired from this object using some imaging device with respect to a reference blueprint t.sub.i, a reference fingerprint x.sub.i and/or jointly a blueprint-fingerprint pair (t.sub.i,x.sub.i) representing an enrolled authentic physical object.
[0108] The blueprint t.sub.i may, for example, be provided by the party manufacturing and/or commercialising the objects to be authenticated or, alternatively, may also be acquired by the party supposed to provide for or to perform the authentication, possibly by examining the objects to be authenticated, preferably in a non-invasive manner, e.g. by determining/measuring the parameters/features/dimensions of original objects to be authenticated without disassembling these objects. The fingerprint x.sub.i, as already mentioned in the introduction, typically represents individual marking means which are voluntarily introduced and/or are inadvertently present in the objects to be authenticated.
[0110] In a general case, the authentication of a physical object can be performed based on the verification of correspondence between the features of original objects and those of fake ones. Assuming a special case where the original objects are represented by the fingerprints {x.sub.i}.sub.i=1.sup.N and the fakes by the fingerprints {f.sub.i}.sub.i=1.sup.N for a particular application of interest with fixed parameters, one can train a supervised classifier to address the authentication problem. It is important to note that: (a) the class of original and the class of fake objects are represented by the corresponding sets and there is no direct correspondence between a pair x.sub.i and f.sub.i, i.e. the classifier does not use the information about which fingerprint x.sub.i the fake f.sub.i is produced from, and (b) the classification is not based on a verification of the proximity of the probe y to a particular claimed object with the index i represented by its fingerprint x.sub.i and blueprint t.sub.i. At the authentication stage, the trained classifier should decide whether the probe y is closer to the class of the originals or to that of the fakes. This might be considered a sort of generic forensic verification as considered in [63].
[0111] Furthermore, in case the defender that trains the classifier knows the correspondence between the triplets {x.sub.i, x′.sub.i, f.sub.i}.sub.i=1.sup.N, the authentication process can be organized as shown in
[0112] Once the embedding operator g.sub.θ.sub.
[0114] Such an authentication process is intuitively simple, but at the same time presents a number of practical restrictions and faces several issues: [0115] 1) fakes are rarely available at the training stage, and training on the original fingerprints only, while interpreting them as opposite to fakes, might in some cases lead to a wrong embedding space and to vulnerability to fakes that are closer to the originals than the supposed original opponents; [0116] 2) even if some fakes of original objects were available at training time, the actual fakes produced by the counterfeiter at test time might differ considerably from those used for the training; [0117] 3) the considered authentication is based solely on the availability of the fingerprint x.sub.i for each physical object, which might not be the case in real applications due to the high cost of fingerprint enrollment for large-scale applications, infrastructure and database management, or other technical constraints; [0118] 4) the scheme presented in
[0120] Therefore, both systems considered in
[0121] To resolve these challenges, the method of physical object authentication according to the present invention is based on the fact that the presence of paired fake examples {x.sub.i, f.sub.i}.sub.i=1.sup.N, or even unpaired examples {x.sub.i}.sub.i=1.sup.N and {f.sub.i}.sub.i=1.sup.N, is not required at the training stage. This makes the proposed method better suited to practical requirements and more robust to the potentially broad variability of classes of fakes and attacks. Furthermore, the authentication should be adaptable to the real cases where only blueprints {t.sub.i}.sub.i=1.sup.N are available for the authentication, which does not require complex enrollment of fingerprints from each physical object under protection; where both some paired blueprints and fingerprints {t.sub.i,x.sub.i}.sub.i=1.sup.N are available, if such an option exists in some medium- and small-scale applications; or where even unpaired sets of blueprints {t.sub.i}.sub.i=1.sup.N and some number of fingerprints {x.sub.j}.sub.j=1.sup.J are available at training. An authentication is therefore considered that can incorporate all these situations as particular cases without a need to change the authentication architecture for each particular case. This makes the proposed architecture universal and scalable to various practical use cases.
[0122] To this effect, an authentication method according to the present invention based only on the blueprint t.sub.i will be described first, and it will then be extended to the above cases, along with the introduction of the corresponding architecture and by addressing each case in detail.
[0124] It should be pointed out that the mapper (200) bears some similarity to the mapper (100) in
[0125] In the case of a learnable mapper (200), one can aim to minimize the multi-metric (230) at the training stage and then use the trained mapper to train the classifier (250). The testing stage is the same as described above. The role and importance of training the mapper (200) will be further demonstrated in
[0127] To emphasize the importance of the mapper (200) and the role of multi-metrics in the methods according to
[0128] The authentication with respect to the reference blueprint t.sub.i is shown in
[0131] Several multi-metrics were used, and the 2D plots demonstrate the achieved separability between the classes of the original objects and the four types of considered fakes in these metrics. Even under simple copy-machine fakes, none of the pairs of the considered metrics is capable of producing a reliable separability between the classes, for both the authentication based on the reference blueprint and that based on the fingerprint. In this particular case, it was found experimentally that the pair of the Pearson correlation between the blueprint t.sub.i and probe y and the Hamming distance between the blueprint t.sub.i and the binary quantized probe T.sub.Otsu(y), with the mapper (200) implemented via Otsu threshold selection, produced the best, though still imperfect, separability between the classes among all considered cases. That is why this pair was chosen to exemplify the decision boundaries of the OC-classifier (250) trained on the pair of these metrics as shown in
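For illustration, the hand-crafted baseline described in this paragraph can be sketched as follows. This is a minimal numpy sketch under stated assumptions (8-bit grayscale probes, binary blueprints), not the patented system; all function names are illustrative:

```python
import numpy as np

def otsu_threshold(img):
    """Exhaustive Otsu threshold search over 8-bit gray levels."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = img.size
    best_t, best_var = 0, -1.0
    levels = np.arange(256)
    for t in range(1, 256):
        w0 = hist[:t].sum()
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * hist[:t]).sum() / w0
        mu1 = (levels[t:] * hist[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def multi_metric(blueprint, probe):
    """Two of the similarity scores discussed above: Pearson correlation
    between t and y, and Hamming distance between t and the
    Otsu-binarized probe T_Otsu(y)."""
    t = blueprint.astype(float).ravel()
    y = probe.astype(float).ravel()
    pearson = np.corrcoef(t, y)[0, 1]
    y_bin = (probe >= otsu_threshold(probe)).astype(int).ravel()
    t_bin = (blueprint > 0).astype(int).ravel()
    hamming = np.mean(t_bin != y_bin)
    return np.array([pearson, hamming])
```

The resulting two-dimensional score vector is exactly the kind of feature on which the OC-classifier decision boundaries are then learned.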
[0132] To resolve the inability of the above-considered classification criteria to reliably distinguish the fakes from the originals, the present invention discloses a method where the mapping between the fingerprints and blueprints is based on a trainable model. Moreover, instead of a simple minimization of multi-metrics between the estimated and reference blueprints to train the mapper, we will consider a generalized approach in which the above strategy is just a particular case. Along this way, it is assumed that the blueprints and fingerprints are statistically dependent and governed by a joint distribution p(t, x). This assumption is easily justified in practice due to the processing chain presented in
Direct Path System
[0133] The direct path authentication system is shown in
[0134] The reconstruction of the fingerprint is performed according to the metrics defined in the module (500), which consists of paired similarity metrics (221) and unpaired discriminators (220) defining the approximation to the mutual information term I.sub.E,D(T; X). The implementation of the metrics is similar to the above.
[0135] At stage 2, the outputs of all paired and unpaired metrics are concatenated into a feature vector that serves as an input to the OC-classifier (250). It should be pointed out that the complexity of training the OC-classifier on the proposed feature vector is considerably lower than in methods based on direct classification of samples in the high-dimensional space of input data. The training of classifiers in the input space is highly complex, even when using the dual form of representation in systems based on the OC-SVM. Therefore, by considering the input of the OC-classifier (250) as a concatenated vector of multi-metric scores, one can considerably reduce the complexity of the classifier training, in a domain where the classes are better separable.
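As a concrete instance, the hand-crafted decision rule variant of the OC-classifier mentioned in claim 9 could bound the multi-metric scores of the originals as in the following Python sketch; the class name and the margin parameter are illustrative assumptions, not taken from the patent:

```python
import numpy as np

class BoundingBoxOCC:
    """Hand-crafted one-class rule: axis-aligned bounds on the concatenated
    multi-metric score vector, widened by a relative margin. One simple
    realization of the OC-classifier (250)."""
    def __init__(self, margin=0.1):
        self.margin = margin

    def fit(self, scores):
        # scores: (n_samples, n_metrics) feature vectors of authentic objects
        lo, hi = scores.min(axis=0), scores.max(axis=0)
        span = np.maximum(hi - lo, 1e-12)
        self.lo = lo - self.margin * span
        self.hi = hi + self.margin * span
        return self

    def predict(self, scores):
        # +1 = authentic (inside the bounds on every metric), -1 = fake
        inside = np.all((scores >= self.lo) & (scores <= self.hi), axis=1)
        return np.where(inside, 1, -1)
```

A kernel one-class SVM would replace the axis-aligned box with a learned boundary, but the interface (fit on authentic score vectors, predict on probe score vectors) stays the same.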
[0136] The direct system is based on the encoder (200) and the decoder (210), which are trained jointly based on the maximization of two mutual information terms I.sub.E(X; T) and I.sub.E,D(T; X). The first term I.sub.E(X; T), represented by (400), denotes the mutual information between the fingerprint and blueprint considered via the encoder (200) and decomposed as [79]:
I.sub.E(X;T): =−D.sub.KL(p.sub.Data(t)∥p.sub.E(t);E)−H.sub.E(T|X), (2)
where D.sub.KL(p.sub.Data(t)∥p.sub.E(t); E)=D.sup.t{tilde over (t)}({tilde over (t)}) denotes the Kullback-Leibler divergence (KLD) between the blueprint data distribution p.sub.Data(t) and the encoded one p.sub.E(t). We will refer to the KLD as a discriminator between the two distributions p.sub.Data(t) and p.sub.E(t). The discriminator estimates the proximity of two distributions represented by the samples {{tilde over (t)}.sub.j}.sub.j=1.sup.J and {t.sub.j}.sub.j=1.sup.J generated from these distributions and can be implemented based on: (a) class probability estimation based on density ratio estimation [80], (b) divergence minimization [82], (c) ratio matching [83], or alternatively moment matching implemented with kernels and known as the maximum mean discrepancy [101].
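Of the listed discriminator options, the moment-matching one is the most compact to sketch. Below is a hedged numpy illustration of a biased squared maximum mean discrepancy estimate with an RBF kernel; the kernel bandwidth gamma is an assumption for illustration, not a value specified by the patent:

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    """RBF (Gaussian) kernel matrix between sample sets a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2_biased(x, y, gamma=1.0):
    """Biased estimate of the squared maximum mean discrepancy between two
    sample sets: a moment-matching proxy for the discriminator term
    estimating the proximity of two distributions from their samples."""
    return (rbf_kernel(x, x, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean()
            + rbf_kernel(y, y, gamma).mean())
```

A small MMD value indicates that the encoded samples are close in distribution to the reference samples; a large value signals a distributional mismatch.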
[0137] The term H.sub.E(T|X)=−E.sub.p(t,x)[log q.sub.E(t|x)] denotes the conditional entropy, where E.sub.p(t,x)[.] denotes the mathematical expectation with respect to the distribution p(t, x). We define q.sub.E(t|x)∝exp(−λ.sub.t{tilde over (t)}d.sup.t(t,{tilde over (t)})), where d.sup.t(t,{tilde over (t)}) denotes the paired similarity metric between the blueprint t and its encoder estimate {tilde over (t)}.
[0138] The second term I.sub.E,D(T; X) represented by (500) denotes the mutual information between the encoded blueprint and targeted fingerprint considered via the encoder (200) and decoder (210) and decomposed as:
I.sub.E,D(T;X):=−D.sub.KL(p.sub.Data(x)∥p.sub.D(x);E,D)−H.sub.E,D(X|T), (3)
[0139] where D.sub.KL(p.sub.Data(x)∥p.sub.D(x); E, D)=D.sup.x{circumflex over (x)}({circumflex over (x)}) denotes the Kullback-Leibler divergence between the fingerprint data distribution p.sub.Data(x) and its reconstructed counterpart p.sub.D(x). The second term in the above decomposition is H.sub.E,D(X|T)=−E.sub.p(t,x)[log p.sub.D(x|{tilde over (t)})], the conditional entropy defined analogously to (2) with p.sub.D(x|{tilde over (t)})∝exp(−λ.sub.x{circumflex over (x)}d.sup.x(x,{circumflex over (x)})) and {circumflex over (x)}=g.sub.D(f.sub.E(x)).
[0140] The direct path architecture training problem is based on the maximization problem:
(Ê,{circumflex over (D)})=argmax.sub.E,D[I.sub.E(X;T)+λI.sub.E,D(T;X)], (4)
that consists in finding the parameters of the encoder and decoder (Ê, {circumflex over (D)}), with λ denoting the Lagrangian multiplier controlling the trade-off between the two terms.
[0141] The maximization problem (4) is reduced to a minimization problem using (2) and (3):
(Ê,{circumflex over (D)})=argmin.sub.E,DΛ.sup.x(E,D), (5)
where Λ.sup.x(E,D)=[D.sup.t{tilde over (t)}({tilde over (t)})+λ.sub.t{tilde over (t)}d.sup.t(t, {tilde over (t)})]+λ[D.sup.x{circumflex over (x)}({circumflex over (x)})+λ.sub.x{circumflex over (x)}d.sup.x(x,{circumflex over (x)})].
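To make the direct-path objective concrete, the following toy numpy sketch trains scalar linear encoder and decoder maps by minimizing a simplified Λ.sup.x that keeps only the paired squared-error terms d.sup.t and d.sup.x; the discriminator terms, the linear model, and all names are illustrative assumptions, not the patented deep architecture:

```python
import numpy as np

def train_direct_path(T, X, lam=1.0, lr=1e-2, epochs=500):
    """Toy direct path x -> t~ = E*x -> x^ = D*t~ with scalar linear maps.
    Minimizes mean[(t - t~)^2] + lam * mean[(x - x^)^2], i.e. a simplified
    Lambda^x with the discriminator terms D^{t t~} and D^{x x^} omitted."""
    rng = np.random.default_rng(0)
    E, D = rng.normal(), rng.normal()        # scalar "networks"
    for _ in range(epochs):
        t_tilde = E * X                      # encoder estimate of the blueprint
        x_hat = D * t_tilde                  # decoder reconstruction of the fingerprint
        # gradients of the simplified objective w.r.t. E and D
        gE = (-2.0 * (T - t_tilde) * X - lam * 2.0 * (X - x_hat) * D * X).mean()
        gD = (-lam * 2.0 * (X - x_hat) * E * X).mean()
        E -= lr * gE
        D -= lr * gD
    loss = ((T - E * X) ** 2).mean() + lam * ((X - D * E * X) ** 2).mean()
    return E, D, loss
```

With blueprints that are linearly related to the fingerprints, the trained pair satisfies E·D ≈ 1, i.e. the decoder learns to invert the encoder on the data manifold, which is the behavior the two mutual information terms jointly encourage.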
[0142] The discriminators (231) and (221) are fixed during the above training. Once the parameters of the encoder and decoder have been estimated at this stage, the discriminators are updated and the next training epoch is run.
[0143] Once the encoder-decoder pair and the discriminators are trained, the multi-metric scores are used for training the OC-classifier (250), which finalizes the training of the direct path.
[0144] At the testing stage of the direct path such as shown in
[0145] It should be pointed out that the direct system presented here is a generalization of the system presented in
Reverse Path System
[0147] The reverse system is based on the encoder (200) and the decoder (210), which are trained jointly based on the maximization of two mutual information terms I.sub.D(X; T) and I.sub.D,E(T; X). It represents a reversed version of the direct system. The first term I.sub.D(X; T), represented by (600), denotes the mutual information between the fingerprint and blueprint considered via the decoder (210) and decomposed as [79]:
I.sub.D(X;T):=−D.sub.KL(p.sub.Data(x)∥p.sub.D(x);D)−H.sub.D(X|T), (6)
where D.sub.KL(p.sub.Data(x)∥p.sub.D(x); D)=D.sup.x{tilde over (x)}({tilde over (x)}) denotes the Kullback-Leibler divergence between the fingerprint data distribution p.sub.Data(x) and the decoded one p.sub.D(x). The term H.sub.D(X|T)=−E.sub.p(t,x)[log p.sub.D(x|t)] denotes the conditional entropy, where E.sub.p(t,x)[.] denotes the mathematical expectation with respect to the distribution p(t, x). We define p.sub.D(x|t)∝exp(−λ.sub.x{tilde over (x)}d.sup.x(x,{tilde over (x)})), where d.sup.x(x,{tilde over (x)}) denotes the paired similarity metric between the fingerprint x and its decoder estimate {tilde over (x)}.
[0148] The second term I.sub.D,E (T; X) represented by (700) denotes the mutual information between the decoded fingerprint and targeted blueprint considered via the decoder (210) and encoder (200) and decomposed as:
I.sub.D,E(T;X):=−D.sub.KL(p.sub.Data(t)∥p.sub.E(t);E,D)−H.sub.D,E(T|X), (7)
where D.sub.KL(p.sub.Data(t)∥p.sub.E(t);E,D)=D.sup.t{circumflex over (t)}({circumflex over (t)}) denotes the Kullback-Leibler divergence between the blueprint data distribution p.sub.Data(t) and its reconstructed counterpart p.sub.E(t), with {circumflex over (t)}=f.sub.E(g.sub.D(t)).
[0149] The second term H.sub.D,E(T|X)=−E.sub.p(t,x)[log p.sub.E(t|x)] denotes the conditional entropy, where we define p.sub.E(t|x)∝exp(−λ.sub.t{circumflex over (t)}d.sup.t(t,{circumflex over (t)})).
[0150] The reverse path training problem is based on the maximization problem:

(Ê,{circumflex over (D)})=argmax.sub.E,D[I.sub.D(X;T)+αI.sub.D,E(T;X)], (8)

[0151] The maximization problem (8) is reduced to a minimization problem using (6) and (7):

(Ê,{circumflex over (D)})=argmin.sub.E,DΛ.sup.t(E,D), (9)

where Λ.sup.t(E,D)=[D.sup.x{tilde over (x)}({tilde over (x)})+λ.sub.x{tilde over (x)}d.sup.x(x,{tilde over (x)})]+α[D.sup.t{circumflex over (t)}({circumflex over (t)})+λ.sub.t{circumflex over (t)}d.sup.t(t,{circumflex over (t)})], where α is a Lagrangian coefficient.
[0152] Once the encoder-decoder and discriminators are trained, the multi-metric scores are used for the OC-classifier (250) training, which finalizes the training of the reverse path.
[0153] At the testing stage of the reverse path such as shown in
Two-Way System
[0154] The two-way system is based on the direct and reverse paths and is shown in
Λ.sup.Two-way(E,D)=Λ.sup.x(E,D)+βΛ.sup.t(E,D), (10)
where β denotes the Lagrangian multiplier, as a combination of the direct and reverse objectives. At the same time, the two-way system is not just a plain combination of the previous objectives. It also includes cross terms between the two modalities. It can be formalized as a decomposition into six terms of mutual information as pointed out above. Therefore, the mutual information includes four terms considered in the direct and reverse parts, i.e. I.sub.E(X;T), I.sub.E,D(T;X), I.sub.D(X;T) and I.sub.D,E(T;X), and two cross-terms between the direct and reverse parts denoted as I.sub.E,D,D(T;X) and I.sub.D,E,E(T;X). Each mutual information is decomposed into two terms comprising the paired similarity metric and the discriminator term, computed using multi-metric approximations similar to the direct and reverse systems.
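The composition of the two-way objective from the direct and reverse ones, per equation (10), can be sketched as follows (pure Python; the generic `path_objective` helper and the numeric values are illustrative assumptions, and the cross-terms would contribute further additive scores of the same form):

```python
def path_objective(D_primary, d_primary, D_aux, d_aux, lam_p, lam_a, lam):
    """Generic single-path objective: [D + lam_p * d] + lam * [D_aux + lam_a * d_aux].
    Instantiates Lambda^x (direct path) or Lambda^t (reverse path)
    depending on which modality plays the primary role."""
    return (D_primary + lam_p * d_primary) + lam * (D_aux + lam_a * d_aux)

def two_way_objective(direct_terms, reverse_terms, beta=1.0):
    """Lambda^{Two-way}(E, D) = Lambda^x(E, D) + beta * Lambda^t(E, D)."""
    return path_objective(**direct_terms) + beta * path_objective(**reverse_terms)

# Toy scalar scores for one sample (illustrative values only).
terms = dict(D_primary=1.0, d_primary=2.0, D_aux=3.0, d_aux=4.0,
             lam_p=1.0, lam_a=1.0, lam=0.5)
total = two_way_objective(terms, terms, beta=2.0)
```

Because both paths share the encoder and decoder, minimizing the combined objective trains a single encoder-decoder pair against all six mutual-information terms at once.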
[0155] The authentication stage represents the testing and is shown in
[0156] Assuming that a fake originates from the HC and ML attacks targeting the estimation of t.sub.i from x.sub.i, with a further accurate reproduction of the estimated blueprint thus creating a fake object, the defender is interested in introducing some distortions during the reproduction of blueprint t.sub.i into a physical object x.sub.i. These distortions will prevent the HC and ML attacks from an accurate estimation of {tilde over (t)}.sub.i. However, at the same time, this creates a considerable difference between the blueprint t.sub.i and its fingerprint x.sub.i even for the authentic objects, which makes the distinction between the originals and fakes challenging. That is why the usage of a sole direct-system-based authentication might be insufficient in view of the low accuracy of prediction of {tilde over (t)} from y. At the same time, a sole reverse-system-based authentication is based on the generation of {tilde over (x)}.sub.i from t.sub.i and its comparison with the probe y. If the level of distortions is high and the distortions are produced at random, the accuracy of prediction might also degrade. That is why the two-way system has several additional options, where x.sub.i is compared with y directly and {tilde over (x)}.sub.i is compared with {circumflex over (x)}.sub.i and with x.sub.i. The same is valid for the blueprint part. The advantage of the two-way system is that all possible combinations are present in the multi-metric feature vector, and the OC-classifier can automatically choose the metric or dimension representing the most informative component for a particular case of interest.
[0157] To demonstrate the advantages of such a multi-metric system in practice, a simple setup of the direct path:
Λ.sup.x(E,D)=d.sub.l.sub.
was trained, i.e. based only on similarity metrics without discriminators. Once the encoder-decoder pair was trained, the feature vector combining several multi-metrics was constructed.
[0158]
[0159]
[0160] Furthermore, the method according to the present invention can also be trained not only on the original paired data but also on unpaired data. To exemplify this possibility, we will consider the reverse path, whereas the direct path is treated symmetrically as demonstrated above. The methods representing the reverse path of the proposed method for the unpaired data are presented in
[0161]
[0162]
Generation of Synthetic Examples
[0163] A system according to the present invention can also be used for the generation of synthetic samples of both original fingerprints and fakes. The availability of synthetic samples may be of great advantage for enhancing the accuracy of authentication, as will be explained below. For example, the trained decoder (210) of systems presented in
[0164] At the same time, the parameters determining the training of the system, such as the above-mentioned Lagrangian coefficients, can be chosen depending on the desired proximity of the produced synthetic samples to the reference ones. To exemplify these possibilities, we will use the two-way system represented by the direct path:
Λ.sup.x(E,D)=d.sub.l.sub.
and by the reverse path:
Λ.sup.t(E,D)=d.sub.l.sub.
[0165] The encoder and decoder of this system are implemented as a U-NET architecture for demonstration purposes. This is a very simple system with fast convergence, and we train it on the originals and four types of fakes presented in
[0166] Several examples of generated synthetic originals are shown in
Usage of Generation of Synthetic Examples
[0167] The generated synthetic samples simulating both originals and fakes can be used in several ways for enhanced authentication. To demonstrate these possibilities, without loss of generality, we will assume that only original fingerprints are available, while collecting real fakes from physical objects represents a great challenge in view of the large variability of possible attacking strategies. At the same time, acquiring 100-200 images from the produced objects does not represent any significant time or cost burden in view of the existing quality-control stages at many manufacturing processes. Furthermore, we will only consider the paired setup in view of the above, whereas the extension to the unpaired system is straightforward according to the above considerations.
[0168] Therefore, given the training dataset {t.sub.i,x.sub.i}.sub.i=1.sup.N, the above described synthetic sample generation system is trained only on the originals with three parameter settings λ=1, λ=10 and λ=25. New samples, unseen at the training stage, are passed through the trained decoders to perform the mapping t.fwdarw.{tilde over (x)}. To visualize the effect of training, the latent space of a fully supervised classifier trained to classify the originals and four types of fakes was used. The latent space of this classifier therefore reflects the relative location of the manifolds of the considered classes. The synthetic samples for the three values λ=1, λ=10 and λ=25 are passed through this classifier, and the latent space representing the pre-final layer before the soft-max output is visualized in the form of a t-SNE diagram in
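The visualization step can be sketched as follows. The text uses t-SNE on the pre-softmax latent space; this dependency-free sketch substitutes a PCA projection via SVD and toy latent clusters in place of the real classifier features (all names and data here are illustrative assumptions):

```python
import numpy as np

def project_2d(latent):
    """Project pre-softmax latent vectors to 2-D for inspection.
    A PCA projection via SVD is used here as a dependency-free
    stand-in for the t-SNE embedding used in the text."""
    centered = latent - latent.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

rng = np.random.default_rng(2)
# Toy latents: originals plus two synthetic clusters whose spread
# mimics decoders trained with a tight vs. loose Lagrangian setting.
originals = rng.standard_normal((100, 32))
synth_tight = originals + 0.5 * rng.standard_normal((100, 32))
synth_loose = originals + 3.0 * rng.standard_normal((100, 32))
emb = project_2d(np.vstack([originals, synth_tight, synth_loose]))
```

In the projected plane, samples generated with a tighter proximity constraint cluster near the manifold of the originals, while loosely constrained samples scatter further out, which is the effect the t-SNE diagram is meant to show.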
[0169] The first operational case is shown in
[0170] The second operational case is shown in
[0171] Finally, the third operational case shown in
[0172] The last two cases illustrate several possibilities of how the synthetic samples can be used for authentication enhancement.
[0173] As a first option, one can use the physical original fingerprints and the generated “fakes” to train a supervised binary classifier. Assuming that the fakes are generated close to the manifold of the originals and can be considered worst-case fakes in the sense of proximity, the decision boundary of a classifier trained on these two classes will also reliably distinguish the other types of fakes that lie at a large “distance” from this decision boundary. The experimental results validate that such a classifier can robustly reject all fakes even without knowing the origin of the physical fakes and without having seen them at the training stage.
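A minimal sketch of this first option, assuming a plain logistic-regression classifier as a stand-in for the supervised binary classifier and synthetic near-manifold “fakes” generated as perturbed originals (all data and names are illustrative):

```python
import numpy as np

def train_logreg(X, y, lr=0.5, steps=500):
    """Plain logistic regression by gradient descent: a minimal
    stand-in for the binary classifier trained on physical originals
    (label 0) vs. generated worst-case fakes (label 1)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

rng = np.random.default_rng(3)
# Originals cluster and near-manifold synthetic fakes (small offset).
originals = rng.standard_normal((200, 8))
near_fakes = originals + 1.5 + 0.3 * rng.standard_normal((200, 8))
X = np.vstack([originals, near_fakes])
y = np.concatenate([np.zeros(200), np.ones(200)])
w, b = train_logreg(X, y)
# A cruder, far-away fake lies even deeper on the fake side of the
# boundary learned from the near-manifold fakes.
far_fake = originals[0] + 6.0
```

The design intuition matches the text: if the decision boundary separates originals from the closest plausible fakes, fakes lying further from the original manifold fall on the fake side a fortiori.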
[0174] As a second option, one can train the triplet loss system shown in
[0175] As another embodiment, one can use the synthetic examples and train the direct, reverse or two-way system in a contrastive way. An example of such a contrastive training of the direct system is schematically shown in
[0176] It is important to point out that the maximization and minimization of the above mutual information terms is performed on the shared encoder (200) and decoder (210). Therefore, both the encoder and the decoder are trained to provide a reconstruction that is maximally close to the original class if the input resembles an original, and otherwise close to the class of fakes. Finally, at stage 2, the outputs of blocks (500) and (400) form a feature vector that is used for the training of the OC-classifier (250).
Security and Fingerprint-Based Authentication
[0177] In addition, it should be noted that the considered pairs (t.sub.i,x.sub.i) can come either from a physical distribution p(t,x) or from pair-wise assignments, when each physical fingerprint x.sub.i is assigned to some blueprint t.sub.i. In this case, the assigned blueprint t.sub.i might be generated from some distribution and can represent some sort of a random key. This additionally creates a form of security and protection against ML attacks, when the attacker might have access only to the fingerprints x.sub.i acquired from the authentic objects while the blueprints can be kept secret. The attacker can generally proceed with unpaired training, if the distribution from which the blueprints are generated is known, as opposed to the defender, who trains the authentication system in a supervised way based on the available pairs (t.sub.i,x.sub.i). The supervised training will produce higher accuracy and lead to an information advantage of the defender over the attacker.
[0178] In another embodiment of the method according to the present invention, one can consider a triple of blueprint, secret key and fingerprint as a combination of the above disclosed methods.
[0179] In another embodiment of the method according to the present invention, the authentication system might have access to the fingerprints only. Such a situation is typical, for example, in biometrics or in natural-randomness-based PUF applications.
[0180] In still another embodiment of the method according to the present invention, the authentication system might have access to the blueprint only. In any of the embodiments disclosed above, the blueprint may additionally be secured and kept secret by any kind of securing means adapted for this purpose, given that manufacturers usually do not wish to disclose in detail the templates used for production of their products. For example, at the stage of generating, providing or acquiring the blueprint t.sub.i, the latter may be modulated by use of a secret key k, by use of a secret mapping, by use of a space of secret carriers of a transform domain or the like, such as to produce a modulated blueprint t.sup.s.sub.i which is not easily accessible to any potential attacker and addresses security issues on the side of the manufacturers of the objects to be authenticated.
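One simple instance of such a modulation can be sketched as an XOR of a binary blueprint with a key-derived pseudorandom mask. The hash-based mask derivation below is an illustrative assumption, not the securing means disclosed here:

```python
import hashlib
import numpy as np

def modulate_blueprint(t, key):
    """Modulate a binary blueprint t with a secret key: XOR with a
    key-derived pseudorandom binary mask. This is one illustrative
    instance of a 'secret mapping'; the SHA-256-seeded mask is an
    assumption for the sketch, not the patent's scheme."""
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    mask = np.random.default_rng(seed).integers(0, 2, t.shape, dtype=t.dtype)
    return t ^ mask  # XOR is self-inverse: applying it again demodulates

t = np.random.default_rng(4).integers(0, 2, (16, 16), dtype=np.uint8)
t_secure = modulate_blueprint(t, key="manufacturer-secret")
t_back = modulate_blueprint(t_secure, key="manufacturer-secret")
```

Since XOR is self-inverse, re-applying the modulation with the same key recovers the original blueprint, while a wrong key yields an unrelated pattern, which keeps the blueprint inaccessible to an attacker who only observes t.sup.s.sub.i.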
Encoder and Decoder Architectures
[0181] The encoder and decoder in the considered authentication problem can be implemented based on the same architectures. The encoder receives as an input the fingerprint and outputs the blueprint and the decoder receives as an input the blueprint and outputs the fingerprint. Therefore, to proceed with a general consideration we will assume that the input to the encoder/decoder is a and the output is b.
[0182] The encoder/decoder structure can be deterministic, i.e. perform a one-to-one mapping of a to b, or stochastic, when for one input a the encoder/decoder might generate multiple outputs b. In the deterministic case, the encoder/decoder can be implemented based, for example, on the U-NET architecture [74], or on several CNN downsampling layers followed by several ResNet layers acting as a transformer of distribution and several CNN upsampling layers [98], where the ResNet transformation layers can also be replaced by other transformation models such as normalizing FLOW [102], neural network FLOWS [103] or similar architectures. All these structures can be summarized as architectures consisting of conv-layers, transformation layers and deconv-layers, with the particularities of how to implement each of them depending on the particular problem of interest. The training of the encoder/decoder is also based on an element of stochasticity introduced by perturbations of the input data based on sample-wise non-linear transformations, addition of noise, filtering as well as geometrical transformations.
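The conv-layers / transformation-layers / deconv-layers structure can be sketched at the shape level as follows. This numpy sketch uses average pooling, a fixed residual nonlinearity and nearest-neighbour upsampling as non-trainable stand-ins for the CNN, ResNet/FLOW and deconvolution layers; all function names are illustrative:

```python
import numpy as np

def downsample(a):
    """2x2 average pooling: stands in for a strided conv downsampling layer."""
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def transform(a):
    """Residual 'transformation' block: identity skip plus a mapping
    (a fixed nonlinearity here, as a placeholder for ResNet/FLOW layers)."""
    return a + np.tanh(a)

def upsample(a):
    """Nearest-neighbour 2x upsampling: stands in for a deconv layer."""
    return a.repeat(2, axis=0).repeat(2, axis=1)

def encoder_decoder(a, depth=2):
    """conv-layers -> transformation layers -> deconv-layers, preserving
    the input resolution, as described for the deterministic case."""
    for _ in range(depth):
        a = downsample(a)
    a = transform(a)
    for _ in range(depth):
        a = upsample(a)
    return a

a = np.random.default_rng(5).random((32, 32))
b = encoder_decoder(a)
```

A stochastic variant would additionally inject randomness, for example by adding a noise map to `a` before `transform`, matching the noise-injection options described for the stochastic structure.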
[0183] The stochastic encoder/decoder structure is additionally complemented by a source of randomness that can be injected at the input, in the latent space or at the decoder by concatenation, addition, multiplication or similar operations.
[0184] The discriminators can be applied at the global level, to the blueprint/fingerprint as a whole, or at the local level, to parts of the blueprint/fingerprint, with a corresponding fusion of the local discriminators' outputs.
[0185] The proposed method can be used in a number of applications. Without pretending to be exhaustive, the following overview names just a few of them, assuming that similar setups are easily recognized by a person skilled in the art.
[0186] The authentication of watches is an example of use of the method for authentication of physical objects described above and, in particular, typically represents an application of the method to authentication of batch objects. It may, however, also represent an application of the method to authentication of individually created objects, as may be the case for certain types of luxury watches produced only in very limited numbers or even as a single piece, even though production is of course based on a blueprint. In general, the authentication of watches and of any other type of (high-value or luxury) products is based on the measurement of the proximity of the manufactured watch/product to the blueprint, where the blueprint is considered as a common design for the batch of watches/products of a given model, preferably in combination with individual features of each watch/product which are present in the fingerprint and which correspond to the physically unclonable features of an individual watch/product within said batch of watches/products. These features might be of natural and/or artificial origin, such as mentioned in the introduction of the present patent application in the context of describing four basic groups of prior art approaches. Authentication of watches may be performed from the front and/or back sides of the watch and/or of its components, for example as seen through the watch glass or through a skeleton back side of a watch, where the blueprint represents the artwork design given in any suitable encoding form and the fingerprint represents signals acquired from the physical object. The method can be applied to watches and/or their components regardless of the materials from which these are produced. Preferably, the authentication concerns the elements of the watch design where the blueprint-fingerprint pairs are available in any form suitable for easy and robust verification.
The authentication can also jointly cover elements of the watch and of the watch packaging, or the watch and a corresponding watch certificate. If so required, some extra synchronisation based on the watch design might be added. Imaging of the watch may be realized by a mobile phone equipped with a camera, a portable microscope, or any other device adapted for this purpose. Furthermore, the imaging modalities of the acquisition device may be adapted to provide for acquisition of probe signals through the watch glass, without the necessity to disassemble the watch for performing the authentication.
[0187] The authentication of packaging in any application requiring security features against counterfeiting and for brand protection, where the blueprint represents the design of the artwork encoded in any graphical format and the fingerprint represents signals acquired from the printed packaging. The packaging can be, but is not limited to, a primary packaging such as a syringe, a capsule, a bottle, etc., or a secondary packaging such as a box, a special shipment box or a container.
[0188] The authentication of banknotes in any form of printing including the embedded security features.
[0189] The authentication of elements of designs represented by various encoded modalities such as 1D and 2D codes, elements of design including the halftone patterns represented in any form of reproduction in black and white and color representation, security elements representing various special patterns difficult to clone or reproduce ranging from simple random patterns to complex guilloche ones, or special security taggants.
[0190] The authentication of printed text, logos and stamps reproduced on any documents such as contracts, certificates, forms, etc.
[0191] The authentication of holograms in any form of reproduction.
[0192] The authentication of payment cards in any part of graphical and embedded elements of text, design, chips, etc.
[0193] The authentication of identity documents, including but not limited to identification documents such as ID cards, passports, visas, etc., where the blueprint can be represented by human biometrics stored in printed form or on a storage device and the fingerprint represents signals acquired from the person.
[0194] At the same time, the above examples do not exclude that the proposed methods are applicable to many kinds of products, which are (but are not limited to) the following: anti-counterfeiting labels or packaging, boxes, shipping invoices, tax stamps, postage stamps and various printed documents associated with the product for authentication and certification of its origin; medical prescriptions; medicines and pharmaceutical products including but not limited to cough drops, prescription drugs, antibiotics, vaccines, etc.; adulterated food, beverages, alcohol as well as coffee and chocolate; baby food and children toys; clothing, footwear and sportswear; health, skin care products, personal care and beauty aids items including perfume, cosmetics, shampoo, toothpaste, etc.; household cleaning goods; luxury goods including watches, clothing, footwear, jewellery, glasses, cigarettes and tobacco, products from leather including handbags, gloves, etc. and various objects of art; car, helicopter and airplane parts and electronic chipsets for computers, phones and consumer electronics; prepaid cards for communications or other services using similar protocol of credit recharging; computer software, video and audio tapes, CDs, DVDs and other means of multimedia data storage with music, movies and video games.
[0195] The proposed authentication should also provide a secure link to the blockchain records.
[0196] The invention should be considered as comprising all possible combinations of every feature described in the instant specification, appended claims, and/or drawing figures, which may be considered new, inventive and industrially applicable. In particular, other characteristics and embodiments of the invention are described in the appended claims.
[0197] The following list enumerates all references which are cited in the above description: [0198] [1] Sviatoslav Voloshynovskiy, Oleksiy Koval, and Thierry Pun, “Secure item identification and authentication system based on unclonable features,” Patent, 22 Apr. 2014, U.S. Pat. No. 8,705,873. [0199] [2] Frederic Jordan, Martin Kutter, and Céline Di Venuto, “Means for using microstructure of materials surface as a unique identifier,” Patent, 15 Mar. 2007, WO2007/028799. [0200] [3] Kariakin Youry, “Authentication of articles,” Patent, 10 Jul. 1997, WO1997/024699. [0201] [4] Russell Paul Cowburn, “Methods and apparatuses for creating authenticatable printed articles and subsequently verifying them,” Patent, 22 Sep. 2005, WO2005/088517. [0202] [5] Chau-Wai Wong and Min Wu, “Counterfeit detection based on unclonable feature of paper using mobile camera,” IEEE Transactions on Information Forensics and Security, vol. 12, no. 8, pp. 1885-1899, 2017. [0203] [6] Rudolf Schraml, Luca Debiasi, and Andreas Uhl, “Real or fake: Mobile device drug packaging authentication,” in Proceedings of the 6th ACM Workshop on Information Hiding and Multimedia Security, New York, N.Y., USA, 2018, IH&MMSec '18, pp. 121-126, Association for Computing Machinery. [0204] [7] Paul Lapstun and Kia Silverbrook, “Object comprising coded data and randomly dispersed ink taggant,” Patent, 5 Mar. 2013, U.S. Pat. No. 8,387,889. [0205] [8] Riikka Arppe and Thomas Just Sørensen, “Physical unclonable functions generated through chemical methods for anti-counterfeiting,” Nature Reviews Chemistry, vol. 1, no. 4, 2017. [0206] [9] Miguel R. Carro-Temboury, Riikka Arppe, Tom Vosch, and Thomas Just Sørensen, “An optical authentication system based on imaging of excitation-selected lanthanide luminescence,” Science Advances, vol. 4, no. 1, 2018. 
[0207] [10] Ali Valehi, Abolfazl Razi, Bertrand Cambou, Weijie Yu, and Michael Kozicki, “A graph matching algorithm for user authentication in data networks using image-based physical unclonable functions,” in 2017 Computing Conference, 2017, pp. 863-870. [0208] [11] Sviatoslav Voloshynovskiy, Maurits Diephuis, and Taras Holotyak, “Mobile visual object identifcation: from SIFT-BoF-RANSAC to SketchPrint,” in Proceedings of SPIE Photonics West, Electronic Imaging, Media Forensics and Security V, San Francisco, USA, Jan. 13, 2015. [0209] [12] Maurits Diephuis, Micro-structure based physical object identification on mobile platforms, Ph.D. thesis, University of Geneva, 2017. [0210] [13] F. Beekhof, S. Voloshynovskiy, and F. Farhadzadeh, “Content authentication and identification under informed attacks,” in Proceedings of IEEE International Workshop on Information Forensics and Security, Tenerife, Spain, Dec. 2-5 2012. [0211] [14] Maurits Diephuis, “A framework for robust forensic image identification,” M.S. thesis, University of Twente, 2010. [0212] [15] Maurits Diephuis, Svyatoslav Voloshynovskiy, Taras Holotyak, Nabil Stendardo, and Bruno Keel, “A framework for fast and secure packaging identification on mobile phones,” in Media Watermarking, Security, and Forensics 2014, Adnan M. Alattar, Nasir D. Memon, and Chad D. Heitzenrater, Eds. International Society for Optics and Photonics, 2014, vol. 9028, pp. 296-305, SPIE. [0213] [16] Justin Picard, “Digital authentication with copy-detection patterns,” in Optical Security and Counterfeit Deterrence Techniques V, Rudolf L. van Renesse, Ed. International Society for Optics and Photonics, 2004, vol. 5310, pp. 176-183, SPIE. [0214] [17] Justin Picard and Paul Landry, “Two dimensional barcode and method of authentication of such barcode,” Patent, 14 Mar. 2017, U.S. Pat. No. 9,594,993. 
[0215] [18] Iuliia Tkachenko and Christophe Destruel, “Exploitation of redundancy for pattern estimation of copy-sensitive two level QR code,” in 2018 IEEE International Workshop on Information Forensics and Security (WIFS), 2018, pp. 1-6. [0216] [19] Ken ichi Sakina, Youichi Azuma, and Kishi Hideaki, “Two-dimensional code authenticating device, two-dimensional code generating device, two-dimensional code authenticating method, and program,” Patent, 6 Sep. 2016, U.S. Pat. No. 9,436,852. [0217] [20] Zbigniew Sagan, Justin Picard, Alain Foucou, and Jean-Pierre Massicot, “Method and device superimposing two marks for securing documents against forgery with,” Patent, 27 May 2014, U.S. Pat. No. 8,736,910. [0218] [21] Svyatoslav Voloshynovskiy and Maurits Diephuis, “Method for object recognition and/or verification on portable devices,” Patent, 7 Oct. 2018, U.S. Pat. No. 10,019,646. [0219] [22] Thomas Dewaele, Maurits Diephuis, Taras Holotyak, and Sviatoslav Voloshynovskiy, “Forensic authentication of banknotes on mobile phones,” in Proceedings of SPIE Photonics West, Electronic Imaging, Media Forensics and Security V, San Francisco, USA, Jan. 14-18, 2016. [0220] [23] Volker Lohweg, Jan Leif Homann, Helene Drksen, Roland Hildebrand, Eugen Gillich, Jrg Hofmann, and Johannes Georg Schaede, “Authentication of security documents and mobile device to carry out the authentication,” Patent, 17 Apr. 2018, U.S. Pat. No. 9,947,163. [0221] [24] Sergej Toedtli, Sascha Toedtli, and Yohan Thibault, “Method and apparatus for proving an authentication of an original item and method and apparatus for determining an authentication status of a suspect item,” Patent, 24 Jan. 2017, U.S. Pat. No. 9,552,543. [0222] [25] Guy Adams, Stephen Pollard, and Steven Simske, “A study of the interaction of paper substrates on printed forensic imaging,” in Proceedings of the 11th ACM Symposium on Document Engineering, New York, N.Y., USA, 2011, DocEng '11, p. 263266, Association for Computing Machinery. 
[0223] [26] Stephen B. Pollard, Steven J. Simske, and Guy B. Adams, “Model based print signature profile extraction for forensic analysis of individual text glyphs,” in 2010 IEEE International Workshop on Information Forensics and Security, 2010, pp. 1-6. [0224] [27] Yanling Ju, Dhruv Saxena, Tamar Kashti, Dror Kella, Doron Shaked, Mani Fischer, Robert Ulichney, and Jan P. Allebach, “Modeling large-area influence in digital halftoning for electrophotographic printers,” in Color Imaging XVII. Displaying, Processing, Hardcopy, and Applications, Reiner Eschbach, Gabriel G. Marcu, and Alessandro Rizzi, Eds. International Society for Optics and Photonics, 2012, vol. 8292, pp. 259-267, SPIE. [0225] [28] Renato Villan, Sviatoslav Voloshynovskiy, Oleksiy Koval, and Thierry Pun, “Multilevel 2-d bar codes: Toward high-capacity storage modules for multimedia security and management,” IEEE Transactions on Information Forensics and Security, vol. 1, no. 4, pp. 405-420, 2006. [0226] [29] Thomas M Cover and Joy A Thomas, Elements of information theory, John Wiley & Sons, 2012. [0227] [30] Olga Taran, Slavi Bonev, and Slava Voloshynovskiy, “Clonability of anti-counterfeiting printable graphical codes: a machine learning approach,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, United Kingdom, May 2019. [0228] [31] Rohit Yadav, Iuliia Tkachenko, Alain Trémeau, and Thierry Fournel, “Estimation of copy-sensitive codes using a neural approach,” in 7th ACM Workshop on Information Hiding and Multimedia Security, Paris, France, July 2019. [0229] [32] Burt Perry, Scott Carr, and Phil Patterson, “Digital watermarks as a security feature for identity documents,” in Optical Security and Counterfeit Deterrence Techniques III, Rudolf L. van Renesse and Willem A. Vliegenthart, Eds. International Society for Optics and Photonics, 2000, vol. 3973, pp. 80-87, SPIE. 
[0230] [33] Frederic Deguillaume, Sviatoslav Voloshynovskiy, and Thierry Pun, “Character and vector graphics watermark for structured electronic documents security,” Patent, 5 Jan. 2010, U.S. Pat. No. 7,644,281. [0231] [34] Pillai Praveen Thulasidharan and Madhu S. Nair, “QR code based blind digital image watermarking with attack detection code,” AEU-International Journal of Electronics and Communications, vol. 69, no. 7, pp. 1074-1084, 2015. [0232] [35] Guangmin Sun, Rui Wang, Shu Wang, Xiaomeng Wang, Dequn Zhao, and Andi Zhang, “High-definition digital color image watermark algorithm based on QR code and DWT,” in 2015 IEEE 10th Conference on Industrial Electronics and Applications (ICIEA), 2015, pp. 220-223. [0233] [36] Yang-Wai Chow, Willy Susilo, Joseph Tonien, and Wei Zong, “A QR code watermarking approach based on the DWT-DCT technique,” in Information Security and Privacy, Josef Pieprzyk and Suriadi Suriadi, Eds. 2017, pp. 314-331, Springer International Publishing. [0234] [37] Xiaofei Feng and Xingzhong Ji, “A blind watermarking method with strong robust based on 2d-barcode,” in 2009 International Conference on Information Technology and Computer Science, 2009, vol. 2, pp. 452-456. [0235] [38] Weijun Zhang and Xuetian Meng, “An improved digital watermarking technology based on QR code,” in 2015 4th International Conference on Computer Science and Network Technology (ICCSNT), 2015, vol. 01, pp. 1004-1007. [0236] [39] Sartid Vongpradhip and Suppat Rungraungsilp, “QR code using invisible watermarking in frequency domain,” in 2011 Ninth International Conference on ICT and Knowledge Engineering, 2012, pp. 47-52. [0237] [40] Li Li, Ruiling Wang, and Chinchen Chang, “A digital watermark algorithm for QR code,” International Journal of Intelligent Information Processing, vol. 2, no. 2, pp. 29-36, 2011. 
[0238] [41] Jantana Panyavaraporn, Paramate Horkaew, and Wannaree Wongtrairat, “QR code watermarking algorithm based on wavelet transform,” in 2013 13th International Symposium on Communications and Information Technologies (ISCIT), 2013, pp. 791-796. [0239] [42] Ming Sun, Jibo Si, and Shuhuai Zhang, “Research on embedding and extracting methods for digital watermarks applied to QR code images,” New Zealand Journal of Agricultural Research, vol. 50, no. 5, pp. 861-867, 2007. [0240] [43] Rongsheng Xie, Chaoqun Hong, Shunzhi Zhu, and Dapeng Tao, “Anti-counterfeiting digital watermarking algorithm for printed QR barcode,” Neurocomputing, vol. 167, pp. 625-635, 2015. [0241] [44] Pei-Yu Lin, Yi-Hui Chen, Eric Jui-Lin Lu, and Ping-Jung Chen, “Secret hiding mechanism using QR barcode,” in 2013 International Conference on Signal-Image Technology Internet-Based Systems, 2013, pp. 22-25. [0242] [45] Ari Moesriami Barmawi and Fazmah Arif Yulianto, “Watermarking QR code,” in 2015 2nd International Conference on Information Science and Security (ICISS), 2015, pp. 1-4. [0243] [46] Thach V. Bui, Nguyen K. Vu, Thong T.P. Nguyen, Isao Echizen, and Thuc D. Nguyen, “Robust message hiding for QR code,” in 2014 Tenth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, 2014, pp. 520-523. [0244] [47] Iuliia Tkachenko, William Puech, Christophe Destruel, Olivier Strauss, Jean-Marc Gaudin, and Christian Guichard, “Two-level QR code for private message sharing and document authentication,” IEEE Transactions on Information Forensics and Security, vol. 11, no. 3, pp. 571-583, 2016. [0245] [48] Iu. Tkachenko, W. Puech, O. Strauss, C. Destruel, and J.-M. Gaudin, “Printed document authentication using two level or code,” in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016, pp. 2149-2153. 
[0246] [49] Yuqiao Cheng, Zhengxin Fu, Bin Yu, and Gang Shen, “A new two-level QR code with visual cryptography scheme,” Multimedia Tools and Applications, vol. 77, no. 16, pp. 20629-20649, 2018. [0247] [50] H. Phuong Nguyen, Agnès Delahaies, Florent Retraint, D. Huy Nguyen, Marc Pic, and Frédéric Morain-Nicolier, “A watermarking technique to secure printed QR codes using a statistical test,” in 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2017, pp. 288-292. [0248] [51] Hoai Phuong Nguyen, Florent Retraint, Frédéric Morain-Nicolier, and Agnès Delahaies, “A watermarking technique to secure printed matrix barcode application for anti-counterfeit packaging,” IEEE Access, vol. 7, pp. 131839-131850, 2019. [0249] [52] Tailing Yuan, Yili Wang, Kun Xu, Ralph R. Martin, and Shi-Min Hu, “Two-layer QR codes,” IEEE Transactions on Image Processing, vol. 28, no. 9, pp. 4413-4428, 2019. [0250] [53] Martin Kutter, Sviatoslav V. Voloshynovskiy, and Alexander Herrigel, “Watermark copy attack,” in Security and Watermarking of Multimedia Contents II, Ping Wah Wong and Edward J. Delp III, Eds. International Society for Optics and Photonics, 2000, vol. 3971, pp. 371-380, SPIE. [0251] [54] Frederic Jordan, Martin Kutter, and Nicolas Rudaz, “Method to apply an invisible mark on a media,” Patent, 21 Jun. 2011, U.S. Pat. No. 7,965,862. [0252] [55] Alastair Reed, Tomáš Filler, Kristyn Falkenstern, and Yang Bai, “Watermarking spot colors in packaging,” in Media Watermarking, Security, and Forensics 2015, March 2015, vol. 9409 of Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, p. 940906. [0253] [56] Svyatoslav Voloshynovskiy, “Method for active content fingerprinting,” Patent, 17 Oct. 2017, U.S. Pat. No. 9,794,067. [0254] [57] Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel, “Adversarial attacks on neural network policies,” 2017. 
[0255] [58] Nicholas Carlini and David Wagner, “Towards evaluating the robustness of neural networks,” in 2017 IEEE Symposium on Security and Privacy (SP), 2017, pp. 39-57.
[0256] [59] Anh Thu Phan Ho, Bao An Hoang Mai, Wadih Sawaya, and Patrick Bas, “Document authentication using graphical codes: impacts of the channel model,” in ACM Workshop on Information Hiding and Multimedia Security, Montpellier, France, June 2013.
[0257] [60] Christopher M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics), Springer-Verlag, Berlin, Heidelberg, 2006.
[0258] [61] Kevin P. Murphy, Machine Learning: A Probabilistic Perspective, MIT Press, Cambridge, Mass., 2013.
[0259] [62] Ian J. Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, MIT Press, Cambridge, Mass., USA, 2016, http://www.deeplearningbook.org.
[0260] [63] Olga Taran, Slavi Bonev, Taras Holotyak, and Slava Voloshynovskiy, “Adversarial detection of counterfeited printable graphical codes: towards ‘adversarial games’ in physical world,” in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 2812-2816.
[0261] [64] Ashlesh Sharma, Lakshminarayanan Subramanian, and Vidyuth Srinivasan, “Authenticating physical objects using machine learning from microscopic variations,” U.S. patent application Ser. No. 15/302,866, 2 Feb. 2017.
[0262] [65] Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio, “Adversarial examples in the physical world,” CoRR, vol. abs/1607.02533, 2016.
[0263] [66] Neil A. Macmillan and C. Douglas Creelman, Detection Theory: A User's Guide, Lawrence Erlbaum Associates, Mahwah, N.J., London, 2005.
[0264] [67] H. Vincent Poor, An Introduction to Signal Detection and Estimation, Springer-Verlag, Berlin, Heidelberg, 2013.
[0265] [68] Diederik P. Kingma and Max Welling, “Auto-encoding variational Bayes,” in 2nd International Conference on Learning Representations (ICLR 2014), Banff, AB, Canada, Apr. 14-16, 2014, Conference Track Proceedings, 2014.
[0266] [69] Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian J. Goodfellow, “Adversarial autoencoders,” CoRR, vol. abs/1511.05644, 2015.
[0267] [70] Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel Müller, and Marius Kloft, “Deep one-class classification,” in Proceedings of the 35th International Conference on Machine Learning, Jennifer Dy and Andreas Krause, Eds., 10-15 Jul. 2018, vol. 80 of Proceedings of Machine Learning Research, pp. 4393-4402, PMLR.
[0268] [71] Mohammad Sabokrou, Mohammad Khalooei, Mahmood Fathy, and Ehsan Adeli, “Adversarially learned one-class classifier for novelty detection,” in CVPR 2018, pp. 3379-3388, IEEE Computer Society.
[0269] [72] Mohammadreza Salehi, Ainaz Eftekhar, Niousha Sadjadi, Mohammad Hossein Rohban, and Hamid R. Rabiee, “Puzzle-AE: novelty detection in images through solving puzzles,” CoRR, vol. abs/2008.12959, 2020.
[0270] [73] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778.
[0271] [74] Olaf Ronneberger, Philipp Fischer, and Thomas Brox, “U-Net: convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), May 2015.
[0272] [75] Paul Bergmann, Kilian Batzner, Michael Fauser, David Sattlegger, and Carsten Steger, “The MVTec anomaly detection dataset: a comprehensive real-world dataset for unsupervised anomaly detection,” International Journal of Computer Vision, vol. 129, no. 4, pp. 1038-1059, 2021.
[0273] [76] Nobuyuki Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.
[0274] [77] Yunqiang Chen, Xiang Sean Zhou, and T. S. Huang, “One-class SVM for learning in image retrieval,” in Proceedings 2001 International Conference on Image Processing, 2001, vol. 1, pp. 34-37.
[0275] [78] Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm, “Mutual information neural estimation,” in Proceedings of the 35th International Conference on Machine Learning, Jennifer Dy and Andreas Krause, Eds., 10-15 Jul. 2018, vol. 80 of Proceedings of Machine Learning Research, pp. 531-540, PMLR.
[0276] [79] Slava Voloshynovskiy, Mouad Kondah, Shideh Rezaeifar, Olga Taran, Taras Holotyak, and Danilo Jimenez Rezende, “Information bottleneck through variational glasses,” in NeurIPS Workshop on Bayesian Deep Learning, Vancouver, Canada, December 2019.
[0277] [80] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, “Generative adversarial nets,” in Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS'14), Volume 2, Cambridge, Mass., USA, 2014, pp. 2672-2680, MIT Press.
[0278] [81] Masashi Sugiyama, Taiji Suzuki, and Takafumi Kanamori, Density Ratio Estimation in Machine Learning, Cambridge University Press, USA, 1st edition, 2012.
[0279] [82] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka, “f-GAN: training generative neural samplers using variational divergence minimization,” 2016.
[0280] [83] Masashi Sugiyama, Taiji Suzuki, and Takafumi Kanamori, “Density-ratio matching under the Bregman divergence: a unified framework of density-ratio estimation,” Annals of the Institute of Statistical Mathematics, vol. 64, no. 5, pp. 1009-1044, October 2012.
[0281] [84] Martin Arjovsky, Soumith Chintala, and Léon Bottou, “Wasserstein generative adversarial networks,” 2017, arXiv:1701.07875.
[0282] [85] Kacper Chwialkowski, Aaditya Ramdas, Dino Sejdinovic, and Arthur Gretton, “Fast two-sample testing with analytic representations of probability measures,” in Proceedings of the 28th International Conference on Neural Information Processing Systems (NIPS'15), Volume 2, Cambridge, Mass., USA, 2015, pp. 1981-1989, MIT Press.
[0283] [86] Kilian Q. Weinberger, John Blitzer, and Lawrence Saul, “Distance metric learning for large margin nearest neighbor classification,” in Advances in Neural Information Processing Systems, Y. Weiss, B. Schölkopf, and J. Platt, Eds., 2006, vol. 18, MIT Press.
[0284] [87] Kihyuk Sohn, “Improved deep metric learning with multi-class N-pair loss objective,” in Advances in Neural Information Processing Systems, D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, Eds., 2016, vol. 29, Curran Associates, Inc.
[0285] [88] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton, “A simple framework for contrastive learning of visual representations,” in Proceedings of the 37th International Conference on Machine Learning, Hal Daumé III and Aarti Singh, Eds., 13-18 Jul. 2020, vol. 119 of Proceedings of Machine Learning Research, pp. 1597-1607, PMLR.
[0286] [89] Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan, “Supervised contrastive learning,” in Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, Eds., 2020, vol. 33, pp. 18661-18673, Curran Associates, Inc.
[0287] [90] Alessandro Foi, Mejdi Trimeche, Vladimir Katkovnik, and Karen Egiazarian, “Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data,” IEEE Transactions on Image Processing, vol. 17, no. 10, pp. 1737-1754, 2008.
[0288] [91] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola, “A kernel two-sample test,” Journal of Machine Learning Research, vol. 13, pp. 723-773, March 2012.
[0289] [92] George Papamakarios, Eric T. Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan, “Normalizing flows for probabilistic modeling and inference,” CoRR, vol. abs/1912.02762, 2019.
[0290] [93] Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios, “Neural spline flows,” in Advances in Neural Information Processing Systems, 2019, vol. 32, Curran Associates, Inc.