Method and device for contactless biometrics identification
09734165 · 2017-08-15
Assignee
Inventors
Cpc classification
G06V40/1359
PHYSICS
G06V10/60
PHYSICS
G06V40/1376
PHYSICS
International classification
Abstract
The present invention provides a new method and a device for contactless human identification using biometrics images. The present invention develops a robust feature extraction algorithm to recover three-dimensional (3D) shape information from biometrics images. Further, it provides significantly improved performance over state-of-the-art methods, adding practicality for real applications on mobile platforms and smartphones, and as an add-on system for conventional fingerprint systems. The present invention's unique advantages stem from its computational simplicity, efficient matching, and minimal storage requirements. Experiments were conducted to confirm very high accuracy and reliability on a number of biometric modalities including iris, palmprint, and finger knuckle images.
Claims
1. A computer implemented method for performing biometrics identification, comprising: acquiring one or more biometrics images of a person; extracting one or more features using a specially designed filter for recovering three-dimensional (3D) shape information from the biometric images to generate an acquired original feature template; denoising the acquired original feature template to generate an acquired denoised feature template; and generating a consolidated match score using a combination of a first weighted distance between the acquired original feature template and a stored original feature template, and a second weighted distance between the acquired denoised feature template and a stored denoised feature template; wherein the stored original feature template is generated from a previously acquired one or more biometrics images of a person and stored in a registration database; and wherein the stored denoised feature template is generated from denoising the stored original feature template and stored in the registration database.
2. The method according to claim 1, wherein the biometric images are aligned to locate one or more common regions using spatial domain techniques or spectral domain techniques.
3. The method according to claim 2, wherein the spatial domain techniques include correlation; and wherein the spectral domain techniques include 2D FFT.
4. The method according to claim 1, wherein the biometric modality in the biometric images represents two-dimensional (2D) information, including iris, palmprint, face, and finger knuckle, or 3D information.
5. The method according to claim 1, wherein the consolidated match score is generated using a dynamic combination of a first distance between the acquired original feature template and a stored original feature template, and a second distance between the acquired denoised feature template and a stored denoised feature template.
6. The method according to claim 1, where a mobile phone is used to acquire the biometric images under multiple illuminations using one or more illumination sources including natural or ambient light source, LED, and camera flash.
7. The method according to claim 1, wherein the acquisition of the biometrics images of a person comprising automatically and simultaneously acquiring hand dorsal images containing one or more finger knuckle patterns, and one or more fingerprint images.
8. The method according to claim 1, wherein the acquisition of the biometrics images comprising using a slap-fingerprint system to acquire one or more fingerprint images along with the acquisition of hand dorsal images.
9. The method according to claim 8, wherein a first consolidated match score corresponding to the acquired fingerprint images and a second consolidated match score corresponding to the acquired hand dorsal images are combined to more reliably identify the person.
10. A device for biometrics identification, comprising: a first means to automatically and simultaneously acquire hand dorsal images containing one or more finger knuckle patterns and one or more fingerprint images; and a computing device to identify a person using the method according to claim 1.
11. A system for biometrics identification, comprising: a slap-fingerprint system for acquiring one or more fingerprint images; and a computing device for acquiring one or more hand dorsal images and identifying a person using the method according to claim 1 with the input of the fingerprint images and the hand dorsal images; wherein the fingerprint images and the hand dorsal images are acquired simultaneously.
12. The system according to claim 11, wherein a first consolidated match score corresponding to the acquired fingerprint images and a second consolidated match score corresponding to the acquired hand dorsal images are combined to more reliably identify the person.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Embodiments of the invention are described in more detail hereinafter with reference to the drawings.
DETAILED DESCRIPTION
(14) In the following description, methods, systems, and devices of contactless biometrics identification and the like are set forth as preferred examples. It will be apparent to those skilled in the art that modifications, including additions and/or substitutions may be made without departing from the scope and spirit of the invention. Specific details may be omitted so as not to obscure the invention; however, the disclosure is written to enable one skilled in the art to practice the teachings herein without undue experimentation.
(15) In accordance with one aspect of the present invention, the low-cost contactless biometrics matching system uses a low-cost imaging camera, which can also be integrated with existing slap-fingerprint devices used for law enforcement (immigration crossings, national ID cards, etc.). This system uses a unique feature recovery and matching algorithm, which recovers 3D shape information from contactless 2D images and efficiently encodes it using specially designed filters.
A. RELATIVE PIXELS MEASUREMENTS IN ORTHOGONAL DIRECTIONS
(16) The present invention explores relative measurements, referred to as ordinal measurements [1]-[2]. The Lambertian model is utilized to describe typical contactless imaging for the palmprint biometric modality. Using the Lambertian model, the pixel value I(i, t) in image I at point P.sub.i is jointly determined by the reflectance k.sub.d, the illumination intensity l.sub.d, the illumination direction vector L, and the point normal vector N.sub.i.
I(i,t)=k.sub.dl.sub.dLN.sub.i (1)
For objects like palmprints or human faces, whose imaged surface can be considered close to flat, a reasonable assumption for the illumination direction L is that the object is not illuminated from behind the surface (front lighting shown in
(17) In order to minimize or eliminate the adverse impact of illumination variations during contactless biometrics imaging, recovery of relatively stable features using ordinal measurements in the same direction as the surface normal, or opposite to it, is considered. In other words, if the feature represents a direction approximate to the direction of the image plane normal, the feature is encoded with the value 1, otherwise 0. Under the Cartesian coordinate representation shown in the figure, the direction of vector {right arrow over (Z)} is the same as the image plane normal n. Therefore, the z-component of the point normal vector N.sub.i is chosen to represent the feature.
(18) The point normal is further defined as N.sub.i(x.sub.i,y.sub.i,z.sub.i) s.t. x.sub.i.sup.2+y.sub.i.sup.2+z.sub.i.sup.2=1. According to the above analysis, the feature F.sub.i at point P.sub.i is encoded as follows:
F.sub.i=τ(Z.sub.i,1−Z.sub.i,2) (2)
where τ(α) is the encoding function and is defined as
(19)
τ(α)=1 if α≥0; otherwise τ(α)=0 (3)
Z.sub.i,j aggregates the z components of the point normals N.sub.n over the subset S.sub.i,j:
Z.sub.i,j=Σ.sub.P.sub.nεS.sub.i,jz.sub.n
where S.sub.i,j is the j.sup.th subset of the neighboring points of P.sub.i.
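As a small illustration of the above encoding, the threshold-at-zero form of Equations (2) and (3) can be sketched in NumPy; the function names and the toy z-values below are illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def tau(alpha):
    # Encoding function of Equation (3): 1 when the argument is
    # non-negative, 0 otherwise (threshold-at-zero assumption).
    return np.where(np.asarray(alpha) >= 0, 1, 0)

def ordinal_feature(z1, z2):
    # Equation (2): F_i = tau(Z_i,1 - Z_i,2), comparing the aggregated
    # z-components of two neighboring subsets of points.
    return tau(z1 - z2)

# Toy aggregated z-components for two neighboring subsets
print(int(ordinal_feature(0.8, 0.3)))  # 1: first subset faces the camera more
print(int(ordinal_feature(0.2, 0.5)))  # 0
```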
B. 3D BIOMETRIC SHAPE FROM TEXTURE DETAILS
(20) This section systematically describes how the 3D shape information (the z-component described above) is recovered in the present invention from the texture-like details observed in the 2D images: firstly, the Lambertian surface assumption is used to build a 2D imaging model for the shape recovery analysis; then a reliability constraint is developed to ascertain the reliability of recovering such 3D shape information; finally, the developed feature extraction method is described.
C. RELATIVE PIXEL MEASUREMENT WITH LAMBERTIAN MODEL
(21) Further to the analysis in Section A, the feature to be recovered is a kind of relative or ordinal measurement and is defined as follows. Suppose there are two images I(t.sub.1) and I(t.sub.2) acquired from the same object under different illuminations L(t.sub.1) and L(t.sub.2), respectively, at times t.sub.1 and t.sub.2. R.sub.1 and R.sub.2 are two small regions on the subject; the ordinal measurement between R.sub.1 and R.sub.2 in image I(t.sub.k) can be defined as
OM.sub.k(R.sub.1,R.sub.2)=τ(S(R.sub.1,t.sub.k)−S(R.sub.2,t.sub.k)) (5)
where S(R.sub.j, t.sub.k) is some arithmetic function of the pixel values in region R.sub.j on image I(t.sub.k) and can also be manually defined. In previous work related to ordinal measures for recognition [7]-[8], S(R.sub.j, t.sub.k) is defined as the weighted summation function
S(R.sub.j,t.sub.k)=Σ.sub.P.sub.iεR.sub.jW.sub.i,jI(i,t.sub.k) (6)
If OM.sub.1(R.sub.1, R.sub.2) is always equal to OM.sub.2(R.sub.1, R.sub.2), the impact of illumination will be eliminated. When R.sub.1 and R.sub.2 are small and close, it is assumed that k.sub.d and l.sub.d are unchanged and can be treated as constants. Combining with Equation (1), OM.sub.k(R.sub.1, R.sub.2) can be rewritten as
OM.sub.k(R.sub.1,R.sub.2)=τ(L(t.sub.k)δ(R.sub.1,R.sub.2)). (7)
where
δ(R.sub.1,R.sub.2)=Σ.sub.P.sub.iεR.sub.1W.sub.i,1N.sub.i−Σ.sub.P.sub.iεR.sub.2W.sub.i,2N.sub.i (8)
The results from the ordinal measurement OM.sub.k(R.sub.1, R.sub.2) are determined by the angle between the illumination vector L(t.sub.k) and δ(R.sub.1, R.sub.2). In the present invention all the weights are fixed as one for two key reasons: 1. L(t.sub.k) in (7) is the illumination direction vector and can be arbitrary; therefore different weights for different points do not help to robustly recover the ordinal measure (7), while making the analysis more difficult. 2. The weights in essence apply to the point normals; all point normals should be treated equally, and it is not meaningful to give different point normals different weights.
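With all weights fixed to one as argued above, the ordinal measurement of Equation (5) reduces to comparing plain region sums. A minimal NumPy sketch follows; the region masks, the toy image, and the function names are illustrative assumptions.

```python
import numpy as np

def region_sum(image, mask):
    # S(R, t_k) with every weight W_i fixed to one:
    # the plain sum of pixel values inside region R.
    return float(image[mask].sum())

def ordinal_measure(image, mask1, mask2):
    # OM_k(R1, R2) = tau(S(R1) - S(R2)), encoded as 1/0 by sign.
    return int(region_sum(image, mask1) >= region_sum(image, mask2))

# Toy 2x2 image with two single-pixel regions
img = np.array([[1.0, 2.0],
                [3.0, 4.0]])
m1 = np.array([[True, False], [False, False]])   # R1 covers the pixel with value 1
m2 = np.array([[False, False], [False, True]])   # R2 covers the pixel with value 4
print(ordinal_measure(img, m1, m2))  # 0, since 1 < 4
```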
D. RELIABILITY CONSTRAINT
(22) Define δ(R.sub.1,R.sub.2)=(ΔX,ΔY,ΔZ), where
ΔX=Σ.sub.P.sub.iεR.sub.1x.sub.i−Σ.sub.P.sub.iεR.sub.2x.sub.i
ΔY=Σ.sub.P.sub.iεR.sub.1y.sub.i−Σ.sub.P.sub.iεR.sub.2y.sub.i
ΔZ=Σ.sub.P.sub.iεR.sub.1z.sub.i−Σ.sub.P.sub.iεR.sub.2z.sub.i (9)
Let L(t.sub.k)=(a.sub.k, b.sub.k, c.sub.k), then
τ(L(t.sub.k)δ(R.sub.1,R.sub.2))=τ(ΔXa.sub.k+ΔYb.sub.k+ΔZc.sub.k) (10)
According to the assumption for L(t.sub.k), under the frontal illumination from the coordinates shown in
τ(S(R.sub.1,t.sub.k)−S(R.sub.2,t.sub.k))=τ(ΔXa.sub.k+ΔYb.sub.k+ΔZc.sub.k)=τ(ΔZc.sub.k)=τ(ΔZ) (11)
where ΔZ is the same as (Z.sub.i,1−Z.sub.i,2) in Equation 2. It may be noted that S(R.sub.1, t.sub.k)−S(R.sub.2, t.sub.k) effectively represents a texture-level computation, while ΔZ recovers the 3D shape information in the regions R.sub.1 and R.sub.2. Therefore it can reasonably be said that when the constraint in (11) is satisfied, the available texture-level information can be used to recover the 3D shape information.
(23) Without loss of generality, for the case when |ΔXa.sub.k|>|ΔYb.sub.k|, the constraint can be rewritten as
(24)
The experimental results on a database with significant illumination changes, as presented in Section G, suggest that the present inventive feature of recovering the shape information is more robust to illumination changes.
E. FEATURE EXTRACTION
(25) The feature extraction approach is computationally simple. The spatial region corresponding to the given filter is divided into two subsets, say R.sub.1 and R.sub.2, in an M×N spatial filter. All the entries in region R.sub.1 are set to 1 and the entries in region R.sub.2 are set to −1. The original image I is convolved with this filter and the filtered image is encoded according to the sign of the ordinal measurement using Equation 3. Formally, the feature matrix or template from image I is generated from the feature code F(i, j) at position (i, j) as follows:
F(i,j)=τ(f*I(i,j)) (13)
f is the filter, * is the convolution operator, and I(i, j) is the pixel value at point (i, j) on image I. The division of the filter into regions R.sub.1 and R.sub.2 can vary to accommodate the object and/or imaging characteristics. This invention develops such a division strategy for common biometrics, such as palmprint, in Section F.
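Equation (13) amounts to a single convolution followed by sign binarization. A self-contained sketch is given below; the naive convolution loop and the example difference filter are illustrative assumptions (a real implementation would use an optimized convolution routine).

```python
import numpy as np

def convolve_valid(image, filt):
    # Naive 2D convolution over the 'valid' region only; the kernel is
    # flipped as required by the convolution definition.
    fh, fw = filt.shape
    k = filt[::-1, ::-1]
    H = image.shape[0] - fh + 1
    W = image.shape[1] - fw + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = (image[i:i + fh, j:j + fw] * k).sum()
    return out

def extract_template(image, filt):
    # Equation (13): F(i, j) = tau(f * I(i, j)), binarized by sign.
    return (convolve_valid(image, filt) >= 0).astype(np.uint8)

# Toy example: a 1x2 difference filter on a flat image gives all-zero
# responses, which encode to 1 under the threshold-at-zero convention.
template = extract_template(np.ones((4, 4)), np.array([[1.0, -1.0]]))
print(template.shape)  # (4, 3)
```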
F. PALMPRINT VERIFICATION
(26) This section focuses on the contactless palmprint verification problem. First, the geometrical properties of typical palmprint data are introduced. Then the strategy for dividing the filter regions is discussed. The computation of the distance between two feature matrices is also detailed in this section.
(27) F.1 The Characteristics of Palmprint Data
Human palm surface is a complex 3D textured surface consisting of many ridges, wrinkles, and lines [5]. The spatial extents of these key elements differ from each other. Generally, ridges represent the smallest element, wrinkles are larger than ridges, and lines are the most remarkable features. All these key elements share a valley-like shape. The valley typically represents a symmetrical shape whose symmetry axis is the valley bottom.
(29) A 3D palm surface can be seen as a combination of several such symmetrical units. If the point normals N.sub.i which are largely symmetric to each other are combined, the azimuth components will be cancelled out or eliminated and a dominant orthonormal vector is generated, i.e., the summed azimuth components Σ.sub.P.sub.ix.sub.i and Σ.sub.P.sub.iy.sub.i approach zero.
(30) Considering the inequality.sup.3 in (12), it can be rewritten as
(31)
where X.sub.1−X.sub.2=ΔX and Z.sub.1−Z.sub.2=ΔZ. .sup.3For other kinds of object surfaces, such as the face surface, the constraint remains unchanged while the filter division (R.sub.1, R.sub.2) should be carefully designed.
(32) According to the above analysis, for most of the patches.sup.4 on the palm surface, it is reasonable to write that X.sub.1, X.sub.2≈0, and therefore ΔX≈0. The analysis suggests that ΔZ is expected to be the effective addition of several positive values. The additive result is mainly determined by the size and length of the valley. Such an additive result will however be irregular, since the distribution of valleys is irregular and Z.sub.1, Z.sub.2 are not expected to be the same. Therefore, it is assumed that ΔZ>φ (φ is a positive constant). In summary, for the patches on palmprint,
(33)
are largely expected to be zero and the inequality (14) is always satisfied. .sup.4The rectangular region on the palmprint image is referred to as a patch (
(34) F.2 Filter Designing
(35) Considering the fact that the binarized feature template is generated from noisy (contactless imaging) images, the operator or filter should be designed in such a way as to even out the positive and negative filtered results from multiple pixels. This implies that the sum of all the entries in the filter should be zero, and the spatial distribution of 1 and −1 should be symmetric and orderly.
(36) Increasing the number of crossings in the partitions is expected to help suppress photometric distortions by balancing the filtered results from the positive and negative values. However, too many crossings will make the filter rotationally sensitive and introduce asymmetry by reducing the symmetrical property of the corresponding small patches on the palmprint, which will make the assumption “X.sub.1,X.sub.2≈0” unreliable or invalid. Besides, too many crossings will also make ΔZ.fwdarw.0.
(37) It may be noted that the symmetrical units (such as ridges, wrinkles and lines) representing dominant palmprint features are expected to have some width. The direction of intersection boundary (center or white line of the filter configurations shown in
(38) In accordance to one embodiment of the present invention, the second configuration (
(39)
where i, j is the index, i,jε[−B,B]. The filter size is (2B+1)×(2B+1).
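The (2B+1)×(2B+1) filter of this section can be built as two ±1 regions whose entries sum to zero. The exact division used by the invention follows a figure not reproduced here, so the straight vertical split below, with a zero center column standing in for the "white line", is only an illustrative assumption.

```python
import numpy as np

def make_palmprint_filter(B):
    # (2B+1) x (2B+1) filter: left half +1 (region R1), right half -1
    # (region R2), and a zero center column standing in for the white
    # line, so that all entries sum to zero as required in Section F.2.
    size = 2 * B + 1
    f = np.zeros((size, size), dtype=np.int8)
    f[:, :B] = 1       # region R1
    f[:, B + 1:] = -1  # region R2
    return f

f = make_palmprint_filter(3)   # a 7x7 filter
print(f.shape, int(f.sum()))   # (7, 7) 0
```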
(40) F.3 Template Denoising and Matching
(41) In the feature extraction stage, convolution is applied between the filter and the acquired 2D image. The analysis in previous sections suggests that for some patches the constraint formulated in (18) may not be satisfied, for two key reasons: (i) some of the patches can still have Z.sub.1≈Z.sub.2, and (ii) when the valley falls exactly on the boundary or edges of the filter, the filter response from the small patches (i.e. the small region as shown in
(42) The noisy perturbations due to the limitations of the feature extractor are expected to be discretely distributed in the feature templates. In order to alleviate the unreliable codes caused by such noise, morphological operations, i.e. opening operation and closing operation, are performed since the proposed feature is binary and has spatial continuity. It is noted that this step does not increase the template size.
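The opening and closing steps can be sketched with elementary 3×3 binary morphology; the hand-rolled erode/dilate helpers below, and the 3×3 structuring element, are illustrative assumptions standing in for a library routine.

```python
import numpy as np

def _erode(F):
    # 3x3 binary erosion: a pixel stays 1 only if its whole 3x3
    # neighborhood is 1 (the border is padded with 1s).
    P = np.pad(F, 1, constant_values=1)
    out = np.ones_like(F)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out &= P[1 + di:1 + di + F.shape[0], 1 + dj:1 + dj + F.shape[1]]
    return out

def _dilate(F):
    # 3x3 binary dilation: a pixel becomes 1 if any neighbor is 1.
    P = np.pad(F, 1, constant_values=0)
    out = np.zeros_like(F)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out |= P[1 + di:1 + di + F.shape[0], 1 + dj:1 + dj + F.shape[1]]
    return out

def denoise_template(F):
    # Closing (dilate then erode) fills small 0-holes; opening (erode
    # then dilate) removes isolated 1-specks. Template size is unchanged.
    F = F.astype(np.uint8)
    return _erode(_dilate(F)), _dilate(_erode(F))

speck = np.zeros((5, 5), dtype=np.uint8)
speck[2, 2] = 1
closed, opened = denoise_template(speck)
print(int(opened.sum()))  # 0: the isolated speck is removed by opening
```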
(43) The final matching distance is computed by the weighted combination of three distances,
Distance(F.sub.T,F.sub.S)=w.sub.1Dis(F.sub.T,F.sub.S)+w.sub.2Dis({tilde over (F)}.sub.T,{tilde over (F)}.sub.S)+w.sub.3Dis({circumflex over (F)}.sub.T,{circumflex over (F)}.sub.S) (16)
where F.sub.T and F.sub.S are the original feature matrices of T and S, and {tilde over (F)} and {circumflex over (F)} are the results after applying the closing and opening operations on feature matrix F. Dis(A, B) is defined as:
(44)
where {circle around (x)} and & are the XOR and AND operations, Λ(A) computes the number of non-zero values in matrix A, and M(·) is the mask matrix indicating the valid region on the palmprint image and is defined as:
(45)
I.sub.P(i,j) is the pixel value of image P at position (i,j), and w.sub.1, w.sub.2, w.sub.3 are the weights. In all the experiments, the weights are set as w.sub.1:w.sub.2:w.sub.3=3:1:1 to consolidate the contributions from the respective components. This implies that the contribution from the noise is expected to be relatively smaller.sup.5 than that from the details. Horizontal and vertical translations are also incorporated during the matching to improve the alignment between the images/templates. .sup.5The experimental results without using the denoising strategy also validate this argument.
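Under the definitions above, the matching of Equation (16) can be sketched as a masked Hamming distance combined with 3:1:1 weights. The body of Dis(A, B) below is a reconstruction from the stated operators (XOR, AND, the mask M and the counter Λ) and should be read as an assumption, as should the function names.

```python
import numpy as np

def dis(A, B, MA, MB):
    # Reconstructed Dis(A, B): fraction of disagreeing bits (XOR) inside
    # the jointly valid region (AND of the two masks), normalized by the
    # number of valid positions.
    valid = MA & MB
    n = int(valid.sum())
    return float((np.logical_xor(A, B) & valid).sum()) / n if n else 1.0

def distance(FT, FS, FTt, FSt, FTh, FSh, MT, MS, w=(3, 1, 1)):
    # Equation (16): weighted combination over the original (F), closed
    # (F-tilde) and opened (F-hat) templates, weights 3:1:1.
    return (w[0] * dis(FT, FS, MT, MS)
            + w[1] * dis(FTt, FSt, MT, MS)
            + w[2] * dis(FTh, FSh, MT, MS))

M = np.ones((2, 2), dtype=bool)
A = np.array([[1, 0], [1, 1]], dtype=bool)
print(distance(A, A, A, A, A, A, M, M))  # 0.0 for identical templates
```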
G. EXPERIMENTAL VALIDATION AND RESULTS
(46) G.1 Experimental Results from the Developed Method
(47) In this section, four different publicly available palmprint databases are used to evaluate the identification and verification performance of the present invention. These four databases are: the IIT Delhi Touchless Palmprint Database (Version 1.0) provided by The Hong Kong Polytechnic University, Hong Kong; The Hong Kong Polytechnic University Contact-free 3D/2D Hand Images Database (Ver 1.0) provided by The Hong Kong Polytechnic University, Hong Kong; The Hong Kong Polytechnic University (PolyU) Palmprint Database provided by the Biometric Research Centre, The Hong Kong Polytechnic University; and the CASIA Palmprint Image Database provided by the Center for Biometrics and Security Research, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences. In order to facilitate fair comparisons with prior work, different protocols are employed on these palmprint databases. Three competing methods, namely competitive code [6], ordinal code [8], and robust line orientation code (RLOC) [4], are implemented for performance comparison.
(48) G.1.1 PolyU Contactless 2D/3D Palmprint Database
(49) This contactless palmprint database was acquired from 177 different subjects (right hand) and has 20 samples from each subject: 10 2D images and 10 depth images. It also provides segmented palmprint images of 128×128 pixels. The experiments were performed on the 2D part of this database. The 2D images were acquired under poor (ambient) illumination conditions and
(50) In the experiment, the first five samples of each subject, acquired during the first session, are enrolled as the training set, and the remaining five from the second session form the test set. There are 885 samples for training/gallery and 885 samples for the probe. The receiver operating characteristics (ROC), equal error rate (EER), and cumulative match characteristics (CMC) are used to evaluate the performance. Table 1 provides the EER and average rank-one recognition comparison with the three competing methods in the literature.
(51) TABLE 1. The EER and rank-one recognition accuracy from different methods on the PolyU 2D/3D Palmprint Database

Method         The present invention's   RLOC    CompCode   Ordinal Code
EER (%)        0.72                      1.7     0.82       1.2
Accuracy (%)   98.98                     97.97   98.77      98.47
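For reference, the EER reported here can be computed from genuine/imposter distance scores as the operating point where the false accept and false reject rates coincide. A minimal sketch follows; the threshold scan and the function name are illustrative implementation choices, not part of the patented method.

```python
import numpy as np

def equal_error_rate(genuine, imposter):
    # Scan candidate thresholds over all observed scores; at each, the
    # false accept rate (imposters at or below threshold) and the false
    # reject rate (genuines above threshold) are computed, and the EER
    # is taken at the threshold where the two rates are closest.
    genuine = np.asarray(genuine, dtype=float)
    imposter = np.asarray(imposter, dtype=float)
    thresholds = np.sort(np.concatenate([genuine, imposter]))
    far = np.array([(imposter <= t).mean() for t in thresholds])
    frr = np.array([(genuine > t).mean() for t in thresholds])
    i = int(np.argmin(np.abs(far - frr)))
    return (far[i] + frr[i]) / 2.0

# Perfectly separated toy scores give EER = 0
print(equal_error_rate([0.1, 0.2], [0.8, 0.9]))  # 0.0
```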
(52) G.1.2 IITD Palmprint Database
(53) The IITD touchless palmprint database provides contactless palmprint images from the right and left hands of 230 subjects. There are more than five samples for right hand or left hand images in this database, which also provides 150×150 pixels segmented palmprint images. All the 1,376 right hand palmprint images are employed for the experiments.
(55) In the experiment, for each subject, one image is employed for the test and the rest of the images are employed for training, and the average performance is computed. The ROC, EER, and CMC are used to ascertain the performance. Table 2 presents the EER and average rank-one recognition accuracy on this database using different methods.
(56) TABLE 2. The EER and average rank-one recognition accuracy from different methods on the IITD Palmprint Database

Method         The present invention's   RLOC    CompCode   Ordinal Code
EER (%)        0.22                      0.64    0.68       0.33
Accuracy (%)   100                       99.77   99.21      99.77
(57) G.1.3 PolyU Palmprint Database
(58) The PolyU palmprint database contains 7,752 palmprint images from 386 different palms. These images are automatically segmented to 128×128 pixels. In this database there are several images which are poorly aligned due to rotational variation. In the experiments, the same protocol as reported in [4] is used. Only the first sample of each individual is used to construct the training set. The training set is enlarged by rotating each image in the training set by 9, 6, 3, −3, −6, and −9 degrees respectively. Consequently, there are a total of seven training samples for each subject.
(59) TABLE 3. The EER and rank-one recognition accuracy from different methods on the PolyU Palmprint Database

Method         The present invention's   RLOC    CompCode   Ordinal Code
EER (%)        0.033                     0.089   0.076      0.038
Accuracy (%)   100                       99.95   99.76      100
(60) The results achieved by the method of the present invention and the ordinal code are better than those of the other two methods. In the PolyU Palmprint database, all the images are well illuminated and acquired with a special contact-based imaging device. It may therefore be noted that, under such illumination conditions, the ordinal code method represents a special case of an embodiment of the present invention. However, when the illumination is poor (say uncontrolled or ambient), the performance of the ordinal code will significantly deteriorate. The results on the other three palmprint databases in this document (presented in Sections G.1.1-2 and G.1.4) also validate this argument.
(61) G.1.4 CASIA Palmprint Database
(62) The CASIA palmprint database contains 5,239 palmprint images from 301 individuals. It is the largest publicly available database in terms of the number of individuals. In this database, the individual “101” is the same as the individual “19”, and these two classes were therefore merged into one class. The 11.sup.th image from the left hand of individual “270” is a misplaced sample posted in the right hand. The 3.sup.rd image from the left hand of individual “76” is a distorted sample whose quality is very poor. These two samples can also be automatically detected by a palmprint segmentation program. These two images are eliminated in the experiment. Therefore, all experiments with this database employed 5,237 images belonging to 600 different palms.
(63) The images in the database are segmented and scaled to 128×128 pixels. The segmentation algorithm is based on the method reported in [11]. In the experiment, the total number of matches is 13,710,466, which includes 20,567 genuine and 13,689,899 imposter matches.
(64) TABLE 4. Comparative results (EER) using the CASIA Palmprint Database

Method    The present invention's   RLOC   CompCode   Ordinal Code
EER (%)   0.53                      1.0    0.76       0.79
(65) G.1.5 Computational Requirement and Discussion
(66) Table 5 lists the computational times for the method of the present invention, RLOC [4], Competitive Code [6] and Ordinal Code [7]. The feature extraction of the method of the present invention is much faster than the Competitive Code and the Ordinal Code, and a little slower than RLOC. However, the matching speed of the method of the present invention is among the fastest, while RLOC is much slower. Considering the feature extraction time and matching time together, it can be concluded that the present invention also has a speed advantage.
(67) TABLE 5. Comparative computational time requirements

Method                    Feature Extraction   Matching
The present invention's   1.1                  0.054
RLOC                      0.13                 1.2
Competitive Code          4.0                  0.054
Ordinal Code              3.2                  0.054

Note: The experimental environment is: Microsoft® Windows® 8 Professional, Intel® Core™ i5-3210M CPU @ 2.50 GHz, 8 GB RAM, VS 2010.
(68) The Ordinal code [8] can be considered a special case of the developed feature of the present invention. The ordinal code uses three 2D Gaussian filters to perform convolution with the image. The difference is that the ordinal code assigns different values to the weights W.sub.i,1 and W.sub.i,2 in Equation 8, while the developed feature of the present invention sets all the weights to one, which results in δ from Equation 8. As a result, the ordinal code output is not as orthogonal to the palmprint surface as that from the method of the present invention. Thus, according to the earlier analysis, the recovery using the ordinal code will be sensitive to illumination changes. In the experiment on the PolyU Palmprint Database, all the images are well illuminated, i.e., the illumination direction is orthogonal to the palmprint surface. According to Equation 7, the recovered feature is determined by the illumination direction and the 3D shape information δ; if the illumination is orthogonal to the palmprint surface, the recovered feature will be reliable. This is the reason why the method of the present invention achieves similar performance on the PolyU Palmprint Database to the ordinal code. It may be noted that the ordinal measure [8] uses a template three times larger than that of the method of the present invention; nonetheless, the method of the present invention outperforms the ordinal measure.
(69) G.2 Experimental Results from Face Recognition Applications
(70) In this part of the experiments, the Extended Yale Face Database B [3] is employed to verify the effectiveness of the present invention for face recognition. These experiments are intended to evaluate the effectiveness of the feature of the present invention and to support.sup.6 the argument that the feature is indeed describing 3D shape information, which is insensitive to illumination changes. The Extended Yale Face Database B [3] is chosen for identification primarily for three reasons:
(71) 1) For face data, the surface of the face is almost flat, similar to the palmprint surface. Therefore, the assumption of illumination directions made during the imaging (in
The Extended Yale Face Database B contains 38 subjects, and each subject is imaged under 64 illumination conditions. Cropped faces with a resolution of 168×192 are also provided. In the experiment, the most neutral light source (A+000E+00) images are used as the gallery, and all the other frontal images are used as probes (in summary, 38 images constituted the training set and 2,376 images are employed for testing). This database is used to evaluate the identification performance. .sup.6It may be noted that identification considers the relative matching score and not the absolute matching score. In identification, intra-class objects are expected to be more similar in shape information than inter-class ones. Since the feature of the present invention encodes shape information, intra-class matchings are expected to have relatively (not absolutely, because the exact number of unreliable codes in a certain matching cannot be assured, since it depends on the feature extractor and on whether the reliability constraint is satisfied, as mentioned in the previous part) more reliable codes than inter-class ones. The state-of-the-art performance from this experiment also supports the above arguments.
(72) No additional filter is designed according to the geometrical properties of the face surface; the same division strategy as for the palmprint is simply used. Besides, the denoising matching strategy is not employed, to underline the robustness of the feature of the present invention to the extremely varying illumination conditions. Therefore, in the matching stage, the distance computation in Equation (16) is replaced by:
Distance(F.sub.T,F.sub.S)=Dis(F.sub.T,F.sub.S) (19)
The rank-one recognition accuracy is 99.3%. Table 6 summarizes the comparative results with two other state-of-the-art methods on this database. These results demonstrate that the developed feature of the present invention is also robust to illumination changes and validate the presented argument.
(73) TABLE 6. Identification rate comparison with state-of-the-art methods on the Extended Yale Face Database B

Method              The present invention's   PP + LTP/DT [15]   G-LDP [16]
Rank-one rate (%)   99.3                      99.0               97.9
(75) The experiments also achieved similarly outperforming results when matching iris images (using the publicly available IITD Iris Images Database) and finger knuckle images (using the publicly available PolyU Contactless Finger Knuckle Images Database).
(76) G.3 Experimental Results from Finger Knuckle Experiments
(77) In this part of the experiments, finger knuckle images from 501 subjects are employed to comparatively ascertain the performance. The experimental results using receiver operating characteristics are illustrated in
H. CONCLUSIONS
(78) The present invention provides a new method of contactless biometrics identification using 2D images. In accordance with one embodiment, a new feature based on the ordinal measure is developed. This feature recovers 3D shape information of the surface while being extracted from pixel-level information in the contactless images. The feature extraction and matching are very efficient and the implementation is simple, which underscores its practicality. The template size of this feature is also very small, while it achieves excellent performance on multiple biometrics databases.
(79) The developed feature is suitable for integration with other features to further improve matching performance for two key reasons: 1) it has lower storage requirements while being efficient to recover/extract and match, and, most importantly, it is effective in achieving accurate performance; 2) most of the 2D features mentioned in previous sections extract texture information, while the developed feature of the present invention recovers 3D shape information, which means it is likely to have less redundancy with other features.
(80) The prototype of the hand dorsal image based system using the invented feature achieves excellent performance on finger knuckle biometric matching. The present invention has successfully developed a joint add-on system which can be integrated with existing slap-fingerprint devices, or work standalone, to reveal additional matches from knuckle biometrics for enhancing the accuracy of fingerprint based matches.
(81) The embodiments disclosed herein may be implemented using general purpose or specialized computing devices, computer processors, or electronic circuitries including but not limited to digital signal processors (DSP), application specific integrated circuits (ASIC), field programmable gate arrays (FPGA), and other programmable logic devices configured or programmed according to the teachings of the present disclosure. Computer instructions or software codes running in the general purpose or specialized computing devices, computer processors, or programmable logic devices can readily be prepared by practitioners skilled in the software or electronic art based on the teachings of the present disclosure.
(82) In some embodiments, the present invention includes computer storage media having computer instructions or software codes stored therein which can be used to program computers or microprocessors to perform any of the processes of the present invention. The storage media can include, but are not limited to, floppy disks, optical discs, Blu-ray Disc, DVD, CD-ROMs, and magneto-optical disks, ROMs, RAMs, flash memory devices, or any type of media or devices suitable for storing instructions, codes, and/or data.
(83) The foregoing description of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations will be apparent to the practitioner skilled in the art.
(84) The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications that are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.