METHOD FOR DETECTING FORGERY IN FACIAL RECOGNITION AND COMPUTER READABLE MEDIUM AND VEHICLE FOR PERFORMING SUCH METHOD
20230125437 · 2023-04-27
Inventors
CPC classification
B60W30/00
PERFORMING OPERATIONS; TRANSPORTING
G06V40/171
PHYSICS
International classification
B60W30/00
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A method for detecting forgery in facial recognition is disclosed that comprises selecting (110) an illumination pattern including at least three illumination intensities; controlling (120) a light-emitting device (220) to emit light pulses according to the illumination pattern; capturing (130) with a camera (210) a respective image of an eye region (15) of a face (10) during each emitted light pulse; determining (140) whether a glint (20) is present in the eye region (15) of each captured image; measuring (150) a glint intensity of each determined glint (20); extracting (160) a respective numerical feature for each captured image, the numerical feature representing a curve of all measured glint intensities at the glint intensity of the respective image; and outputting (180) a signal that the face (10) is forged if one or more of the extracted numerical features does not correspond to a reference numerical feature. Furthermore, a computer-readable medium storing instructions to perform the method, and a vehicle (1) comprising a processor (250) capable of performing such a method, are disclosed.
Claims
1. A method for detecting forgery in facial recognition, the method comprising: selecting an illumination pattern including at least three illumination intensities; controlling a light-emitting device to emit light pulses, according to the illumination pattern; capturing with a camera a respective image of an eye region of a face during each emitted light pulse; determining whether a glint is present in the eye region of each captured image; measuring a glint intensity of each determined glint; extracting a respective numerical feature for each captured image, the numerical feature representing a curve of all measured glint intensities at the glint intensity of the respective image; and outputting a signal that the face is forged, if one or more of the extracted numerical features does not correspond to a reference numerical feature.
2. The method according to claim 1, wherein extracting the numerical feature comprises: determining an intensity level of the glint in each captured image, or determining a normalised intensity level of the glint in each captured image; and computing a gradient of the curve defined by the determined intensity levels or the determined normalised intensity levels, or computing a curvature of the curve defined by the determined intensity levels or the determined normalised intensity levels.
3. The method according to claim 1, further comprising: comparing the numerical feature with the reference numerical feature, wherein the reference numerical feature is associated with the selected illumination pattern.
4. The method according to claim 3, wherein the comparing is performed by a machine learning process trained with a plurality of reference numerical features and/or trained with a plurality of illumination patterns.
5. The method according to claim 3, wherein the comparing comprises comparing a difference between the extracted numerical feature and the reference numerical feature with a threshold, and determining that the extracted numerical feature does not correspond to the reference numerical feature, if the difference is greater than the threshold.
6. The method according to claim 1, wherein the illumination pattern defines an arbitrary level of illumination intensity for each light pulse.
7. The method according to claim 1, further comprising: illuminating the face with a further light-emitting device emitting a diffused light avoiding the generation of a glint in the eye region.
8. The method according to claim 1, further comprising: estimating a gaze direction of the eye in the eye region of at least one of the captured images, wherein the outputting of the signal that the face is forged comprises outputting the signal, if one or more of the estimated gaze directions does not correspond to an expected gaze direction.
9. The method according to claim 1, wherein the light-emitting device emits infrared light.
10. A computer-readable medium configured to store executable instructions that, when executed by a processor, cause the processor to perform the method according to claim 1.
11. A vehicle comprising: a camera; at least one light-emitting device; a storage device storing a plurality of illumination patterns including at least three illumination intensities; and a processor configured to perform the method according to claim 1.
12. The vehicle according to claim 11, further comprising: a security system configured to deactivate a vehicle component, if the processor outputs the signal that the face is forged.
13. The vehicle according to claim 11, further comprising: a driver assistance system including the camera and configured to observe a driver of the vehicle.
14. The vehicle according to claim 11, further comprising: a display, wherein the processor is configured to use an illumination of the display as a further light-emitting device.
Description
[0039] Preferred embodiments of the invention are now explained in greater detail with reference to the enclosed schematic drawings.
[0047] The face 10 is illuminated by at least one light-emitting device, one of which forms a reflection 20 in the eye 17 that appears as a bright, circular spot near the pupil of the eye 17. This reflection 20 is also referred to as a glint 20. Such a glint 20 is mainly present if a spot-like light source emits light towards the cornea of the eye 17 and forms a corresponding reflection on the cornea. The more diffused the light illuminating the face 10, the less visible the glint 20 will be. The present disclosure is directed to the generation of a plurality of glints 20, as will be outlined in more detail below.
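As an illustrative, non-limiting sketch of how the presence of a glint may be determined in a captured eye region, the following Python snippet locates a bright, spot-like corneal reflection by simple thresholding. The function name, the threshold value, and the assumption of grayscale images normalised to [0, 1] are illustrative only and not part of the claimed subject matter.

```python
import numpy as np

def detect_glint(eye_region, threshold=0.9):
    """Detect a glint as a small saturated bright spot in a grayscale
    eye-region image with pixel values in [0, 1]. Returns the (row, col)
    of the brightest pixel if any pixel exceeds the threshold,
    otherwise None (e.g. the illumination was too diffuse)."""
    mask = eye_region >= threshold
    if not mask.any():
        return None  # no glint present in this captured image
    # The peak of the bright spot approximates the corneal reflection centre.
    return np.unravel_index(np.argmax(eye_region), eye_region.shape)

# Synthetic example: a dark eye region with one bright spot near the pupil.
eye = np.full((20, 20), 0.1)
eye[8, 11] = 1.0
print(detect_glint(eye))  # (8, 11)
```

In practice the threshold would depend on camera exposure and the selected illumination intensity; a connected-component or blob-detection step could replace the single-pixel peak.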
[0048] In step 110, an illumination pattern including at least three illumination intensities is selected. In step 120, a light-emitting device 220 is controlled to emit light pulses according to the selected illumination pattern.
[0049] In step 130, a camera 210 captures a respective image of the eye region 15 of the face 10 during each emitted light pulse. In step 140, it is determined whether a glint 20 is present in the eye region 15 of each captured image.
[0050] Thereafter, a glint intensity of each determined glint 20 can be measured in step 150.
[0051] Referring back to the method, a respective numerical feature is extracted in step 160 for each captured image, the numerical feature representing the curve of all measured glint intensities at the glint intensity of the respective image.
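The gradient/curvature variant of the feature extraction (cf. claim 2) can be sketched as follows; this is a minimal illustration, assuming the measured glint intensities form a discrete curve with one point per light pulse, and using finite differences as the gradient and a second difference as a curvature proxy.

```python
import numpy as np

def extract_features(glint_intensities):
    """For the curve formed by the measured glint intensities (one per
    light pulse), compute the gradient and a curvature proxy at each
    point, so each captured image yields one (gradient, curvature) pair."""
    curve = np.asarray(glint_intensities, dtype=float)
    gradient = np.gradient(curve)       # first derivative of the intensity curve
    curvature = np.gradient(gradient)   # second derivative as curvature proxy
    return gradient, curvature

# Three illumination intensities (the claimed minimum) give three points.
g, c = extract_features([0.2, 0.5, 0.8])
print(g)  # [0.3 0.3 0.3] -- a linear intensity response has constant gradient
```

A real cornea tends to produce a glint intensity that tracks the illumination pattern, so its gradient follows the pattern's shape, whereas a flat photograph or screen typically does not.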
[0052] If one or more of the extracted numerical features for all images, i.e. for all light pulses and measured glint intensities, does not correspond to a reference numerical feature, a signal that the face 10 is forged is output in step 180.
[0054] An intensity level of the glint 20 can be determined in step 162 for each captured image.
[0055] Alternatively or additionally, a normalised intensity level of a glint 20 can be determined in step 164 for each captured image. Such normalisation may be achieved in various ways, such as with respect to a maximum illumination of the face 10 or eye region 15 at a maximum-intensity light pulse, or a corresponding minimum illumination, or with respect to an average illumination of the face 10, the eye region 15 or any other portion of the image at a time when no light pulse is emitted.
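One possible normalisation of step 164 is sketched below; the mapping of raw glint levels into [0, 1] relative to the illumination measured at the maximum- and minimum-intensity pulses is an illustrative choice, not the only one contemplated above.

```python
import numpy as np

def normalise_glint_levels(glint_levels, max_illumination, min_illumination=0.0):
    """Normalise raw glint intensity levels with respect to the face
    illumination measured at the maximum- and minimum-intensity light
    pulses, mapping the levels into the range [0, 1]."""
    levels = np.asarray(glint_levels, dtype=float)
    span = max_illumination - min_illumination
    return (levels - min_illumination) / span

print(normalise_glint_levels([20, 60, 100], max_illumination=100))  # [0.2 0.6 1. ]
```

Normalising makes the extracted features comparable across different camera exposures and ambient lighting conditions.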
[0056] Referring back to the method, the extracted numerical feature can be compared in step 170 with the reference numerical feature, wherein the reference numerical feature is associated with the selected illumination pattern.
[0057] While conventional anti-spoofing measures may compare a difference between a measured light intensity and an expected light intensity with a threshold, such a comparison may lead to false positives, for example for the first two measuring points of a measured glint intensity curve.
[0058] The comparing of the numerical features can be performed by a machine learning process trained with a plurality of reference numerical features and/or trained with a plurality of illumination patterns.
[0059] The comparing of the numerical features can further comprise comparing a difference between the extracted numerical feature and the reference numerical feature with a threshold in step 172. It is then determined that the extracted numerical feature does not correspond to the reference numerical feature if the difference is greater than the threshold, and the signal is output in step 180.
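The threshold comparison of step 172 and the decision of step 180 can be sketched as follows; the feature representation (one small vector per captured image), the maximum-absolute-difference metric, and the threshold value are assumptions for illustration.

```python
import numpy as np

def feature_matches(extracted, reference, threshold=0.05):
    """Step 172 sketch: the extracted feature corresponds to the
    reference feature only if their maximum absolute difference does
    not exceed the threshold."""
    diff = np.max(np.abs(np.asarray(extracted, dtype=float)
                         - np.asarray(reference, dtype=float)))
    return diff <= threshold

def is_forged(extracted_features, reference_features, threshold=0.05):
    """Step 180 sketch: signal that the face is forged (True) if one or
    more of the extracted numerical features does not correspond to its
    associated reference numerical feature."""
    return any(not feature_matches(e, r, threshold)
               for e, r in zip(extracted_features, reference_features))

print(is_forged([[0.30], [0.31]], [[0.30], [0.30]]))  # False -- within tolerance
print(is_forged([[0.30], [0.50]], [[0.30], [0.30]]))  # True  -- one feature deviates
```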
[0060] A vehicle 1 comprises a camera 210, at least one light-emitting device 220, a storage device storing a plurality of illumination patterns including at least three illumination intensities, and a processor 250 configured to perform the disclosed method.
[0061] The processor 250 can control the light-emitting device 220 to emit light pulses according to the selected illumination pattern, while the camera 210 captures images of the eye region 15 of the driver 5. The light-emitting device 220 can be an infrared (IR) light-emitting device, such as an IR LED, in order to generate the glint 20. Of course, the light-emitting device 220 can emit visible light instead of, or in addition to, the IR light. The processor 250 can further perform the method steps 140 to 180 in order to determine whether the face 10 captured by the camera 210 is forged or not.
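The pulse-and-capture loop (steps 120, 130 and 150) can be sketched with stub device classes; `LightEmitter`, `Camera` and their methods are hypothetical stand-ins for the hardware interfaces of devices 220 and 210, and the simulated glint response is illustrative only.

```python
import random

class LightEmitter:
    """Hypothetical stub for the (e.g. infrared) light-emitting device 220."""
    def pulse(self, intensity):
        self.current_intensity = intensity  # emit one pulse at this intensity

class Camera:
    """Hypothetical stub for camera 210: returns a simulated glint
    intensity that, for a real cornea, rises with the pulse intensity."""
    def capture_glint(self, emitter):
        return 0.9 * emitter.current_intensity + random.uniform(0.0, 0.01)

def run_pattern(pattern, emitter, camera):
    """Emit one light pulse per illumination intensity in the selected
    pattern and measure the glint intensity during each pulse."""
    glints = []
    for intensity in pattern:
        emitter.pulse(intensity)
        glints.append(camera.capture_glint(emitter))
    return glints

pattern = [0.3, 0.6, 1.0]  # at least three illumination intensities
print(run_pattern(pattern, LightEmitter(), Camera()))
```

In a vehicle, the same loop would be driven by the processor 250, with the pattern read from the storage device.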
[0062] The vehicle 1 can further comprise a security system 260 configured to deactivate a vehicle component 270 if the processor 250 outputs the signal that the face 10 is forged. For instance, the processor 250 can prevent the engine from starting if a forged face 10 is detected.
[0063] Furthermore, a display 225 may be installed in the vehicle 1, which, on the one hand, can be used as a further light-emitting device providing diffused light for illuminating the face 10 of the driver 5. On the other hand, the display 225 can be used to present instructions to the driver 5 on how to use the anti-spoofing system, i.e. the function of the processor 250 when performing the method steps 110 to 180.
[0064] In a specific example, the processor 250 can be configured to display instructions on the display 225 to the driver 5 to gaze in a certain direction. In a further method step 190, a gaze direction of the eye in the eye region 15 of at least one of the captured images can be estimated, and the signal that the face 10 is forged can be output if one or more of the estimated gaze directions does not correspond to the expected gaze direction.
[0065] The vehicle 1 may also be equipped with a driver assistance system 265, which includes a camera 210 and is configured to observe the driver 5 of the vehicle 1. Such a driver assistance system 265 may be configured to monitor the gaze of the driver 5 or estimate the tiredness of the driver 5 by processing images captured of the driver 5, particularly of the face 10 of the driver 5. The camera 210 of the driver assistance system 265, as well as an optional display 225 and/or any processing means, such as the processor 250, of the driver assistance system 265 can be employed for the disclosed method, so that no redundant devices have to be installed in the vehicle 1.
[0067] The above description of the drawings is to be understood as providing only exemplary embodiments of the present invention and shall not limit the invention to these particular embodiments.