Image acquisition device and method for determining a focus position based on sharpness
10334152 · 2019-06-25
CPC classification
H04N23/673
ELECTRICITY
G02B7/36
PHYSICS
H04N23/611
ELECTRICITY
International classification
G03B13/00
PHYSICS
Abstract
A method for acquiring an image comprises acquiring a first image frame including a region containing a subject at a first focus position; determining a first sharpness of the subject within the first image frame; identifying an imaged subject size within the first image frame; determining a second focus position based on the imaged subject size; acquiring a second image frame at the second focus position; and determining a second sharpness of the subject within the second image frame. A sharpness threshold is determined as a function of image acquisition parameters for the first and/or second image frame. Responsive to the second sharpness not exceeding the first sharpness and the sharpness threshold, camera motion parameters and/or subject motion parameters for the second image frame are determined before performing a focus sweep to determine an optimal focus position for the subject.
Claims
1. A method for acquiring an image comprising: acquiring a sequence of image frames of a given scene; detecting within each image frame any region containing a subject with a reference subject size; recording an identifier of any region containing such a subject on which auto-focus might be based; determining a number of variations of subject identifiers for a given number of frames preceding a given frame in said sequence; responsive to said number of variations exceeding a threshold, disabling subject based auto-focus for subsequent frames until at least a condition for a given scene is met; responsive to said number of variations not exceeding a threshold in a given frame containing at least one subject, utilising subject based auto-focus for acquisition of at least a subsequent frame in said sequence; and otherwise using non-subject based auto-focus for at least a subsequent frame in said sequence.
2. A method according to claim 1 wherein said number of variations is defined as a sum of changes of identifiers of subjects on which auto-focus might be based in previous pairs of successive frames of a given scene.
3. A method according to claim 1, wherein determining a change of identifiers between a successive pair of frames is modified by at least one tuning parameter and a maximum number of detected subjects in any image frame of a scene.
4. A method according to claim 1 wherein a face detection based auto-focus comprises: checking if a lens actuator setting for acquiring a given image frame provides maximum sharpness for a region of said image frame including a detected subject on which auto-focus might be based; and responsive to said sharpness not being maximal: determining a lens actuator setting providing a maximum sharpness for the region of said image frame including said detected subject; determining a lens displacement corresponding to said lens actuator setting; calculating a distance to said subject based on said lens displacement; determining a dimension of said subject as a function of said distance to said subject, said imaged subject size and a focal length of a lens assembly with which said image frame was acquired; and tracking said subject through at least one subsequent image frame of said scene including auto-focussing on said subject using said determined dimension of said subject; and responsive to said sharpness being maximal, tracking said subject through at least one subsequent image frame of said scene including auto-focussing on said subject using said reference subject size.
5. A method according to claim 1 wherein said subject comprises a face and wherein said reference subject size is a distance between eye regions of said face.
6. A method according to claim 1 wherein said condition for a given scene comprises detecting a change of scene.
7. A method according to claim 6 wherein a change of scene is based on an assessment of one or more of: camera motion and subject motion.
8. An image acquisition device comprising an auto-focus module arranged to acquire image acquisition, subject motion and camera motion parameters for an acquired image frame; and to perform the method of claim 1.
9. A computer program product comprising a non-transitory computer readable medium on which computer executable instructions are stored which when executed on an image acquisition device are arranged to perform the method of claim 1.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) An embodiment of the invention will now be described by way of example, with reference to the accompanying drawings, in which:
DESCRIPTION OF THE EMBODIMENT
(8) Referring now to
(9) It is known to extract either intra-frame or frame-to-frame camera motion parameters (CMP), for example, to provide image stabilisation within a video sequence, as disclosed in U.S. application Ser. No. 15/048,149 filed 19 Feb. 2016 (Ref: FN-474), the disclosure of which is incorporated herein by reference. CMP can comprise a simple measure of camera inertial measurement unit (IMU) outputs from accelerometers and/or gyroscopes indicating translational and rotational movement of the camera within a frame or from frame-to-frame. Alternatively, a displacement map V[ ], such as shown in
(10) In any case, the AF system 38 acquires such CMP information from an IMU and/or any other processing modules (not shown) within the image acquisition device 30 to indicate intra-frame or frame-to-frame camera motion.
(11) As well as camera motion, it is known for an acquisition device to detect and track a subject, for example, a detected face, through a sequence of acquired images, for example, as disclosed in WO2008/018887 (Ref: FN-143) and related cases, the disclosure of which is incorporated herein by reference. Thus, the AF system 38 of the embodiment can obtain either intra-frame or frame-to-frame subject motion parameters (SMP), i.e. x, y displacement parameters for a region of interest within an image comprising a subject being tracked, either within frame N or between frame N and a preceding (or succeeding) frame N-1, from any processing modules (not shown) within the image acquisition device 30 performing such analysis.
(12) Finally, the AF system 38 can extract image acquisition parameters (IAP) such as sensor gain, exposure time, light level, lens position (subject distance), etc. for any given frame.
(13) Referring now to
(14) The method of
(15) If the camera has been operating in FDAF mode, then a subject will have been tracked from a previous frame; otherwise, the method checks whether a new subject has been detected and whether it might be used for focussing. If a focus subject such as a face region is detected, step 42, the method continues; if not, the next frame is acquired using an alternative AF mode and the method reverts to step 40.
(16) At step 44, the method extracts sharpness thresholds as a function of IAP, including the distance to the detected subject and the ambient light level. Thus, for example, for a given distance to a subject and for a given light level, the method establishes a threshold level of sharpness against which a next acquired image will be assessed.
(19) It will be appreciated from
(20) Multiple sets of curves such as shown in
(21) It will therefore be seen that when imaging a face at a given distance, light level, exposure time and given ISO/gain level, for a given camera, there is an expectation of the maximum sharpness which might be attained for a face region including a real face. In a simple implementation, this might comprise a simple average of the level of one of curves 50 and one of curves 60 for the subject distance and light level respectively at a given ISO/gain level and exposure time. Given the maximum value based on IAP used to acquire the present image, a sharpness threshold ThrO can be calculated as a fraction of the maximum value, say 60%.
(22) It will be appreciated that sharpness functions such as those shown in each of
(23) In any case the sharpness threshold ThrO for the given image can be a function of IAP including: light level, distance to the subject, ISO/gain, exposure time.
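By way of a minimal sketch only (not part of the described method), the threshold computation of steps 44-48 might look as follows. The calibration tables, the nearest-neighbour lookup and the 60% fraction are illustrative assumptions; a real system would use per-camera measured curves and proper interpolation:

```python
def sharpness_threshold(distance_mm, light_lux, max_sharpness_vs_distance,
                        max_sharpness_vs_light, fraction=0.6):
    """Estimate the sharpness threshold ThrO for the current frame.

    The two table arguments are hypothetical per-camera calibration
    tables {value: max_sharpness}, measured at the current ISO/gain
    and exposure time.
    """
    def interp(table, x):
        # Nearest-neighbour lookup; a real system would interpolate.
        key = min(table, key=lambda k: abs(k - x))
        return table[key]

    # Simple average of the distance curve level and the light curve
    # level, as described above, then a fraction of that maximum.
    expected_max = 0.5 * (interp(max_sharpness_vs_distance, distance_mm)
                          + interp(max_sharpness_vs_light, light_lux))
    return fraction * expected_max
```

For example, with a distance table giving a maximum of 80 at 1000 mm and a light table giving 90 at 1000 lux, a frame at roughly those conditions would yield a threshold of 0.6 × 85 = 51.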
(24) As well as calculating a sharpness threshold, in step 44 a sharpness map, OldSh, is calculated for the subject region detected within the image acquired at step 40. In a simple implementation, OldSh can comprise a small array or look-up table (LUT) with sharpness measurements for the subject region.
(25) At step 46, based on the estimated distance to the subject, a new assumed-optimal lens position can be determined for example, as explained in PCT Application PCT/EP2015/076881 (Ref: FN-399), the disclosure of which is incorporated herein by reference. So for example, if a region of interest comprising a face is being tracked, then facial anthropometric data such as eye separation distance can be employed to determine, from the size of the imaged face and IAP, the distance of the subject from the image acquisition device.
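The distance estimate from anthropometric data follows from a simple pinhole-camera relation, sketched below. The function name, the pixel pitch parameter and the 67.5 mm reference eye distance are illustrative assumptions, not values from the described method:

```python
def subject_distance_mm(eye_distance_px, focal_length_mm, pixel_pitch_mm,
                        ref_eye_distance_mm=67.5):
    """Estimate the distance to a face from its imaged eye separation.

    Pinhole model: imaged_size / focal_length = real_size / distance,
    so distance = real_size * focal_length / imaged_size.
    ref_eye_distance_mm is the assumed adult inter-eye distance
    (the nominal 65-70 mm mentioned in the text).
    """
    imaged_eye_distance_mm = eye_distance_px * pixel_pitch_mm
    return ref_eye_distance_mm * focal_length_mm / imaged_eye_distance_mm
```

With a 4 mm lens and a 1.5 µm pixel pitch, an eye separation of 100 pixels would place the subject at about 1.8 m.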
(26) The lens 12 can then be driven to the focus position (if a change is required) for this distance and a new frame acquired at that position. A sharpness map, NewSh, corresponding to the sharpness map, OldSh is then calculated for that frame.
(27) At step 48, the new map NewSh for the frame is compared to the threshold level calculated at step 44 and to the map OldSh for the previous frame acquired at step 40. If the sharpness values of the new map exceed the threshold ThrO and improve on those of the old map, then the focus position used at step 46 is regarded as valid.
(28) There are of course many techniques for performing the comparisons of step 48, for example: that all or a given proportion or number of the sharpness values of NewSh must exceed ThrO and OldSh; or that the aggregate sharpness of NewSh must exceed that of OldSh and that a threshold number of points in NewSh must exceed ThrO.
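The second of these comparison strategies (aggregate sharpness plus a minimum fraction of points above the threshold) might be sketched as follows; min_fraction is a hypothetical tuning parameter:

```python
def focus_position_valid(new_sh, old_sh, thr0, min_fraction=0.75):
    """One possible implementation of the step 48 comparison.

    new_sh / old_sh are the flattened sharpness maps NewSh and OldSh
    for the subject region; thr0 is the threshold ThrO from step 44.
    """
    # Aggregate sharpness of the new map must exceed that of the old.
    if sum(new_sh) <= sum(old_sh):
        return False
    # A minimum fraction of the new points must exceed the threshold.
    above = sum(1 for s in new_sh if s > thr0)
    return above >= min_fraction * len(new_sh)
```

So a uniformly sharper map with most points above ThrO validates the focus position, while a map that fails either test does not.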
(29) In this case, the method proceeds to step 50 where a counter TCounter is incremented to signal how long a subject has been successfully tracked and the method continues to acquire a next frame at step 40.
(30) On the other hand, if the adjustment at step 46 fails to improve the sharpness of the tracked subject, then the next step 52 is to assess camera motion (CMP) and subject motion (SMP).
(31) If significant motion is measured in the scene (CMP and SMP are high), the loss of sharpness is attributed to motion rather than loss of focus, and the method continues by waiting for motion to stabilise and acquiring a next frame at step 40 (but without incrementing TCounter) before changing focus position.
(32) Note that the thresholds for CMP and SMP can depend on the exposure time. When exposure time is long, smaller subject/camera motion levels are accepted. When the exposure time is very short (outdoor sunny scenarios), camera motion (hand held) can be neglected and larger subject motion (natural still motion) can also be neglected; only significant subject motion (fast sport actions) should be taken into account, i.e. the SMP threshold is inversely proportional to exposure time.
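The exposure-dependent thresholds can be sketched as below; the constants k_cmp and k_smp are hypothetical tuning parameters, and the inverse-proportionality is taken directly from the text:

```python
def motion_thresholds(exposure_time_ms, k_cmp=2.0, k_smp=8.0):
    """Exposure-dependent motion thresholds: longer exposures tolerate
    less camera/subject motion, i.e. thresholds are inversely
    proportional to exposure time."""
    return k_cmp / exposure_time_ms, k_smp / exposure_time_ms

def motion_is_significant(cmp_val, smp_val, exposure_time_ms):
    """Step 52 decision: attribute a sharpness loss to motion when
    either CMP or SMP exceeds its exposure-dependent threshold."""
    cmp_thr, smp_thr = motion_thresholds(exposure_time_ms)
    return cmp_val > cmp_thr or smp_val > smp_thr
```

Thus at a long exposure the same hand tremor trips the CMP threshold, while at a very short exposure only fast subject motion registers as significant.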
(33) On the other hand, if in spite of CMP and SMP being lower than their respective thresholds, sharpness has dropped, then the method proceeds to step 54 where a false counter FCounter is incremented.
(34) At the next step 56, the method tests whether the number of false adjustments performed at step 46, in spite of low CMP/SMP, has exceeded a threshold, i.e. FCounter>Th2. If at this stage TCounter is below a threshold, i.e. TCounter<Th1, the method infers that the camera has not been able to track a number (Th2) of subjects for sufficiently long, indicating that the camera module may be malfunctioning, at least in respect of being able to perform FDAF. Thus, at step 58, the camera module is marked accordingly, and at step 60, FDAF can be permanently disabled and the camera switches to an alternative AF mode such as CDAF i.e. the method of
(35) On the other hand, if the test at step 56 fails, it may mean that the subject being imaged does not in fact exhibit the assumed dimensions on which step 46 is based initially. For example, if the subject is a face and the face detected is a baby face or a large printed face, then the distance between the subject's eyes, a common anthropometric measure used to determine subject distance, may not be the nominal 65-70 mm and so the focus position chosen based on this measurement would be incorrect; thus a false face is indicated at step 62. Similarly, if the subject being tracked were a car, then if a printed image of a car either in a magazine or on a billboard were being acquired by the image acquisition device 30, it would not exhibit the nominal dimensions associated with a car.
(36) If a potentially valid updated nominal distance is not available, for example, if a focus sweep has already been performed for a given subject being tracked and should not be re-executed, the process stops without performing a (further) focus sweep and an alternative AF method is set for as long as the current subject region is being tracked (as well as resetting TCounter).
(37) Otherwise, in step 68, the nominal anthropometric measure or dimension being used for the subject can be re-calculated as disclosed in PCT Application No. PCT/EP2015/076881 (Ref: FN-399-PCT), using a focus sweep to determine the focus position at which the subject exhibits maximum contrast and then re-calculating the nominal subject measurement at this distance. Note that this can be the only circumstance where a focus sweep is required. Indeed, if the method were not to attempt to track false faces (or equivalents), then focus sweep would not be required.
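Once a focus sweep has yielded the true subject distance, the nominal dimension can be re-calculated by inverting the pinhole relation used for the original distance estimate. This is a minimal sketch with illustrative names and parameters, not the implementation of the referenced application:

```python
def nominal_subject_size_mm(distance_mm, imaged_size_px, focal_length_mm,
                            pixel_pitch_mm):
    """Re-calculate the subject's real dimension once a focus sweep
    has given the true distance.

    Pinhole model: real_size = distance * imaged_size / focal_length.
    """
    imaged_size_mm = imaged_size_px * pixel_pitch_mm
    return distance_mm * imaged_size_mm / focal_length_mm
```

For the ID-badge example, a 250-pixel eye separation (2 µm pitch, 4 mm lens) imaged at a swept-to distance of 160 mm recovers an eye distance of 20 mm, which then replaces the nominal 65-70 mm for subsequent tracking.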
(38) Where a focus sweep has been performed in order to determine an optimal focus position for a false subject, such as a printed image, the sharpness thresholds calculated and used in steps 44-48 can be adjusted accordingly i.e. as indicated in
(39) Once an optimum focus position associated with an updated nominal size for the subject has been acquired, the method can now return to step 40 to continue as described above and to acquire a subsequent image at that focus position.
(40) Now taking a specific example using the above method: at a given frame #1, a subject such as a face is detected/tracked. On this first frame, the AF position is as before, and the sharpness value (OldSh) is measured for the face region. Based on the distance between the eyes (measured in pixels on the detected face), the distance to the subject is estimated and then a new corresponding lens position is computed. The lens is moved to the new lens position based on this data. A new sharpness (NewSh) is computed for the face region in frame #2 acquired at the new lens position. The new sharpness from frame #2 is validated by comparing the respective sharpness measurements for the face region in frames #1 and #2, when CMP and SMP are below the acceptable thresholds; the new sharpness from frame #2 should be higher than or equal to the sharpness from frame #1. Based on these two measurements, the method decides if the focus position is good or not. If CMP and SMP are high, the method waits for stabilization.
(41) If the new sharpness for frame #2 is not higher than or equal to the sharpness for frame #1, and if the new sharpness value is not above a given threshold, the focus position is regarded as not acceptable. There may be several reasons for this failure: the detected face is not a real face (it could be a much smaller or a much larger printed face), in which case the method may need to switch to an alternative AF method, e.g. CDAF; or the camera module has become faulty (OTP data is invalid, or the start/stop DAC (digital-to-analog converter) codes for the lens actuator are not good), and the method should switch to an alternative AF method, e.g. CDAF. If this failure happens continuously (for more than Th2 different faces), then the module damage may be irreversible and FDAF may be completely disabled.
(42) The method of
(43) Equally the sharpness threshold ThrO could be calculated based on the IAP for the frame acquired at step 46 rather than the frame acquired at step 40; or indeed as a function of the IAP of both of these and possibly other successively acquired frames.
(44) Also, information for more than one subject region within an image may be taken into account when assessing focus position.
(45) Referring now to
(46) In particular, problems can arise when a false face is detected within an image of a scene. As mentioned above, a false face is one in which the distance between the eyes is either significantly larger or smaller than a reference adult eye distance (ED_REF) of about 65-70 mm.
(47) Consider an image of a scene, captured with focus positioned at a relatively short distance (e.g. 15 cm) from the camera system and containing a small ID badge face having an eye distance of 20 mm. A face detection auto-focus (FDAF) system such as disclosed in PCT Application PCT/EP2015/076881 (Ref: FN-399), particularly if it did not test for local sharpness on initially detecting a face, might behave in the following manner:
(48) a. A face is detected and, based on an assumed standard eye distance, a focus lens position is computed. In the case of an ID badge captured at close range, the focus position could be incorrectly estimated close to infinity because the eye distance (20 mm) is much lower than 65-70 mm.
(49) b. The lens system physically moves the lens to the computed focus position. The resulting image acquired at that focus position would be highly defocused because the subject is actually positioned at a near-macro distance (15 cm), but the lens is focused near infinity. Because the image is highly defocused, the face detector is likely to lose the previously detected face. Without a detected face within a last detected image, the system may switch to PL-CAF to focus based on maximizing sharpness at one or more locations or regions of subsequently acquired images. At some stage during such acquisition of subsequent images, the image should become less blurred or even fully focused, so that the original face is redetected.
(50) The workflow could then start again from step (a) resulting in a (possibly infinite) loop.
(51) The same issues could arise in more complex scenarios with multiple false faces where the face on which focus tracking is based can jump between faces.
(52) Another possible cause for a tracked face to be lost is the face quality, where usually small faces (e.g. small faces from ID badges) are unclear and face detectors generate unstable detections.
(53) These conditions can induce a fast variation/switching of the tracked face (jumping from one face to another or rapidly redetecting one face) in a short amount of time, which in turn can induce a fast variation of the autofocus states (rapidly alternating between PL-CAF and FDAF, or repeatedly triggering FDAF). This can create a strong lens wobble effect with focus shifting back and forth between macro and infinity positions, thus degrading the user experience.
(54) The embodiment of
(55) When the method starts, each of these records is reset, and a sequence counter K is reset to 0. A frame (K) is then acquired at a given focus position, step 72. The tracked face identifier and number of faces for the frame K are initially set to zero/null, step 74. If FDAF has not been disabled for a given scene i.e. within the last K images, step 80, a face detector is then applied to the image, step 76, and this will return a number of faces within the image, CRT_NUM_FACES, as well as an identifier of the face which is to be tracked within the image, CRT_FACE_ID. In a simple implementation, CRT_FACE_ID can indicate the largest face detected within an image, the sharpest face within an image, a face which might previously have been selected by a user within a previous image, or indeed a face corresponding to the CRT_FACE_ID from a previous frame, if this is available. As each new face is detected by the face detector, it allocates a new identifier for the detected face.
(56) Nonetheless, if after starting or after a new scene is detected, an acquired image does not include a face, NUM_FACES[K] and FACE_ID [K] entries will remain null.
(57) This information will eventually be used when determining the stability of a detected face, as will be explained in more detail below; essentially, when a face is not present in a frame and face instability has not been detected for a frame, focus remains based on PL-CAF for a subsequent frame, step 94, and the frame index is incremented, step 86, before continuing to the next frame.
(58) Eventually, if an acquired image includes at least one face, the record entries K in NUM_FACES[ ] and FACE_ID [ ] will be updated with non-null values, step 82, with the corresponding information for the current image. A measure of the stability of face detection, NUM_VARS, is then calculated at step 84. This will be explained in more detail below, but if NUM_VARS is less than a threshold amount, THR, this indicates face detection has become (or is) stable for the scene. If so, the method can switch to (or continue to operate) using FDAF, step 85, before incrementing the variable K, step 86, and the process continues for the next acquired image, step 72. Note that as before this involves calculating the required lens position based on the assumed or calculated subject size and moving the lens to a new focus position.
(59) On the other hand, if at some stage after detecting a face in an image of a scene, NUM_VARS is greater than THR, this indicates face tracking has become unstable and so the camera switches to non-FDAF, i.e. platform based auto-focus, PL-CAF, basing focus on image contrast or sharpness or an actively measured subject distance and disables FDAF until the scene changes, step 88. Once the method has switched to PL-CAF and FDAF has been disabled for a scene, there is no need to continue to update the records NUM_FACES[ ], FACE_ID [ ] and sequence counter K for a scene until this changes, but in variants of the embodiment, this preclusion on attempting to switch back to FDAF until a scene has changed may be modified and it may be useful to continue to update the records NUM_FACES[ ], FACE_ID [ ] and sequence counter K for determining when to do so.
(60) Nonetheless, it will be seen that in the embodiment, if auto-focus switches to PL-CAF in response to face detection becoming unstable, the system will essentially continue to rely on this form of auto-focus until a scene being imaged changes, step 90. There are a number of techniques for determining if the scene has changed and one simply relies on camera motion parameters, such as the CMP described in the first embodiment. Thus if CMP parameters indicate that the image camera has moved above a given threshold amount, a decision that the scene has changed can be made. Alternatively or in addition, subject motion parameters, SMP, can be taken into account in determining if the scene has changed. In any case, if the scene has changed, the NUM_FACES[ ] and FACE_ID [ ] vectors as well as the sequence counter K are reset, step 70, as well as (provisionally) enabling FDAF, step 92, before acquiring the next image.
(61) In the embodiment, face detection, step 76, need not be executed after it has been determined that face detection for a scene has become unstable, however, it will be appreciated that face detection could be applied e.g. before step 80, and its results used for other purposes, even if face detection for a scene had become unstable.
(62) Turning now to how NUM_VARS is computed:
(63) In the embodiment, NUM_VARS is defined as the number of face variations for a fixed number of previous frames: NUM_VARS = Σ_i FACE_VAR_COND(FACE_IDS[i], FACE_IDS[i-1])
(64) FACE_VAR_COND(CRT_ID, PREV_ID) is a variation condition for two face IDs defined as a Boolean (1 if TRUE, 0 if FALSE): |CRT_ID - PREV_ID| > A*max(NUM_FACES[ ]) + B.
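The stability measure can be sketched directly from these definitions. This is a simple illustration in which the FACE_IDS[ ] and NUM_FACES[ ] histories are passed as plain lists, with FACE_VAR_COND[0] taken as 0 as in the examples below:

```python
def face_var_cond(crt_id, prev_id, max_num_faces, a=1, b=0):
    """Boolean variation condition between two successive face IDs:
    |CRT_ID - PREV_ID| > A * max(NUM_FACES[]) + B."""
    return abs(crt_id - prev_id) > a * max_num_faces + b

def num_vars(face_ids, num_faces, a=1, b=0):
    """NUM_VARS: the number of face variations over the recorded
    history, summed over successive pairs of frames."""
    m = max(num_faces) if num_faces else 0
    return sum(face_var_cond(face_ids[i], face_ids[i - 1], m, a, b)
               for i in range(1, len(face_ids)))
```

Applying this to the history of Example 2 at frame 7, FACE_IDS[ ]={1, 0, 0, 0, 0, 2, 0}, gives NUM_VARS=2, matching the point at which FDAF is disabled.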
(65) Let us consider some examples where tuning parameters A=1 and B=0; and THR=2.
Example 1
(66) A small printed face (ED=20 mm), which is always detected in the scene by the face detecting step 76.
(67) Frame 1: Face is detected => CRT_FACE_ID=1, CRT_NUM_FACES=1. History is updated: FACE_IDS[0]=1, NUM_FACES[0]=1; FACE_IDS[ ]={1}; NUM_FACES[ ]={1}; Max(NUM_FACES[ ])=1. FACE_VAR_COND[0]=0. NUM_VARS=0<2 => do not disable FDAF. The lens is moved to the focus position estimated by FDAF (this position is wrongly estimated because the reference eye distance (ED_REF) is 70 mm).
(68) Frame 2: The face is still detected (tracked) => CRT_FACE_ID=1, CRT_NUM_FACES=1. History is updated: FACE_IDS[1]=1, NUM_FACES[1]=1; FACE_IDS[ ]={1, 1}; NUM_FACES[ ]={1, 1}; Max(NUM_FACES[ ])=1. FACE_VAR_COND[1]=[|FACE_IDS[1]-FACE_IDS[0]|>1*1+0]=[0>1]=0. NUM_VARS=FACE_VAR_COND[0]+FACE_VAR_COND[1]=0<2 => do not disable FDAF. The correction sweep will start.
(69) Frame 3: The face is still detected (tracked) => CRT_FACE_ID=1, CRT_NUM_FACES=1. History is updated: FACE_IDS[2]=1, NUM_FACES[2]=1; FACE_IDS[ ]={1, 1, 1}; NUM_FACES[ ]={1, 1, 1}; Max(NUM_FACES[ ])=1. FACE_VAR_COND[2]=[|FACE_IDS[2]-FACE_IDS[1]|>1*1+0]=[0>1]=0. NUM_VARS=FACE_VAR_COND[0]+FACE_VAR_COND[1]+FACE_VAR_COND[2]=0<2 => do not disable FDAF. Correction sweep is in progress.
(70) Frame 10: The face is still detected (tracked) => CRT_FACE_ID=1, CRT_NUM_FACES=1. History is updated: FACE_IDS[9]=1, NUM_FACES[9]=1; FACE_IDS[ ]={1, 1, 1, 1, 1, 1, 1, 1, 1, 1}; NUM_FACES[ ]={1, 1, 1, 1, 1, 1, 1, 1, 1, 1}; Max(NUM_FACES[ ])=1. FACE_VAR_COND[9]=[|FACE_IDS[9]-FACE_IDS[8]|>1*1+0]=[0>1]=0. NUM_VARS=FACE_VAR_COND[0]+FACE_VAR_COND[1]+FACE_VAR_COND[2]+ . . . +FACE_VAR_COND[9]=0<2 => do not disable FDAF. The correction sweep is done, the face is in focus and the eye distance (ED) is properly updated to ED=20 mm as disclosed in PCT Application PCT/EP2015/076881 (Ref: FN-399).
Example 2
(71) A small printed face in the scene (ED=20 mm) is lost after the lens is moved to the position estimated by FDAF, and is subsequently re-detected.
(72) Frame 1: Face is detected => CRT_FACE_ID=1, CRT_NUM_FACES=1. History is updated: FACE_IDS[0]=1, NUM_FACES[0]=1; FACE_IDS[ ]={1}; NUM_FACES[ ]={1}; Max(NUM_FACES[ ])=1. FACE_VAR_COND[0]=0. NUM_VARS=0<2 => do not disable FDAF. The lens is moved to the focus position estimated by FDAF (this position is wrongly estimated because the reference eye distance (ED_REF) is 70 mm).
(73) Frame 2: The face is lost (by the face detecting step 76) due to the poor image quality, before the correction sweep is done => CRT_FACE_ID=0, CRT_NUM_FACES=0. History is updated: FACE_IDS[1]=0, NUM_FACES[1]=0; FACE_IDS[ ]={1, 0}; NUM_FACES[ ]={1, 0}; Max(NUM_FACES[ ])=1. FACE_VAR_COND[1]=[|FACE_IDS[1]-FACE_IDS[0]|>1*1+0]=[1>1]=0. NUM_VARS=FACE_VAR_COND[0]+FACE_VAR_COND[1]=0<2 => do not disable FDAF. Note that in the embodiment, after losing the face, PL-CAF will be triggered, step 94.
(74) Frame 3: The face is still lost => CRT_FACE_ID=0, CRT_NUM_FACES=0. History is updated: FACE_IDS[2]=0, NUM_FACES[2]=0; FACE_IDS[ ]={1, 0, 0}; NUM_FACES[ ]={1, 0, 0}; Max(NUM_FACES[ ])=1. FACE_VAR_COND[2]=[|FACE_IDS[2]-FACE_IDS[1]|>1*1+0]=[0>1]=0. NUM_VARS=FACE_VAR_COND[0]+FACE_VAR_COND[1]+FACE_VAR_COND[2]=0<2 => do not disable FDAF. Again, having no face in the scene, PL-CAF will be triggered, step 94.
(75) Frame 6: The face is redetected. The face will receive a new ID obtained by incrementing the ID of the latest tracked face (which was 1) => CRT_FACE_ID=2, CRT_NUM_FACES=1. History is updated: FACE_IDS[5]=2, NUM_FACES[5]=1; FACE_IDS[ ]={1, 0, 0, 0, 0, 2}; NUM_FACES[ ]={1, 0, 0, 0, 0, 1}; Max(NUM_FACES[ ])=1. FACE_VAR_COND[5]=[|FACE_IDS[5]-FACE_IDS[4]|>1*1+0]=[2>1]=1. NUM_VARS=FACE_VAR_COND[0]+FACE_VAR_COND[1]+FACE_VAR_COND[2]+ . . . +FACE_VAR_COND[5]=1<2 => do not disable FDAF. The lens is moved to the focus position estimated by FDAF (this position is wrongly estimated because the reference eye distance (ED_REF) is 70 mm).
(76) Frame 7: The face is lost (by the face detecting step 76) due to the poor image quality, before the correction sweep is done => CRT_FACE_ID=0, CRT_NUM_FACES=0. History is updated: FACE_IDS[6]=0, NUM_FACES[6]=0; FACE_IDS[ ]={1, 0, 0, 0, 0, 2, 0}; NUM_FACES[ ]={1, 0, 0, 0, 0, 1, 0}; Max(NUM_FACES[ ])=1. FACE_VAR_COND[6]=[|FACE_IDS[6]-FACE_IDS[5]|>1*1+0]=[2>1]=1. NUM_VARS=FACE_VAR_COND[0]+FACE_VAR_COND[1]+FACE_VAR_COND[2]+ . . . +FACE_VAR_COND[6]=2, which is not lower than THR (2) => disable FDAF. Losing the face, PL-CAF will be triggered and FDAF disabled for the scene, step 88.
(77) It will be seen that using the above values for tuning parameters A and B and THR allows a detected face to be lost once and re-detected, but if it is subsequently lost, face detection is regarded as being unstable and is disabled for a scene. Clearly varying these parameters enables flexibility in the degree to which face detection can vary before it is disabled.
(78) As in the first embodiment, the disclosed techniques are not limited to a subject comprising a face and are equally applicable to attempting to track any subject with an assumed reference dimension.
(79) While the second embodiment has been described separately from the first, it will be appreciated that each of these could run in parallel on a given device. Then, if FDAF is disabled by either one for a given scene, it can supersede the FDAF status determined by the other method. Thus, the methods can be regarded as complementary, rather than mutually exclusive.
(80) As well as assisting in acquiring visible images, embodiments can be employed to acquire in-focus infra-red images of a human iris typically used for performing biometric identification and authentication of a device user.