Face detector training method, face detection method, and apparatuses
09836640 · 2017-12-05
Assignee
Inventors
CPC classification
G06V10/467
PHYSICS
G06F18/2148
PHYSICS
International classification
Abstract
A face detector training method, a face detection method, and apparatuses are provided. In the present invention, during a training phase, a flexible block based local binary pattern feature and a corresponding second classifier are constructed, appropriate second classifiers are selected to generate multiple first classifiers, and multiple layers of first classifiers obtained by a cascading method form a final face detector. During a detection phase, face detection is performed on a to-be-detected image by using a first classifier or a face detector learned during the training process, so that a face is differentiated from a non-face, and the face detection results are combined and output.
Claims
1. A face detector training method implemented by a computer having a processor, the method comprising: performing, by the processor, a training process by: collecting face and non-face images as a training image sample set; extracting a flexible block based local binary pattern (FBLBP) feature of the face and non-face images to form an FBLBP feature set; and using the FBLBP feature and a GentleBoost algorithm to perform training, to obtain a first classifier, wherein the first classifier comprises several optimal second classifiers, and wherein each optimal second classifier is obtained by training using the GentleBoost algorithm; repeating, by the processor, the training process to obtain multiple layers of first classifiers; and cascading, by the processor, the multiple layers of first classifiers to form a face detector.
2. The method according to claim 1, wherein collecting the face and the non-face images as the training image sample set and extracting the FBLBP feature of the face and the non-face images to form the FBLBP feature set comprises: constructing the FBLBP feature to represent co-occurrence information of a relative change of local grayscale of an image, wherein the FBLBP feature comprises several congruent rectangular blocks, a threshold, and a sign bit, wherein the several congruent rectangular blocks comprise one pivot block and at least one neighbor block, wherein the threshold is θ.sub.0 when the sign bit is −1, wherein the threshold is θ.sub.1 when the sign bit is 1, and wherein θ.sub.0 and θ.sub.1 are real numbers; and calculating a sketch value AvgInt.sub.pivotblock of the pivot block in the FBLBP feature and a sketch value AvgInt.sub.neighborblock.sub.
3. The method according to claim 1, wherein in using the FBLBP feature and the GentleBoost algorithm to perform training, to obtain the first classifier, wherein the first classifier comprises the several optimal second classifiers, and each optimal second classifier is obtained by training using the GentleBoost algorithm, a calculation formula (1) of a second classifier is as follows:
4. The method according to claim 3, wherein the method further comprises obtaining, by calculation, the weight ω.sub.i of the i.sup.th training image sample according to a formula (3) and a normalization formula (4), wherein the formula (3) is ω.sub.i=ω.sub.i×e.sup.−y.sup.
5. The method according to claim 3, wherein the method further comprises obtaining, by calculation, a sketch value x of an FBLBP feature of each second classifier according to a formula (5), wherein the formula (5) is as follows:
6. The method according to claim 1, wherein in using the FBLBP feature and the GentleBoost algorithm to perform training to obtain the first classifier, wherein the first classifier comprises the several optimal second classifiers, a process of calculating the optimal second classifiers comprises the following substeps: initially, the FBLBP feature comprises only one pivot block and one neighbor block, wherein an FBLBP feature set of the FBLBP feature that consists of the two rectangular blocks may be obtained using Brute force and traversal, for each FBLBP feature in the FBLBP feature set, calculating a sketch value of the FBLBP feature and an output value of a corresponding second classifier, substituting the obtained output value of the second classifier into a formula (6) to obtain an error J of the second classifier, and selecting a second classifier having a smallest value of the error J as the optimal second classifier, wherein the formula (6) is as follows:
F(x)=F(x)+f.sub.m(x), wherein F(x) is the first classifier, and F(x) is initialized to 0.
7. The method according to claim 6, further comprising: determining a threshold of the first classifier according to a formula (8), wherein the formula (8) is as follows:
8. The method according to claim 2, wherein for each of the neighbor blocks, calculating the differential result according to the sign bit, comparing and quantizing the differential result and the threshold, converting the binary number that consists of the quantization result of each of the neighbor blocks into the decimal number, and saving the decimal number to obtain the sketch value of the FBLBP feature comprises, for each neighbor block, when the sign bit is 1 and a difference between the sketch value AvgInt.sub.pivotblock of the pivot block and a sketch value AvgInt.sub.neighborblock.sub.
9. A face detector training apparatus, comprising: a memory comprising instructions; and a processor coupled to the memory, wherein the instructions cause the processor to be configured to: collect face and non-face images as a training image sample set; extract a flexible block based local binary pattern (FBLBP) feature of the face and non-face images to form an FBLBP feature set; perform training, using the FBLBP feature and using a GentleBoost algorithm, to obtain a first classifier, wherein the first classifier comprises several optimal second classifiers, and wherein each optimal second classifier is obtained by training using the GentleBoost algorithm; repeat a training process to obtain multiple layers of first classifiers; and cascade the multiple layers of first classifiers to form a face detector.
10. The apparatus according to claim 9, wherein the instructions further cause the processor to be configured to: construct the FBLBP feature to represent co-occurrence information of a relative change of local grayscale of an image, wherein the FBLBP feature comprises several congruent rectangular blocks, a threshold, and a sign bit, wherein the several congruent rectangular blocks comprise one pivot block and at least one neighbor block, and when the sign bit is −1, the threshold is θ.sub.0, and when the sign bit is 1, the threshold is θ.sub.1, wherein θ.sub.0 and θ.sub.1 are real numbers; calculate a sketch value AvgInt.sub.pivotblock of the pivot block in the FBLBP feature and a sketch value AvgInt.sub.neighborblock.sub.
11. The apparatus according to claim 9, wherein the instructions further cause the processor to be configured to use the FBLBP feature and the GentleBoost algorithm to perform training to obtain the first classifier, wherein the first classifier comprises several optimal second classifiers, and each optimal second classifier is obtained by training using the GentleBoost algorithm, wherein a calculation formula (1) of a second classifier is as follows:
12. The apparatus according to claim 11, wherein the instructions further cause the processor to be configured to obtain, by calculation, the weight ω.sub.i of the i.sup.th training image sample according to a formula (3) and a normalization formula (4), wherein the formula (3) is ω.sub.i=ω.sub.i×e.sup.−y.sup.
13. The apparatus according to claim 11, wherein the instructions further cause the processor to be configured to obtain, by calculation, a sketch value x of an FBLBP feature of each second classifier in the formula (1) according to a formula (5), wherein the formula (5) is as follows:
14. The apparatus according to claim 9, wherein the FBLBP feature comprises only one pivot block and one neighbor block, wherein an FBLBP feature set of the FBLBP feature that consists of the two rectangular blocks may be obtained using Brute force and traversal, for each FBLBP feature in the FBLBP feature set, wherein the first classifier is configured to: calculate a sketch value of the FBLBP feature and an output value of a corresponding second classifier; substitute the obtained output value of the second classifier into a formula (6) to obtain an error J of the second classifier; select a second classifier having a smallest value of the error J as the optimal second classifier, wherein the formula (6) is as follows:
F(x)=F(x)+f.sub.m(x), wherein F(x) is the first classifier, and F(x) is initialized to 0.
15. The apparatus according to claim 14, wherein the instructions further cause the processor to be configured to determine a threshold of the first classifier according to a formula (8), wherein the formula (8) is as follows:
16. The apparatus according to claim 10, wherein the instructions further cause the processor to be configured to, for each neighbor block, when the sign bit is 1 and a difference between the sketch value AvgInt.sub.pivotblock of the pivot block and a sketch value AvgInt.sub.neighborblock.sub.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENTS
(18) To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the following clearly describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. The described embodiments are a part rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
(19)
(20) Step 1: Collect face and non-face images as a training sample set, and extract an FBLBP feature of the face and non-face images to form an FBLBP feature set.
(21) In this step, an image that includes a face and an image that does not include a face are collected from the Internet or by using a camera as the training sample set, where an image sample that includes a face may be referred to as a positive sample, and an image sample that does not include a face may be referred to as a negative sample. There may be multiple positive samples and negative samples. A specific quantity may be determined according to a specific requirement. After the training sample set is collected, an FBLBP feature is extracted from face and non-face images to form an FBLBP feature set, and an FBLBP weak classifier that can differentiate face and non-face images is constructed for each FBLBP feature in the FBLBP feature set.
(22) Step 2: Use the FBLBP feature and the GentleBoost algorithm to perform training to obtain a first classifier, where the first classifier includes several optimal second classifiers, and each optimal second classifier is obtained by training by using the GentleBoost algorithm.
(23) Generally, in a process of performing training on a feature by using the GentleBoost algorithm, a second classifier is obtained first, where the second classifier is a classifier of relatively low precision; then several optimal classifiers of relatively high precision are selected from multiple second classifiers, and these optimal classifiers are accumulated to obtain a first classifier of relatively higher precision, that is, a strong classifier. Unless otherwise stated below, a first classifier is a strong classifier and a second classifier is a weak classifier.
(24) In this step, the GentleBoost algorithm is used to perform feature selection among all FBLBP features that are enumerated in the step 1 to obtain several optimal weak classifiers, and then accumulate the weak classifiers to obtain a strong classifier.
(25) Step 3: Repeat a training process from the step 1 to the step 2 to obtain multiple layers of first classifiers, and cascade the multiple layers of first classifiers to form a face detector.
(26) It should be noted that, for the first classifier at each layer, that is, the strong classifier, both a maximum number of weak classifiers and a classification precision target may be set. If the maximum number of weak classifiers has not been reached and the precision does not yet meet the set requirement, step 1 is performed again to continue training on the samples. Otherwise, once the number of weak classifiers at a layer reaches the maximum or the precision meets the requirement, training of the first classifier at the next layer is performed (if it is the last layer, training is complete). This avoids the number of weak classifiers becoming excessively large during training of the last several layers, which would hurt efficiency. In addition, for the learning process of each FBLBP weak classifier in the strong classifier at each layer, a maximum number of neighbor blocks may be set for the process of adding neighbor blocks. If the number of neighbor blocks of a single FBLBP weak classifier has not reached the set maximum and classification precision increases after an optimal neighbor block is added, the neighbor block is added; otherwise, adding neighbor blocks stops and the next GentleBoost cycle starts.
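As an illustrative sketch of the per-layer control flow described above (not the patent's exact procedure; the callables and the precision model in the toy run are hypothetical), a layer keeps adding weak classifiers until its precision target is met or a per-layer cap is reached:

```python
# Control-flow sketch: grow one layer's strong classifier until the precision
# target is met or the cap on weak classifiers is reached.
def train_layer(train_one_weak, precision_of, max_weak, target):
    weak_classifiers = []
    while len(weak_classifiers) < max_weak:
        weak_classifiers.append(train_one_weak(weak_classifiers))  # one GentleBoost cycle
        if precision_of(weak_classifiers) >= target:
            break                      # precision met; stop adding weak classifiers
    return weak_classifiers

# Toy run: each added "classifier" raises precision by 0.2; target 0.9, cap 10.
layer = train_layer(lambda ws: object(),
                    lambda ws: 0.2 * len(ws), 10, 0.9)
print(len(layer))  # → 5
```

Capping the layer size is what keeps the later, harder layers from growing without bound, as the paragraph above notes.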
(27) In a face detector training method provided by this embodiment of the present invention, a flexible block based local binary pattern feature and a corresponding weak classifier are constructed, appropriate weak classifiers are found to generate multiple strong classifiers, and multiple layers of strong classifiers that are obtained by using a cascading method form a final face detector. During this process, each FBLBP feature includes one pivot block and at least one neighbor block. The pivot block and the neighbor block are equal in size, and positions of each neighbor block and the pivot block are not strictly limited. Therefore, flexibility is high, robustness is improved, and meanwhile a false detection rate is reduced.
(28) Optionally, in the foregoing Embodiment 1 shown in
(29) Step 1.1: Construct the FBLBP feature to represent co-occurrence information of a relative change of local grayscale of an image, where the FBLBP feature includes several congruent rectangular blocks, a threshold, and a sign bit, where the several congruent rectangular blocks include one pivot block and at least one neighbor block; and when the sign bit is −1, the threshold is θ.sub.0, and when the sign bit is 1, the threshold is θ.sub.1, where θ.sub.0 and θ.sub.1 are real numbers.
(30) In this substep, the FBLBP feature that is used to represent the co-occurrence information of the relative change of the local grayscale of the image is constructed. Each FBLBP feature includes several congruent rectangular blocks, a sign bit, and a threshold corresponding to the sign bit. For example, when the sign bit is −1, the threshold is θ.sub.0; when the sign bit is 1, the threshold is θ.sub.1, where θ.sub.0 and θ.sub.1 are real numbers. The several rectangular blocks include one pivot block and at least one neighbor block. Referring to
(31) As shown in
(32) Step 1.2: Calculate a sketch value AvgInt.sub.pivotblock of the pivot block in the FBLBP feature and a sketch value AvgInt.sub.neighborblock.sub.
(33) The integral image is a matrix representation that describes global information; once it is computed, the sum, and hence the average grayscale, of any rectangular area can be read in constant time. In this step, after the FBLBP feature is constructed, a sketch value of each FBLBP feature is calculated by using the integral image technology. The average grayscale value of each rectangular block in the FBLBP feature is calculated first, and the sketch value of the corresponding rectangular block is represented by this average grayscale value. For example, the sketch value AvgInt.sub.pivotblock of the pivot block is represented by the average grayscale of the area in which the pivot block is located, and the sketch value AvgInt.sub.neighborblock of the neighbor block is represented by the average grayscale of the area in which the neighbor block is located. After the sketch value of each rectangular block is calculated, for each neighbor block, a difference between the sketch value AvgInt.sub.pivotblock of the pivot block and the sketch value AvgInt.sub.neighborblock of the neighbor block is calculated, and a differential result is obtained according to the sign bit. The differential result is then compared with the threshold corresponding to the sign bit and quantized according to the comparison result. For an FBLBP feature, after the quantization results of all its neighbor blocks are obtained, the binary number that consists of all the quantization results is converted into a decimal number, and the decimal number is used as the sketch value of this FBLBP feature.
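The integral-image step above can be sketched as follows (a minimal pure-Python illustration; a practical detector would use an optimized array library, and the function names are hypothetical):

```python
# Build a summed-area table once, then read the average grayscale of any
# rectangular block in constant time from four corner lookups.
def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]   # padded with a zero row/column
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def block_average(ii, top, left, height, width):
    # Sum over the block via four corner lookups, then divide by the area.
    s = (ii[top + height][left + width] - ii[top][left + width]
         - ii[top + height][left] + ii[top][left])
    return s / (height * width)

# A 4x4 image whose top-left 2x2 block sums to 10, so its average is 2.5:
img = [[1, 2, 0, 0],
       [3, 4, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
ii = integral_image(img)
print(block_average(ii, 0, 0, 2, 2))  # → 2.5
```

This is how the sketch value of each pivot and neighbor block (its average grayscale) can be obtained cheaply for every candidate block position.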
(34) Optionally, in the foregoing Embodiment 1 shown in
(35) f_m(x) = Σ_{j=0}^{2^K−1} a_j·δ(x = j) (1)
where, f.sub.m(x) is the m.sup.th second classifier, that is, the m.sup.th weak classifier, x is a sketch value of an FBLBP feature of the weak classifier, K is the number of neighbor blocks of this FBLBP feature, and a.sub.j is output of a weak classifier, where a.sub.j may be calculated according to a formula (2):
(36) a_j = [Σ_{i=1}^{N} ω_i·y_i·δ(x_i = j)] / [Σ_{i=1}^{N} ω_i·δ(x_i = j)] (2)
where, 0≦j≦2.sup.K−1 and j is an integer; y.sub.i is a class of the i.sup.th training sample, where when the training sample is a face image, a value of y.sub.i is 1, and when the training sample is not a face image, the value of y.sub.i is −1; δ( ) is a Kronecker function, where if x.sub.i=j is true, output is 1, and if x.sub.i=j is false, output is 0; i is the i.sup.th training image sample; and ω.sub.i is the weight of the i.sup.th training image sample.
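The weak classifier of formulas (1)-(2) can be sketched as a lookup table over the 2^K possible sketch values, with each entry a_j the weighted mean label of the training samples falling into bin j (a minimal illustration; the function name is hypothetical):

```python
# Fit the lookup-table weak classifier: a_j = weighted mean of labels in bin j.
def fit_weak_classifier(xs, ys, weights, K):
    table = []
    for j in range(2 ** K):
        num = sum(w * y for x, y, w in zip(xs, ys, weights) if x == j)
        den = sum(w for x, w in zip(xs, weights) if x == j)
        table.append(num / den if den > 0 else 0.0)   # a_j, formula (2)
    return table                                      # f_m(x) = table[x], formula (1)

# Toy data with K = 1 (two bins): bin 0 holds two faces, bin 1 one non-face.
xs, ys, ws = [0, 0, 1], [1, 1, -1], [0.5, 0.25, 0.25]
print(fit_weak_classifier(xs, ys, ws, 1))  # → [1.0, -1.0]
```

Evaluating the weak classifier at detection time is then a single table lookup on the FBLBP sketch value.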
(37) Further, in the foregoing embodiment, the weight ω.sub.i of the i.sup.th training image sample is obtained, by calculation, according to a formula (3) and a normalization formula (4).
(38) The formula (3) is ω_i = ω_i·e^(−y_i·f_m(x_i)).
(39) The formula (4) is
(40) ω_i = ω_i / Σ_{n=1}^{N} ω_n (4)
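A minimal sketch of the weight update of formulas (3)-(4), assuming the standard GentleBoost form of the exponential update (the original formula is truncated here): each weight is multiplied by exp(−y_i·f_m(x_i)) and the weights are then renormalized to sum to 1.

```python
import math

# Formula (3): multiply each weight by exp(-y_i * f_m(x_i));
# formula (4): renormalize so the weights sum to 1.
def update_weights(weights, ys, fm_outputs):
    new_w = [w * math.exp(-y * f) for w, y, f in zip(weights, ys, fm_outputs)]
    total = sum(new_w)
    return [w / total for w in new_w]

# Misclassified samples (y * f < 0) gain weight relative to correct ones:
w = update_weights([0.5, 0.5], [1, -1], [1.0, 1.0])
print(w[1] > w[0])  # → True
```

This reweighting is what focuses each subsequent GentleBoost cycle on the samples the current strong classifier still gets wrong.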
(41) Further, in the foregoing embodiment, a sketch value x of an FBLBP feature of each second classifier in the formula (1) is obtained, by calculation, according to a formula (5). The formula (5) is as follows:
(42) x = Σ_{k=1}^{K} 2^{k−1}·FBLBP_{sign,k} (5)
where K is the number of neighbor blocks;
if sign=1, FBLBP_{sign,k} = δ((AvgInt_pivotblock − AvgInt_neighborblock_k) ≥ θ_1);
if sign=−1, FBLBP_{sign,k} = δ(−(AvgInt_pivotblock − AvgInt_neighborblock_k) ≥ θ_0);
where δ( ) is a Kronecker function; when input is true, output is 1; otherwise, when input is false, output is 0.
(43) Optionally, in the foregoing Embodiment 1, in the step 2 of using the FBLBP feature and the GentleBoost algorithm to perform training so as to obtain a first classifier, where the first classifier consists of several optimal second classifiers, a process of calculating the optimal second classifiers includes the following substeps.
(44) Step 2.1: Initially, the FBLBP feature includes only one pivot block and one neighbor block, where an FBLBP feature set of the FBLBP feature that consists of the two rectangular blocks may be obtained by using Brute force and traversal; for each FBLBP feature in the FBLBP feature set, calculate a sketch value of the FBLBP feature and an output value of a corresponding second classifier; substitute the obtained output value of the second classifier into a formula (6) to obtain an error J of the second classifier; and select a second classifier having a smallest value of the error J as the optimal second classifier, where the formula (6) is as follows:
(45) J = Σ_{i=1}^{N} ω_i·(y_i − f_m(x_i))² (6)
where, J is an optimized objective function, that is, a weighted classification error, and N is the total number of training samples.
(46) In this step, an FBLBP feature that includes only one pivot block and one neighbor block is shown in
(47) Step 2.2: After the FBLBP feature that includes only one pivot block and one neighbor block is determined according to the step 2.1, increase the number of neighbor blocks for the FBLBP feature by traversing, in an image, rectangular blocks that have a same size as the pivot block; re-calculate a value x by substituting the sketch value of the FBLBP feature into the formula (5); calculate the error J by substituting the value x into the formula (6); if J further decreases, incorporate a new neighbor block into a current optimal second classifier; if J does not decrease, stop incorporation, directly output the current feature, and meanwhile update weight and enter a next GentleBoost cycle.
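The greedy loop of step 2.2 can be sketched as follows. This is a simplified illustration of the control flow only: candidate neighbor blocks are represented by precomputed bit vectors (one bit per training sample) standing in for the quantized pivot/neighbor comparison, and a block is kept only if it lowers the weighted squared error J of formula (6); the function names and data layout are assumptions.

```python
def weighted_error(xs, ys, weights):
    # Fit the lookup-table weak classifier on sketch values xs, then
    # evaluate J = sum_i w_i * (y_i - f_m(x_i))^2 per formula (6).
    table = {}
    for j in set(xs):
        den = sum(w for x, w in zip(xs, weights) if x == j)
        num = sum(w * y for x, y, w in zip(xs, ys, weights) if x == j)
        table[j] = num / den
    return sum(w * (y - table[x]) ** 2 for x, y, w in zip(xs, ys, weights))

def grow_feature(candidates, ys, weights):
    chosen = [0]                                  # start from candidate block 0
    def sketch(selection):                        # pack selected bits into x per sample
        return [sum(candidates[c][i] << b for b, c in enumerate(selection))
                for i in range(len(ys))]
    best_J = weighted_error(sketch(chosen), ys, weights)
    for c in range(1, len(candidates)):           # try adding each further block
        J = weighted_error(sketch(chosen + [c]), ys, weights)
        if J < best_J:                            # keep the block only if J decreases
            best_J, chosen = J, chosen + [c]
    return chosen, best_J

# Two candidate blocks; the second one separates the remaining mixed bin.
candidates = [[1, 1, 1, 0], [1, 1, 0, 0]]
chosen, J = grow_feature(candidates, [1, 1, -1, -1], [0.25] * 4)
print(chosen, round(J, 6))  # → [0, 1] 0.0
```

In the patent's procedure the search additionally stops as soon as J fails to decrease and the weight update of formulas (3)-(4) then starts the next GentleBoost cycle.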
(48) A calculation formula (7) for combining an optimal second classifier of each GentleBoost cycle into the first classifier in the step 2 is as follows:
F(x)=F(x)+f.sub.m(x),
where, F(x) is the first classifier, that is, the strong classifier, and F(x) is initialized to 0.
(49) Further, a threshold of the first classifier may be determined according to a formula (8), where the formula (8) is as follows:
(50)
where i.sub.1 is the i.sub.1.sup.th training image sample that includes a face; i.sub.2 is the i.sub.2.sup.th training image sample that includes a non-face; and th is the threshold of the strong classifier.
(51) Optionally, in the foregoing Embodiment 1, in the step 1.2, for each of the neighbor blocks, calculating a differential result according to the sign bit, and comparing and quantizing the differential result and the threshold; and finally, converting a binary number that consists of quantization results of all the neighbor blocks into a decimal number, and saving the decimal number to obtain a sketch value of the FBLBP feature includes, for each neighbor block, when the sign bit is 1, if a difference between the sketch value AvgInt.sub.pivotblock of the pivot block and a sketch value AvgInt.sub.neighborblock.sub.
(52) The following uses
(53)
(54) Referring to
(55) When the sign bit is 1, differences between the sketch value AvgInt.sub.pivotblock of the pivot block and the sketch value AvgInt.sub.neighborblock of the neighbor block 1 to the neighbor block 8 are 2, 1, −3, −2, −10, −11, 3, and 1 in sequence. The differences between the sketch value AvgInt.sub.pivotblock of the pivot block and the sketch values AvgInt.sub.neighborblock.sub.
(56) When the sign bit is −1, differences between the sketch value AvgInt.sub.pivotblock of the pivot block and the sketch values AvgInt.sub.neighborblock.sub.
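The sign-bit-1 case of the worked example above can be redone numerically (assuming θ_1 = 0 for the comparison, since the threshold value is not given here, and packing neighbor 1 into the lowest bit position per formula (5)):

```python
# Quantize the eight pivot-minus-neighbor differences to bits, then pack the
# bits into a decimal sketch value (neighbor 1 in the lowest bit position).
diffs = [2, 1, -3, -2, -10, -11, 3, 1]        # pivot minus neighbors 1..8
theta_1 = 0                                    # assumed threshold for sign = 1
bits = [1 if d >= theta_1 else 0 for d in diffs]
sketch = sum(b << k for k, b in enumerate(bits))
print(bits, sketch)  # → [1, 1, 0, 0, 0, 0, 1, 1] 195
```

The resulting decimal number is the sketch value x that indexes the weak classifier's lookup table.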
(57) In addition, based on the foregoing face detector training method, an embodiment of the present invention further provides a method for performing face detection by using a face detector that is obtained by using the face detector training method. In this method, face detection is performed on a to-be-detected image by using a first classifier or a face detector that is learned from a training process, so that a face is differentiated from a non-face. Referring to
(58) Step 4: Traverse a to-be-detected image to obtain a to-be-detected subimage set.
(59) In this step, a preset ratio may be set according to the scenario, and preset ratios may differ between scenarios. For example, assume the to-be-detected image is a 100×200 image. Then 24×24 pixels may be used as a reference, and the detection window is continuously enlarged at a ratio of 1:1.1 (up to 100×100 at maximum, that is, up to the smaller of the length and width of the to-be-detected image). The obtained window is used to traverse the to-be-detected image at a step of 2, and the resulting sub-windows (24×24, 26×26, . . . , 100×100) of the to-be-detected image are all scaled to a square whose length and width are both 24 pixels, so as to obtain the to-be-detected subimage set. In another possible implementation manner, the preset ratio may also be another value, for example, 1:1.2, which is not limited in the present invention.
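The scale enumeration described above can be sketched as follows, assuming the 24-pixel base and 1:1.1 ratio of the example (the function name is hypothetical):

```python
# Enumerate detection-window sizes: start from min_size and grow by `ratio`
# until the window would exceed the smaller side of the image (max_size).
def window_sizes(min_size, max_size, ratio=1.1):
    sizes, s = [], float(min_size)
    while int(s) <= max_size:
        sizes.append(int(s))
        s *= ratio
    return sizes

# For a 100x200 image the smaller side is 100, so windows run 24 .. 100:
sizes = window_sizes(24, 100)
print(sizes[0], sizes[-1])  # → 24 100
```

Each enumerated window is then slid over the image at a step of 2, and every sub-window is rescaled to 24×24 before classification.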
(60) Step 5: Bring each to-be-detected subimage in the to-be-detected subimage set into a face detector, and calculate, layer by layer, output of a first classifier at each layer in the face detector.
(61) Step 6: For a to-be-detected subimage, consider that the to-be-detected subimage is a non-face if output of a first classifier at any layer of the face detector is less than a threshold that is of the first classifier and is obtained by training, where only a to-be-detected subimage that passes determining of classifiers at all layers is considered as a face.
(62) In this step, a next layer is entered only after a to-be-detected subimage passes the strong classifier at the previous layer. As a result, a large number of non-target subimages, for example, subimages that do not include a face, can be quickly excluded at the first several layers, saving time for detection of the target subimages. Referring
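The early-rejection evaluation of steps 5-6 can be sketched as follows (layers are modeled as (score function, threshold) pairs; the score functions in the toy cascade are stand-ins, not real classifiers):

```python
# A subimage is a face only if every layer's strong-classifier score clears
# that layer's threshold; the first failing layer rejects it immediately.
def cascade_is_face(subimage, layers):
    for score_fn, threshold in layers:
        if score_fn(subimage) < threshold:
            return False        # rejected here; later layers never run
    return True

# Toy cascade with two stand-in score functions:
layers = [(lambda img: sum(img), 3), (lambda img: len(img), 2)]
print(cascade_is_face([2, 2], layers), cascade_is_face([1, 1], layers))  # → True False
```

Because most non-face sub-windows fail an early layer, only a small fraction of candidates ever reach the deeper, more expensive layers.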
(63) Step 7: Combine all detection results in the step 6 to output a position of a face in the to-be-detected image.
(64) A non-maximum suppression method may be used to implement combination of all detection results in the step 6.
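A common form of the non-maximum suppression mentioned above can be sketched as follows (the patent names the method but not its exact variant; the overlap measure and threshold here are assumptions): keep the highest-scoring box, drop boxes overlapping it beyond an IoU threshold, and repeat.

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); intersection-over-union of the two areas.
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    # Greedily keep the best-scoring box, suppress its heavy overlaps, repeat.
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
print(nms(boxes, [0.9, 0.8, 0.7]))  # → [0, 2]
```

The two heavily overlapping detections of the same face collapse into one output box, while the distant detection survives.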
(65) According to the face detection method provided by this embodiment of the present invention, face detection is performed on a to-be-detected image by using a strong classifier or a face detector that is learned in a training process, so that a face is differentiated from a non-face, and face detection results are combined and output. During this process, each FBLBP feature includes one pivot block and at least one neighbor block. The pivot block and the neighbor block are equal in size, and positions of each neighbor block and the pivot block are not strictly limited. Therefore, flexibility is high, robustness is improved, and meanwhile a false detection rate is reduced.
(66) Optionally, in the foregoing embodiment shown in
(67) Step 4.1: Traverse the to-be-detected image in a detection window with a preset length-width ratio.
(68) Step 4.2: Enlarge the detection window by the preset length-width ratio according to a preset step to traverse the to-be-detected image, and repeat this operation to traverse the to-be-detected image in a different detection window until length of the detection window is greater than or equal to length of the to-be-detected image or until width of the detection window is greater than or equal to width of the to-be-detected image.
(69) Step 4.3: Perform normalization processing on subimages that are obtained by traversing the to-be-detected image in the step 4.1 and the step 4.2, so that a length-width ratio of each subimage conforms to the preset length-width ratio, and use a set that consists of all normalized subimages as the to-be-detected subimage set.
(70) To clearly compare beneficial effects of a face detection method of the present invention with those of a face detection method in the prior art, the following uses Table 1 for a detailed comparison. Table 1 is an effect comparison table of a face detection method provided by the present invention and a face detection method in the prior art.
(71) TABLE 1. Face detection results per test dimension (number of correct detections/number of false detections).

Test Dimension (Number of Images) | Competitor in China | International Competitor | FBLBP
Age - Baby (113) | 104/12 | 111/5 | 111/1
Age - Children (105) | 104/11 | 103/3 | 108/2
Age - Elder (107) | 102/13 | 105/3 | 108/3
Backlight (145) | 139/4 | 143/2 | 137/0
Complex Angle (116) | 112/2 | 116/1 | 116/0
Exp - Frown (92) | 92/1 | 92/0 | 91/0
Exp - Full smile1 (102) | 102/1 | 102/0 | 102/0
Exp - Half smile (113) | 113/0 | 110/0 | 113/0
Exp - Other (63) | 63/0 | 62/1 | 63/0
Maxnum (4) | 69/0 | 14/0 | 81/1
No face (32) | 0/1 | 0/1 | 0/0
Normal (133) | 151/1 | 151/0 | 151/1
Occa - Glasses (117) | 112/5 | 121/0 | 121/0
Occa - Hair (55) | 50/1 | 52/0 | 57/1
Occa - Hand (43) | 25/6 | 23/3 | 28/1
Occa - Hat (103) | 93/20 | 99/7 | 101/2
Occa - Sunglasses (99) | 75/12 | 95/12 | 105/4
Pitch (56) | 56/0 | 56/0 | 56/0
Race - Black (166) | 159/8 | 176/3 | 171/2
Race - Brown (70) | 70/3 | 70/0 | 70/0
Race - White (144) | 144/1 | 143/1 | 144/1
Rotation - ( ) (111) | 110/1 | 110/1 | 111/0
Yaw - 22.5 (130) | 122/2 | 128/0 | 130/1
Total (2219) | 2167/105 | 2182/43 | 2275/20
(72) It may be learned from Table 1 that the true positive rate of the FBLBP feature-based face detection method in this embodiment of the present invention is higher than the true positive rates of the international competitor and the competitor in China, and its false detection rate is lower than theirs.
(73) In addition, to further compare beneficial effects of the face detection method of the present invention and those of the face detection method in the prior art clearly, the following describes a test performed by using an international open FDDB. For a specific result, refer to
(74)
(75)
(76) With a face detector training apparatus provided by this embodiment of the present invention, a flexible block based local binary pattern feature and a corresponding weak classifier are constructed, appropriate weak classifiers are found to generate multiple strong classifiers, and multiple layers of strong classifiers that are obtained by using a cascading method form a final face detector. During this process, each FBLBP feature includes one pivot block and at least one neighbor block. The pivot block and the neighbor block are equal in size, and positions of each neighbor block and the pivot block are not strictly limited. Therefore, flexibility is high, robustness is improved, and meanwhile a false detection rate is reduced.
(77)
(78) Optionally, in an embodiment of the present invention, the first classifier module 12 is configured to use the FBLBP feature and the GentleBoost algorithm to perform training to obtain the first classifier, where the first classifier consists of several optimal second classifiers, and each optimal second classifier is obtained by training by using the GentleBoost algorithm, where a calculation formula (1) of a second classifier is as follows:
(79) f_m(x) = Σ_{j=0}^{2^K−1} a_j·δ(x = j) (1)
where f.sub.m(x) is the m.sup.th second classifier, x is a sketch value of an FBLBP feature of the second classifier, K is the number of neighbor blocks of the FBLBP feature, and a.sub.j is output of a second classifier, where a.sub.j is calculated according to a formula (2):
(80) a_j = [Σ_{i=1}^{N} ω_i·y_i·δ(x_i = j)] / [Σ_{i=1}^{N} ω_i·δ(x_i = j)] (2)
where 0≦j≦2.sup.K−1 and j is an integer; y.sub.i is a class of the i.sup.th training sample, where when the training sample is a face image, a value of y.sub.i is 1, and when the training sample is not a face image, the value of y.sub.i is −1; δ( ) is a Kronecker function, where if x.sub.i=j is true, output is 1, and if x.sub.i=j is false, output is 0; i is the i.sup.th training image sample; and ω.sub.i is weight of the i.sup.th training image sample.
(81) Optionally, in an embodiment of the present invention, the first classifier module 12 is configured to obtain, by calculation, the weight ω.sub.i of the ith training image sample according to a formula (3) and a normalization formula (4), where the formula (3) is ω.sub.i=ω.sub.i×e.sup.−y.sup.
(82) The formula (4) is ω_i = ω_i / Σ_{n=1}^{N} ω_n.
(83) Optionally, in an embodiment of the present invention, the first classifier module 12 is configured to obtain, by calculation, a sketch value x of an FBLBP feature of each second classifier in the formula (1) according to a formula (5), where the formula (5) is as follows:
(84) x = Σ_{k=1}^{K} 2^{k−1}·FBLBP_{sign,k} (5)
where K is the number of neighbor blocks;
if sign=1, FBLBP_{sign,k} = δ((AvgInt_pivotblock − AvgInt_neighborblock_k) ≥ θ_1);
if sign=−1, FBLBP_{sign,k} = δ(−(AvgInt_pivotblock − AvgInt_neighborblock_k) ≥ θ_0);
where δ( ) is a Kronecker function; when input is true, output is 1; and otherwise, when input is false, output is 0.
(85) Referring to
(86) J = Σ_{i=1}^{N} ω_i·(y_i − f_m(x_i))² (6)
where J is an optimized objective function, that is, a weighted classification error, and N is the total number of training samples; a cyclic calculation unit 122 configured to, after the initial calculation unit 121 determines the FBLBP feature that includes only one pivot block and one neighbor block, increase the number of neighbor blocks for the FBLBP feature by traversing, in an image, rectangular blocks that have a same size as the pivot block, re-calculate a value x by substituting the sketch value of the FBLBP feature into the formula (5), calculate the error J by substituting the value x into the formula (6), incorporate a new neighbor block into a current optimal second classifier if J further decreases, and, if J does not decrease, stop incorporation, directly output the current feature, and meanwhile update the weight and enter a next GentleBoost cycle; and a combining unit 123 configured to combine an optimal second classifier that is obtained in each cycle by the cyclic calculation unit 122 into the first classifier according to a formula (7), where the formula (7) is as follows:
F(x) = F(x) + f_m(x),  (7)
where F(x) is the first classifier, f_m(x) is the optimal second classifier obtained in the mth cycle, and F(x) is initialized to 0.
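The cycle described above can be sketched as follows. The weighted error J is assumed to take the squared-error form standard in GentleBoost, J = Σ_i ω_i·(y_i − f_m(x_i))², and candidate generation is simplified to a precomputed list of per-sample responses, each corresponding to a feature with one more neighbor block:

```python
import numpy as np

def weighted_error(w, y, fm_x):
    """Assumed form of formula (6): weighted squared classification error."""
    return float(np.sum(w * (y - fm_x) ** 2))

def grow_second_classifier(candidates, w, y):
    """Greedy growth mirroring the cyclic calculation unit 122: accept each
    successive candidate (one more neighbor block) while J keeps decreasing;
    stop at the first candidate that fails to decrease J."""
    best, best_J = None, float("inf")
    for fm_x in candidates:
        J = weighted_error(w, y, fm_x)
        if J < best_J:            # J further decreases: incorporate the block
            best, best_J = fm_x, J
        else:                     # J does not decrease: stop incorporation
            break
    return best, best_J
```

The accepted response f_m(x) would then be added into the first classifier, F(x) = F(x) + f_m(x), per formula (7).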
(87) Optionally, referring to
(88)
where i_1 indexes the training image samples that include a face, i_2 indexes the training image samples that include a non-face, and th is the threshold of the first classifier.
(89) Optionally, in an embodiment of the present invention, the calculating unit 112 is configured to, for each neighbor block, when the sign bit is 1, if a difference between the sketch value AvgInt_pivotblock of the pivot block and the sketch value AvgInt_neighborblock_k of the kth neighbor block is greater than or equal to the threshold θ_1, set the kth binary bit of the FBLBP feature to 1, and otherwise set it to 0; and when the sign bit is −1, if the difference is less than or equal to the threshold θ_0, set the kth binary bit to 1, and otherwise set it to 0.
(90)
(91) With the face detection apparatus provided by this embodiment of the present invention, face detection is performed on a to-be-detected image by using a first classifier (strong classifier) or a face detector that is learned in the training process, so that a face is differentiated from a non-face, and the face detection results are combined and output. During this process, each FBLBP feature includes one pivot block and at least one neighbor block. The pivot block and the neighbor blocks are equal in size, and the relative positions of the neighbor blocks with respect to the pivot block are not strictly limited. Therefore, flexibility is high, robustness is improved, and meanwhile the false detection rate is reduced.
(92) Optionally, in the foregoing embodiment, the traversing module 31 is configured to: traverse the to-be-detected image in a detection window with a preset length-width ratio; enlarge the detection window, while keeping the preset length-width ratio, according to a preset step, and traverse the to-be-detected image again; and repeat this operation to traverse the to-be-detected image in detection windows of different sizes, until the length of the detection window is greater than or equal to the length of the to-be-detected image or the width of the detection window is greater than or equal to the width of the to-be-detected image. The traversing module 31 is further configured to perform normalization processing on the subimages that are obtained by traversing the to-be-detected image as described above, so that the length-width ratio of each subimage conforms to the preset length-width ratio, and to use the set that consists of all normalized subimages as the to-be-detected subimage set.
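The multi-scale traversal described above can be sketched as follows; the scale factor, step, and window sizes are illustrative values, not taken from the patent:

```python
def traverse(image_w, image_h, win_w, win_h, scale=1.25, step=4):
    """Slide a detection window of fixed length-width ratio over the image,
    then enlarge it (preserving the ratio) and slide again, until the window
    no longer fits in the image. Each (x, y, w, h) tuple identifies a
    subimage to be normalized and passed to the classifier."""
    windows = []
    while win_w <= image_w and win_h <= image_h:
        y = 0
        while y + win_h <= image_h:
            x = 0
            while x + win_w <= image_w:
                windows.append((x, y, win_w, win_h))
                x += step
            y += step
        win_w, win_h = int(win_w * scale), int(win_h * scale)
    return windows
```

For a 20×20 image, a 10×10 starting window, a scale factor of 2, and a step of 5, this yields nine 10×10 subimages and one 20×20 subimage.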
(93) A person of ordinary skill in the art may understand that all or a part of the steps of the method embodiments may be implemented by a program instructing relevant hardware. The program may be stored in a computer readable storage medium. When the program runs, the steps of the method embodiments are performed. The foregoing storage medium includes any medium that can store program code, such as a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, or an optical disc.
(94) Finally, it should be noted that the foregoing embodiments are merely intended for describing the technical solutions of the present invention, but not for limiting the present invention. Although the present invention is described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that they may still make modifications to the technical solutions described in the foregoing embodiments or make equivalent replacements to some or all technical features thereof, without departing from the scope of the technical solutions of the embodiments of the present invention.