Method for estimating a 3D vector angle from a 2D face image, method for creating face replacement database, and method for replacing face image
09639738 · 2017-05-02
CPC classification: G06V40/171 (Physics)
Abstract
A method for estimating a 3D vector angle from a 2D face image, a method for creating a face replacement database, and a method for replacing a face image include the steps of capturing a face image, detecting a rotation angle of the face image, defining a region to be replaced in the face image, creating a face database for storing replaced images corresponding to the region to be replaced, and pasting one of the replaced images having the corresponding rotation angle of the face image into a target replacing region. Therefore, the region to be replaced of a static or dynamic face image can be replaced by a replaced image quickly, using a single camera, without requiring a manual setting of the feature points of a target image. These methods support face replacement at different angles and compensate for color differences to give the replaced image a natural look.
Claims
1. A method for estimating a 3D vector angle from a 2D face image, comprising the steps of: creating a feature vector template including a feature vector model of a plurality of different rotation angles; detecting corners of the eyes and corners of the mouth in a face image to be processed, and defining the corners of the eyes and the corners of the mouth as vertices of a quadrilateral, respectively; defining a sharp point displaced in an orthogonal direction relative to the quadrilateral plane, and converting the vertices into 3D coordinates, wherein the sharp point and the vertices of the quadrilateral form a quadrangular pyramid; computing four 3-D vectors, each extending from the sharp point to a respective one of the four vertices, to obtain a vector set; and matching the vector set with the feature vector model to obtain the angle having the shortest distance between a feature vector model and the vector set of said four 3-D vectors, and defining that angle value as a rotation angle of the input face image.
2. The face angle estimation method of claim 1, further comprising the steps of: computing the height and the coordinates of the centroid of the quadrilateral; and extending the height of the quadrilateral to a predetermined multiple from the centroid of the quadrilateral towards the orthogonal direction relative to the quadrilateral plane to define the sharp point.
3. The face angle estimation method of claim 1, wherein the feature vector model is defined by multiplying vector rotation matrices in a range of the rotation angle with respect to the X-axis, Y-axis and Z-axis.
4. The face angle estimation method of claim 3, further comprising the steps of: defining a standard eyes distance of the feature vector template; and computing the distance of the vertices of the corners of eye to perform a scale normalization of the quadrangular pyramids according to the standard eyes distance and the distance between the vertices of the corners of eye.
5. The face angle estimation method of claim 1, further comprising the steps of: detecting two eye regions and a mouth region of the face image, and searching for the corners of the eyes and the corners of the mouth in the eye regions and the mouth region respectively.
6. The face angle estimation method of claim 5, further comprising the steps of: defining a first region of interest (ROI) in each of the left and right halves of the upper part of the face image, and detecting the eyes in the first ROI.
7. The face angle estimation method of claim 5, further comprising the steps of: defining a second region of interest situated at the lower one-third portion of the face image, and detecting the mouth region in the second region of interest.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
(11) The present invention will become more apparent from the following description when taken in connection with the accompanying drawings which show, for purposes of illustration only, a preferred embodiment in accordance with the present invention.
(12) With reference to
(13) S001: Create a feature vector template, wherein the feature vector template includes a feature vector model of a plurality of different rotation angles and a standard eyes distance, which is the distance between the two eyes of the face image and is used for scale normalization. In general, the feature vector model is created offline in advance. A general user's face rotation is performed within a range of rotation angles (say from −30° to 30°) with respect to the X-axis, Y-axis and Z-axis, so that if the X-axis, Y-axis and Z-axis are quantized into rotation units of N.sub.1, N.sub.2 and N.sub.3 respectively, then a feature vector model containing N feature vectors is formed, and its mathematical equation 1 is given below:
N=N.sub.1×N.sub.2×N.sub.3 [Mathematical Equation 1]
(14) Therefore, the vector rotation matrices within the range of the rotation angles with respect to the X-axis, Y-axis and Z-axis are multiplied to define the feature vector model.
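The offline template construction of step S001 can be sketched as follows; the ±30° range, the 10° quantization step, and the base pyramid vectors of a frontal face are illustrative assumptions rather than values fixed by the specification:

```python
# Sketch of the offline feature vector template (step S001).
# Assumed: angles quantized in 10-degree units from -30 to 30 degrees,
# and a toy frontal-face pyramid as the base vector set.
import numpy as np
from itertools import product

def rotation_matrix(rx, ry, rz):
    """Compose rotations about the X, Y and Z axes (angles in degrees)."""
    ax, ay, az = np.radians([rx, ry, rz])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def build_template(base_vectors, step=10, lo=-30, hi=30):
    """Rotate the base vector set over the quantized angle grid,
    yielding N = N1 * N2 * N3 feature vectors (Mathematical Equation 1)."""
    angles = range(lo, hi + 1, step)
    return {(rx, ry, rz): base_vectors @ rotation_matrix(rx, ry, rz).T
            for rx, ry, rz in product(angles, angles, angles)}

# Assumed base vector set: vectors from the sharp point O to the four
# eye-corner / mouth-corner vertices of a frontal face.
base = np.array([[-1.0,  1.0, -1.5],   # O -> P1 (eye corner)
                 [ 1.0,  1.0, -1.5],   # O -> P2 (eye corner)
                 [-0.8, -1.0, -1.5],   # O -> P3 (mouth corner)
                 [ 0.8, -1.0, -1.5]])  # O -> P4 (mouth corner)
template = build_template(base)        # N1 = N2 = N3 = 7, so N = 343
```

With a 10° step from −30° to 30°, each axis has 7 quantized units, giving N = 7×7×7 = 343 feature vectors per Mathematical Equation 1.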
(15) S002: Capture a static or dynamic image by a camera such as a webcam, or capture a single face image 1 through a transmission network, as shown in
(16) S003: Define the corners of the eyes and the corners of the mouth as the vertices P.sub.1, P.sub.2, P.sub.3, P.sub.4 of a quadrilateral, respectively. Compute the height and the coordinates of the centroid of the quadrilateral, extend the height to a predetermined multiple from the centroid towards the orthogonal direction relative to the quadrilateral plane to define a sharp point O, and convert the vertices into 3D coordinates, so that the sharp point O and the vertices form a quadrangular pyramid. The four 3-D vectors {right arrow over (OP.sub.1)}, {right arrow over (OP.sub.2)}, {right arrow over (OP.sub.3)}, {right arrow over (OP.sub.4)} from the sharp point O to the vertices P.sub.1, P.sub.2, P.sub.3, P.sub.4 are computed to obtain a vector set. In
(17) S004: Compare the vector set with the feature vector model to obtain the angle whose feature vectors have the shortest distance to the vector set, and define that angle value as the rotation angle of the input face image 1.
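Steps S003 and S004 can be sketched as follows, assuming the quadrilateral is lifted into the z = 0 plane and the sharp point O is placed above its centroid; the height multiple, the corner coordinates, and the toy two-entry template are illustrative assumptions:

```python
# Sketch of steps S003-S004: build the quadrangular pyramid's vector set
# from the four detected corner points and match it against the template.
import numpy as np

def vector_set(vertices_2d, height_multiple=1.0):
    """Vectors from the sharp point O to the vertices P1..P4.

    The quadrilateral is lifted into the z = 0 plane; O is placed along
    the orthogonal (z) direction above the centroid, at a distance of
    height_multiple times the quadrilateral's height (an assumption)."""
    P = np.hstack([vertices_2d, np.zeros((4, 1))])   # convert to 3D coords
    centroid = P.mean(axis=0)
    height = vertices_2d[:, 1].max() - vertices_2d[:, 1].min()
    O = centroid + np.array([0.0, 0.0, height_multiple * height])
    return P - O                                     # O->P1 .. O->P4

def match_angle(vset, template):
    """Return the template angle whose feature vectors have the shortest
    (Euclidean) distance to the input vector set."""
    return min(template, key=lambda ang: np.linalg.norm(vset - template[ang]))
```

A scale normalization against the standard eyes distance (claim 4) would be applied to `vset` before matching; it is omitted here for brevity.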
(18) In
(19) S005: Create a face database for storing a plurality of replaced images together with their face image rotation angles obtained by the method for estimating a 3D vector angle from a 2D face image, and obtain a replaced image 2 at the rotation angle of the face image 1. The replaced image 2 is obtained either by capturing a face image 1 by a camera such as a webcam, wherein the face image 1 may be a static or dynamic image, or by having users select and upload a static or dynamic image; the rotation angles of the face image 1 detected by the face angle estimation method are saved one by one to form the replaced image 2.
(20) S006: Define a target replacing region 21 in the replaced image 2 to ensure that the replacing portion can be replaced by the replaced image 2, wherein the region to be replaced is a surface region formed by the vertices P.sub.1, P.sub.2, P.sub.3, P.sub.4. In a preferred embodiment, a center point C is obtained between each pair of adjacent vertices P.sub.1, P.sub.2, P.sub.3, P.sub.4, each center point C is shifted towards the exterior of the quadrilateral, and arcs are used for connecting the vertices and the center points to form the target replacing region 21. In order to give the replaced image 2 a natural look, each arc is a parabola, and the surface region is preferably in the shape of a convex hull. In
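The construction of the target replacing region in step S006 can be sketched as follows; the outward shift factor is an illustrative assumption, and the parabolic arcs of the preferred embodiment are approximated here by straight polygon edges:

```python
# Sketch of step S006: outline the target replacing region from the four
# vertices and outward-shifted edge midpoints (center points C).
import numpy as np

def replacing_region_boundary(vertices, shift=0.25):
    """Return an 8-point closed boundary: each center point C between
    adjacent vertices is pushed away from the quadrilateral's centroid,
    producing an outward-bulging (convex-hull-like) region."""
    centroid = vertices.mean(axis=0)
    boundary = []
    n = len(vertices)
    for k in range(n):
        a, b = vertices[k], vertices[(k + 1) % n]
        mid = (a + b) / 2.0
        boundary.append(a)                               # vertex P_k
        boundary.append(mid + shift * (mid - centroid))  # shifted C
    return np.array(boundary)
```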
(21) With reference to
(22) S007: Capture a face image 1 through a camera such as a webcam, wherein the face image 1 may be a static or dynamic image, or have users select and upload a static or dynamic face image 1, and detect a rotation angle of the face image 1 according to the method for estimating a 3D vector angle from a 2D face image.
(23) S008: Define a region to be replaced 15 in the face image 1 as shown in
(24) S009: Search for the replaced image 2 whose rotation angle corresponds to that of the face image 1, so that the target replacing region 21 of the replaced image 2 corresponding to the rotation angle of the face image 1 is pasted onto the region to be replaced 15.
(25) S010: After the replacement takes place, the result must be adjusted to give a more natural and coordinated image, by adjusting the color and brightness of the source image and processing the boundary between the sewn portions of the image. The color and brightness of the region to be replaced 15 and the target replacing region 21 differ to a certain extent, so it is necessary to adjust the color and brightness of the target replacing region 21 to provide a natural visual effect of the replaced image. Therefore, the histograms of the R channel, G channel and B channel in the RGB color space of the region to be replaced 15 and the target replacing region 21 are taken and normalized into probabilities p(i). To avoid a black region from affecting the computation result, 0 is not included in the range, and the probabilities are used for computing the expected values of the region to be replaced 15 and the target replacing region 21 as shown in the following mathematical equation 2:
(26) E=Σ.sub.i=1.sup.255 i·p(i) [Mathematical Equation 2]
(27) Therefore, the zoom factors of the R channel, G channel and B channel between the region to be replaced 15 and the target replacing region 21 are computed, and the values of the R channel, G channel and B channel of the target replacing region 21 are adjusted according to the zoom factors as given in the following mathematical equation 3:
C.sub.i=C.sub.i*w.sub.i, i=1(B), 2(G), 3(R) [Mathematical Equation 3]
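The color compensation of step S010 can be sketched as follows; the H×W×3 B/G/R array layout and the choice to derive each zoom factor w.sub.i as the ratio of the two regions' expected values are illustrative assumptions:

```python
# Sketch of step S010 (Mathematical Equations 2 and 3).
import numpy as np

def expected_value(channel):
    """Expected intensity of one channel, excluding 0-valued (black)
    pixels from the histogram statistics (Mathematical Equation 2)."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    hist[0] = 0.0                       # keep black regions out of p(i)
    p = hist / hist.sum()               # normalize counts into p(i)
    return float(np.arange(256) @ p)    # E = sum_i i * p(i)

def compensate(target_region, region_to_replace):
    """Rescale each channel of the target replacing region by the
    assumed zoom factor w_i = E_replace / E_target, then apply
    C_i = C_i * w_i (Mathematical Equation 3)."""
    out = target_region.astype(float)
    for i in range(3):                  # i = 1(B), 2(G), 3(R)
        w = (expected_value(region_to_replace[..., i])
             / expected_value(target_region[..., i]))
        out[..., i] *= w
    return np.clip(out, 0, 255).astype(np.uint8)
```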
(28) S011: Although the color of the target replacing region 21 after being replaced may match the expected value of the color of the replaced region, there may still be a slight difference in color and brightness at the boundary. To compensate for the color and brightness difference, the transparency value of the pixels at or outside the boundary of the target replacing region 21 is set to a higher value, and to a decreasingly lower value at positions moving progressively towards the inside of an edge of the region to be replaced. The compensation can be represented as the following mathematical equation 4:
I.sub.dst(x,y)=αI.sub.src(x,y)+(1−α)I.sub.tgt(x,y) [Mathematical Equation 4]
(29) wherein I.sub.dst(x,y) is an image of the region after compensation; I.sub.src(x,y) is an image of the target replacing region 21; I.sub.tgt(x,y) is an image of the region to be replaced 15; and α is the weight in the range [0,1], so that the replaced image 2 may be gradually layered and pasted on the region to be replaced 15 by an edge feathering method as shown in
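The edge feathering of step S011 can be sketched as follows; the linear ramp width and the erosion-based interior distance are illustrative choices for making the blending weight fall from the interior toward the boundary:

```python
# Sketch of step S011 (Mathematical Equation 4): paste with edge
# feathering.  The weight a is low near the region boundary (so the
# original image shows through) and rises to 1 in the interior.
import numpy as np

def interior_distance(mask):
    """Distance (in 4-neighbour erosion steps) from each pixel inside
    the boolean mask to the region boundary; 1 on the boundary itself."""
    dist = np.zeros(mask.shape)
    cur = mask.copy()
    d = 0
    while cur.any():
        d += 1
        dist[cur] = d
        p = np.pad(cur, 1)                       # pad with False
        cur = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
               & p[1:-1, :-2] & p[1:-1, 2:])     # erode one pixel
    return dist

def feather_blend(src, tgt, mask, ramp=2):
    """I_dst = a*I_src + (1 - a)*I_tgt over the masked region, where
    src is the target replacing region and tgt the region to be replaced."""
    dist = interior_distance(mask)
    a = np.clip((dist - 1) / ramp, 0.0, 1.0) * mask
    return a * src + (1.0 - a) * tgt
```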
(30) While we have shown and described the embodiment in accordance with the present invention, it should be clear to those skilled in the art that further embodiments may be made without departing from the scope of the present invention.