Method of virtually trying on eyeglasses
09817248 · 2017-11-14
CPC classification: G06T19/20 (PHYSICS)
International classification: A61B3/00 (HUMAN NECESSITIES); G06T19/00 (PHYSICS); G06T19/20 (PHYSICS)
Abstract
A method of virtually trying on eyeglasses includes capturing a plurality of images of a user's face, obtaining locations of a plurality of feature points on the user's face in the plurality of images, and using the locations of the plurality of feature points in the plurality of images to create a standard three-dimensional model of the user. Next, a selection is received from the user of a pair of virtual eyeglasses, the selected pair of virtual eyeglasses having a corresponding three-dimensional model depicting the size and shape of the selected pair of virtual eyeglasses. After this, a modified three-dimensional model of the user having the selected pair of virtual eyeglasses superimposed on the user's face is created according to the standard three-dimensional model of the user and the corresponding three-dimensional model of the selected pair of virtual eyeglasses. The result is then displayed for the user to see.
Claims
1. A method of virtually trying on eyeglasses, the method comprising: capturing a plurality of images of a user's face; obtaining locations of a plurality of feature points on the user's face in the plurality of images; using the locations of the plurality of feature points in the plurality of images to create a standard three-dimensional model of the user; receiving a selection from the user of a pair of virtual eyeglasses, the selected pair of virtual eyeglasses having a corresponding three-dimensional model depicting the size and shape of the selected pair of virtual eyeglasses; creating a modified three-dimensional model of the user having the selected pair of virtual eyeglasses superimposed on the user's face according to the standard three-dimensional model of the user and the corresponding three-dimensional model of the selected pair of virtual eyeglasses; and displaying the resulting modified three-dimensional model of the user having the selected pair of virtual eyeglasses superimposed on the user's face; wherein the position of the user's face in image $I_j$ is calculated as $R_j M + t_j$, where $R_j$ represents the rotation of the user's face in image $I_j$ and $t_j$ represents the horizontal displacement of the user's face in image $I_j$; wherein the three-dimensional coordinates of a point $P^{(k)}$ are equal to $(P_x^{(k)}, P_y^{(k)}, P_z^{(k)})$; and wherein the two-dimensional coordinates $(p_{j,x}^{(k)}, p_{j,y}^{(k)})$ of point $p_j^{(k)}$ in image $I_j$ are calculated according to the equation
$$ s \begin{bmatrix} p_{j,x}^{(k)} \\ p_{j,y}^{(k)} \\ 1 \end{bmatrix} = A \left( R_j \begin{bmatrix} P_x^{(k)} \\ P_y^{(k)} \\ P_z^{(k)} \end{bmatrix} + t_j \right), $$
where $A$ represents known coefficients of the camera capturing image $I_j$ and $s$ is a scale factor.
2. The method of claim 1, wherein when capturing the plurality of images of the user's face, the user's face is rotated so that the plurality of images depict the user's face at different angles of rotation.
3. The method of claim 2, wherein the user's face maintains a same expression as the user's face is rotated.
4. The method of claim 1, wherein obtaining locations of the plurality of feature points on the user's face in the plurality of images comprises obtaining locations of eight or more feature points on the user's face in the plurality of images.
5. The method of claim 1, wherein obtaining locations of the plurality of feature points on the user's face in the plurality of images comprises obtaining locations of approximately 95 feature points on the user's face in the plurality of images.
6. The method of claim 1, wherein positions of the plurality of feature points are selected from a group consisting of outer edges of the eyes, outer edges of the mouth, outer edges of the nose, outer edges of the eyebrows, and outer edges of the face.
7. The method of claim 1, wherein capturing the plurality of images of the user's face comprises capturing ten or more images of the user's face.
8. The method of claim 1, wherein the standard three-dimensional model of the user is calculated as $M = \{P^{(k)}\}_{k=1}^{n}$, where $M$ represents the standard three-dimensional model of the user, $P^{(k)}$ represents the three-dimensional coordinates of an individual feature point of the plurality of feature points, $n$ represents the total number of the plurality of feature points, and $k$ is an integer incremented from 1 to $n$.
9. The method of claim 1, wherein in the modified three-dimensional model of the user having the selected pair of virtual eyeglasses superimposed on the user's face, superimposing comprises hiding a temple of the selected pair of virtual eyeglasses when the temple would be blocked by the user's face as the user's face is rotated.
10. The method of claim 1, further comprising: receiving selection from the user of a different pair of virtual eyeglasses, the different pair of virtual eyeglasses having a corresponding different three-dimensional model depicting the size and shape of the different pair of virtual eyeglasses; and creating an updated modified three-dimensional model of the user having the different pair of virtual eyeglasses superimposed on the user's face according to the standard three-dimensional model of the user and the corresponding different three-dimensional model of the different pair of virtual eyeglasses.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(19) Please refer to the drawings, which illustrate a system 10 that includes a camera 12 for capturing images of the user's face.
(20) Please refer to the following flowchart of the method of virtually trying on eyeglasses, which comprises the steps below.
(21) Step 50: Start.
(22) Step 52: Initialize the system 10.
(23) Step 54: Input a sequence of images of a user. The images may be captured using the camera 12.
(24) Step 56: Obtain locations of a plurality of feature points on the user's face in the plurality of images.
(25) Step 58: Determine if a standard three-dimensional model of the user is finished being built according to the images of the user and the locations of the plurality of feature points. If so, go to step 62. If not, go to step 60.
(26) Step 60: Continue building the standard three-dimensional model of the user, and go back to step 54.
(27) Step 62: The user selects a pair of virtual eyeglasses to try on, the pair of virtual eyeglasses having a corresponding three-dimensional model depicting the size and shape of the selected pair of virtual eyeglasses.
(28) Step 64: Superimpose the selected pair of virtual eyeglasses on the standard three-dimensional model of the user in order to illustrate what the user would look like if the user were wearing the selected pair of virtual eyeglasses. The result is a modified three-dimensional model of the user, which is then displayed to the user.
(29) Step 66: Determine if the user wishes to view a different pair of eyeglasses. If so, go back to step 62. If not, go to step 68.
(30) Step 68: End.
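The flow of steps 50 through 68 can be summarized in code. The following Python sketch is purely illustrative: every callable it receives (capture, locate_points, and so on) is a hypothetical stand-in for the corresponding step above, not a function disclosed by the patent.

```python
from typing import Callable, List, Optional, Tuple

def virtual_try_on(capture: Callable[[], "Image"],
                   locate_points: Callable[["Image"], List[Tuple[int, int]]],
                   builder,                 # accumulates the standard 3-D model M
                   select_glasses: Callable[[], Optional["Glasses"]],
                   superimpose: Callable[["Model", "Glasses"], "Model"],
                   display: Callable[["Model"], None]) -> None:
    """Illustrative control loop mirroring steps 50-68 of the flowchart."""
    # Steps 54-60: keep feeding images until the standard 3-D model M is finished.
    while not builder.finished():
        image = capture()                          # step 54: input a user image
        builder.add(image, locate_points(image))   # step 56: feature point locations
    M = builder.model()                            # standard 3-D model of the user

    # Steps 62-66: superimpose selected eyeglasses until the user is done.
    while (glasses := select_glasses()) is not None:
        display(superimpose(M, glasses))           # step 64: show modified model
```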
(31) The present invention produces a virtual picture of the user in which an image of a selected pair of virtual eyeglasses is superimposed on the user's face according to the standard three-dimensional model of the user and the corresponding three-dimensional model of the selected pair of virtual eyeglasses. Once the standard three-dimensional model of the user is created, the modified three-dimensional model of the user can be generated quickly after the user selects the pair of virtual eyeglasses to try on. Since the standard three-dimensional model of the user is built using the locations of the plurality of feature points on the user's face in the plurality of images and since the corresponding three-dimensional model of the selected pair of virtual eyeglasses also contains location information regarding where the selected pair of virtual eyeglasses is placed on the user's face, the modified three-dimensional model of the user represents a true indication of how the selected pair of virtual eyeglasses would look on the user.
(32) First of all, in order to superimpose the virtual eyeglasses on the user's face, the standard three-dimensional model of the user needs to be calculated. Based on this model, not only can the exact location of the superimposed virtual eyeglasses on the user's face be determined, but the overlap between the superimposed virtual eyeglasses and the face can also be handled.
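The patent does not spell out how this overlap is resolved. One common approach, shown below as a minimal sketch (not the patented method), is a per-vertex depth test: an eyeglass point is hidden when the face surface lies closer to the camera along the same pixel. The per-pixel face_depth buffer is an assumption of this illustration.

```python
import numpy as np

def visible_glasses_points(glasses_pts: np.ndarray,
                           face_depth: np.ndarray,
                           A: np.ndarray) -> np.ndarray:
    """Depth-test sketch: mask out eyeglass points hidden behind the face.

    glasses_pts : (N, 3) eyeglass vertices in camera coordinates (z = depth).
    face_depth  : (H, W) depth of the face surface per pixel (np.inf = no face).
    A           : (3, 3) intrinsic matrix (the camera's known coefficients).
    Returns a boolean mask, True where the point is NOT blocked by the face.
    """
    proj = (A @ glasses_pts.T).T                  # pinhole projection of each vertex
    px = (proj[:, 0] / proj[:, 2]).astype(int)    # pixel column of each vertex
    py = (proj[:, 1] / proj[:, 2]).astype(int)    # pixel row of each vertex
    h, w = face_depth.shape
    inside = (px >= 0) & (px < w) & (py >= 0) & (py < h)
    visible = np.ones(len(glasses_pts), dtype=bool)
    # Hidden when the face surface is closer to the camera than the vertex.
    visible[inside] = glasses_pts[inside, 2] <= face_depth[py[inside], px[inside]]
    return visible
```

Applied per frame as the face rotates, this produces the effect described in claim 9, where a temple of the eyeglasses disappears behind the user's head.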
(33) Since each user's face has unique features, multiple feature points on the face are sampled. Each feature point represents a different characteristic of the user's face, such as the corners and outer edges of the eyes, the corners and outer edges of the mouth, the corners and outer edges of the nose, the outer edges of the eyebrows, and the outer edges of the face. After obtaining the locations of the plurality of feature points on the user's face in the plurality of captured images of the user, the standard three-dimensional model $M$ of the user can be calculated using all of this data taken as a whole. Each captured image of the user will differ slightly from the standard three-dimensional model $M$ due to differences in the rotation and horizontal displacement of the user's face.
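The patent leaves the landmark detector unspecified. As one hedged illustration, dlib's pretrained 68-point shape predictor can supply such feature point locations; it is a stand-in for the roughly 95 points the patent works with, and the model file name is the one conventionally distributed with dlib, assumed here.

```python
import dlib

detector = dlib.get_frontal_face_detector()
# Pretrained 68-point landmark model: a stand-in for the ~95 points used in the
# patent. The file name below is dlib's conventional distribution, an assumption.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def feature_points(image):
    """Return (x, y) feature point locations for the first face in an image."""
    faces = detector(image, 1)            # upsample once to help find small faces
    if not faces:
        return []
    shape = predictor(image, faces[0])    # landmarks: eyes, brows, nose, mouth, jaw
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```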
(34) Please refer to the drawings, which illustrate the plurality of feature points on the user's face.
(35) Exactly which locations on the user's face should be chosen as feature points is disclosed in other patents assigned to the applicant of the present invention, including U.S. Pat. Nos. 7,953,253 and 8,295,557, each of which is incorporated by reference in its entirety.
(36) The standard three-dimensional model $M$ of the user is calculated using $n$ three-dimensional feature points $P$, and the collection of these points can be represented as $\{P^{(k)}\}_{k=1}^{n}$. When the user's face rotates as the images of the user's face are being captured, the expression $RM + t$ can be used to represent the three-dimensional model of the user in any given image, where $R$ represents the rotation of the user's face in the image and $t$ represents the horizontal displacement of the user's face in the image. Therefore, to calculate $RM + t$, it is necessary to calculate the coordinates of all the points $\{P^{(k)}\}_{k=1}^{n}$ at a given rotation $R$ and horizontal displacement $t$.
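As a concrete illustration of evaluating $RM + t$, the whole point set can be transformed with one matrix product; the 20-degree rotation below is an arbitrary example, not a value from the patent.

```python
import numpy as np

def posed_model(M: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Return R M + t for an (n, 3) model M, (3, 3) rotation R, (3,) displacement t."""
    return M @ R.T + t                    # row-vector form of R @ P + t per point

theta = np.deg2rad(20.0)                  # face turned 20 degrees about the y-axis
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.02, 0.0, 0.0])            # slight horizontal displacement
```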
(38) The three-dimensional coordinates of a point $P^{(k)}$ can be represented as $(P_x^{(k)}, P_y^{(k)}, P_z^{(k)})$. When the camera 12 captures image $I_j$, the three-dimensional model of the user in image $I_j$ can be calculated as $R_j M + t_j$. From this, the updated three-dimensional coordinates of $P^{(k)}$ can be calculated as $(P_x^{(k)\prime}, P_y^{(k)\prime}, P_z^{(k)\prime})^T = R_j (P_x^{(k)}, P_y^{(k)}, P_z^{(k)})^T + t_j$, where $T$ indicates the matrix transpose, which reflects a matrix over its main diagonal and is defined by $[A^T]_{ij} = [A]_{ji}$. Let $p_j^{(k)}$ denote the updated point $P^{(k)}$ as it appears in image $I_j$, with two-dimensional coordinates $(p_{j,x}^{(k)}, p_{j,y}^{(k)})$. Then, according to the aperture imaging principle, the three-dimensional coordinates $(P_x^{(k)}, P_y^{(k)}, P_z^{(k)})$ of point $P^{(k)}$ and the two-dimensional coordinates $(p_{j,x}^{(k)}, p_{j,y}^{(k)})$ of point $p_j^{(k)}$ satisfy the relationship given in equation (1):
(39) $$ s \begin{bmatrix} p_{j,x}^{(k)} \\ p_{j,y}^{(k)} \\ 1 \end{bmatrix} = A \left( R_j \begin{bmatrix} P_x^{(k)} \\ P_y^{(k)} \\ P_z^{(k)} \end{bmatrix} + t_j \right) \qquad (1) $$ where $s$ is a scale factor equal to the depth $P_z^{(k)\prime}$ of the point in the camera's coordinate system.
(40) In equation (1), $A$ represents known coefficients of the camera 12 capturing the image $I_j$. The unknown elements in equation (1) are the three-dimensional coordinates $(P_x^{(k)}, P_y^{(k)}, P_z^{(k)})$, the rotation matrix $R_j$, and the horizontal displacement $t_j$. Equation (1) shows the relationship between the three-dimensional coordinates of points in the standard three-dimensional model $M$ of the user and the corresponding two-dimensional coordinates of those points in an image. However, knowing only the two-dimensional coordinates of a point in a single image is not enough information to determine its three-dimensional coordinates; the two-dimensional coordinates of the point in at least two images are needed.
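A direct transcription of equation (1) under the pinhole (aperture imaging) model follows; dividing by the third homogeneous coordinate eliminates the scale factor $s$.

```python
import numpy as np

def project(P: np.ndarray, R_j: np.ndarray, t_j: np.ndarray,
            A: np.ndarray) -> tuple:
    """Project 3-D point P = (P_x, P_y, P_z) into image I_j per equation (1).

    R_j : (3, 3) rotation of the user's face in image I_j.
    t_j : (3,) horizontal displacement of the user's face in image I_j.
    A   : (3, 3) known coefficients (intrinsic matrix) of camera 12.
    """
    q = A @ (R_j @ P + t_j)               # right-hand side of equation (1)
    return q[0] / q[2], q[1] / q[2]       # (p_jx, p_jy) after dividing out s
```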
(41) Similarly, when the camera 12 captures image $I_i$, the two-dimensional coordinates of $P^{(k)}$ are $(p_{i,x}^{(k)}, p_{i,y}^{(k)})$. The three-dimensional coordinates $(P_x^{(k)}, P_y^{(k)}, P_z^{(k)})$ of point $P^{(k)}$ and the two-dimensional coordinates $(p_{i,x}^{(k)}, p_{i,y}^{(k)})$ of point $p_i^{(k)}$ satisfy the relationship given in equation (2):
(42) $$ s' \begin{bmatrix} p_{i,x}^{(k)} \\ p_{i,y}^{(k)} \\ 1 \end{bmatrix} = A \left( R_i \begin{bmatrix} P_x^{(k)} \\ P_y^{(k)} \\ P_z^{(k)} \end{bmatrix} + t_i \right) \qquad (2) $$ where $s'$ is the corresponding scale factor for image $I_i$.
(43) If we pick $N$ points $\{P^{(k_1)}, P^{(k_2)}, \ldots, P^{(k_N)}\}$ and substitute their two-dimensional coordinates in two or more images into equations (1) and (2), the resulting system of equations can be solved for the three-dimensional coordinates of the points together with the rotations $R_j$ and horizontal displacements $t_j$ of each image, thereby yielding the standard three-dimensional model $M$ of the user.
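The patent solves this system jointly for the points and the per-image poses. As a reduced illustration, when the rotations and displacements are already known (an assumption of this sketch), OpenCV's triangulatePoints recovers the three-dimensional coordinates from two images:

```python
import numpy as np
import cv2

def triangulate(A, R_i, t_i, R_j, t_j, pts_i, pts_j):
    """Recover 3-D feature points from their 2-D locations in images I_i and I_j.

    pts_i, pts_j : (2, N) float arrays of matched feature point coordinates.
    Returns an (N, 3) array of points satisfying equations (1) and (2).
    """
    Pi = A @ np.hstack([R_i, t_i.reshape(3, 1)])     # 3x4 projection for image I_i
    Pj = A @ np.hstack([R_j, t_j.reshape(3, 1)])     # 3x4 projection for image I_j
    X = cv2.triangulatePoints(Pi, Pj, pts_i, pts_j)  # homogeneous 4xN result
    return (X[:3] / X[3]).T                          # de-homogenize to (N, 3)
```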
(44) After finishing calculating the standard three-dimensional model $M$ of the user, for any image $I_j$, once we have obtained the 95 feature points for image $I_j$, we can use equation (1) to quickly calculate the value $R_j M + t_j$ for the user's face in that image. Furthermore, since we now know the value $R_j M + t_j$, we can accurately calculate the three-dimensional position of the superimposed virtual eyeglasses as well as the relationship between the three-dimensional superimposed virtual eyeglasses and the standard three-dimensional model $M$ of the user. Based on the aperture imaging model of the camera 12, we can then obtain the accurate position of the virtual eyeglasses in image $I_j$.
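Recovering $R_j$ and $t_j$ for a new image once $M$ and the feature points are known is the classic Perspective-n-Point problem. The patent does not name a solver; OpenCV's solvePnP, used below, is one standard choice offered purely as an illustration (lens distortion is assumed negligible).

```python
import numpy as np
import cv2

def face_pose(M: np.ndarray, pts_2d: np.ndarray, A: np.ndarray):
    """Estimate the rotation R_j and displacement t_j of the face in image I_j.

    M      : (n, 3) feature points of the standard three-dimensional model.
    pts_2d : (n, 2) corresponding feature point locations found in image I_j.
    A      : (3, 3) known coefficients (intrinsic matrix) of camera 12.
    """
    ok, rvec, tvec = cv2.solvePnP(M.astype(np.float64),
                                  pts_2d.astype(np.float64),
                                  A.astype(np.float64),
                                  distCoeffs=None)   # assume no lens distortion
    if not ok:
        raise RuntimeError("pose estimation failed")
    R_j, _ = cv2.Rodrigues(rvec)                     # rotation vector -> 3x3 matrix
    return R_j, tvec.reshape(3)
```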
(50) In summary, the present invention provides a way for the user to virtually try on different styles of eyeglasses without actually having to travel to an eyeglasses store. This allows the user to shop online for eyeglasses, saving the user a considerable amount of time and enabling the user to browse a much larger selection of eyeglasses styles and sizes than is available in traditional brick-and-mortar eyeglasses stores. Furthermore, by using three-dimensional modeling, the user is given a chance to see exactly how the eyeglasses will look on the user. The present invention can be applied to any type of eyeglasses, including sunglasses, prescription eyeglasses, eyeglasses containing no lenses, and eyeglasses containing non-prescription lenses.
(51) Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.