Gesture-based editing of 3D models for hair transplantation applications
09767620 · 2017-09-19
Assignee
Inventors
- Steven X Li (Sunnyvale, CA, US)
- Hui Zhang (San Jose, CA)
- Ognjen Petrovic (Mountain View, CA, US)
- Gabriele Zingaretti (Capitola, CA, US)
CPC classification
G06F3/04842
PHYSICS
G06T19/20
PHYSICS
A61B2034/105
HUMAN NECESSITIES
G06F3/04845
PHYSICS
A61B34/10
HUMAN NECESSITIES
International classification
G06T19/20
PHYSICS
A61B34/10
HUMAN NECESSITIES
G06F3/0488
PHYSICS
G06F3/0484
PHYSICS
G06F3/0481
PHYSICS
Abstract
Methods and systems are provided for gesture-based editing of three-dimensional (3D) models of real targets, for example, for use in planning hair transplantation procedures. According to some embodiments of the disclosed methodology, 3D control points on an initial default model are automatically matched with a user-drawn outline of a target feature that the user wishes to define, and are deformed appropriately to quickly and accurately modify and update the initial default model into a resulting fitting model of the real target.
Claims
1. A method of gesture-based editing of a 3D model of a real target comprising the following steps: selecting a 2D projected view of a 3D default model, wherein the 3D default model comprises an overall set of vertices, wherein a plurality of subsets of the overall set of vertices forms a plurality of 3D feature groups, each 3D feature group comprising an ordered set of feature group points that are associated with a respective specific feature of the default model; selecting a 2D image of a real target with a view which corresponds to the selected 2D projected view of the default model, and displaying the selected 2D image of the real target on a display; acquiring at least one line traced by a user over the 2D image of the real target on the display that follows at least one specific feature of the real target visible in the 2D image which corresponds to at least one specific feature group; identifying an ordered set of user points belonging to the at least one traced line; matching the user points against the feature group points of the at least one specific feature group in a transformed 2D space of first transformed coordinate L and second transformed coordinate Th, where the first transformed coordinate L of a point is a normalized length in a Cartesian coordinate system of the point in its ordered set of points joined by segments, and the second transformed coordinate Th of the point is an angle of discrete vector difference in the Cartesian coordinate system between two discrete successive vectors respectively following and preceding the point in its ordered set, wherein a subset of user points is identified as a best match of the feature group points of the at least one specific feature group; and updating the 3D default model by applying to the feature group points of the at least one specific feature group 3D geometric difference vectors calculated for each pair of a feature group point and a respective user point identified as best match of such feature
group point.
2. The method of claim 1, wherein the updating the default model further comprises smoothing the 3D model for obtaining a fitting model of the real target.
3. The method of claim 2, wherein smoothing is based on one or more techniques selected from the group comprising cubic splines, Bezier curves and Gaussian kernel convolution.
4. The method of claim 2, wherein smoothing includes applying any local distortion between the default model and the fitting model to a texture map grabbed from the 2D image of the real target.
5. The method of claim 1, further comprising displaying the 2D projected view of the 3D default model.
6. The method of claim 1, wherein matching is based on a least distance criterion in the transformed 2D space.
7. The method of claim 6, wherein matching includes: calculating first transformed coordinate L for each feature group point of the at least one specific feature group and for each user point, for each feature group point of the at least one specific feature group, identifying the user point having first transformed coordinate L closest to the first transformed coordinate L of the feature group point under consideration, whereby a subset of user points are selected to be provisionally anchored to the feature group points of the at least one specific feature group, using each point of the subset of user points as starting points for a recursive best match search for each feature group point of the at least one specific feature group, wherein the recursive best match search searches for a user point of the ordered set of user points belonging to the at least one traced line having the least distance from the feature group point in the transformed 2D space of first transformed coordinate L and second transformed coordinate Th, whereby for each feature group point of the at least one specific feature group a user point of the ordered set of user points belonging to the at least one traced line is identified as best match of the feature group point.
8. The method of claim 7, wherein for each feature group point of the at least one specific feature group the recursive best match search includes: assuming the provisionally anchored user point as guess best match of the feature group point, selecting a tuple of successive points of the ordered set of user points belonging to the at least one traced line including the guess best match, comparing distances in the transformed 2D space of first transformed coordinate L and second transformed coordinate Th of the successive points of the tuple from the feature group point, whereby a provisional least distant point having the least distance from the feature group point is identified, stopping recursion when the provisional least distant point is the guess best match, otherwise assuming the provisional least distant point as guess best match of the feature group point and making another recursion of selecting a tuple of successive points of the ordered set of user points and comparing distances of the successive points of the tuple from the feature group point.
9. The method of claim 8, wherein a tuple of successive points of the ordered set of user points includes three or more points.
10. The method of claim 9, wherein a tuple of successive points of the ordered set of user points includes an odd number of points, wherein the guess best match is a central point of the tuple.
11. The method of claim 1, wherein a ratio of total number of user points to total number of feature group points of the at least one specific feature group ranges from 20 to 5000.
12. The method of claim 1, wherein the 2D projected view is selected from the group comprising five orthogonal 2D projected views including a front view, a back view, a left view, a right view, and a top view.
13. The method of claim 1, wherein the 2D projected view is selected from the group comprising six orthogonal 2D projected views including a front view, a back view, a left view, a right view, a top view and a bottom view.
14. The method of claim 1, wherein the user traces the at least one line by interacting with at least one pointing device.
15. The method of claim 14, wherein the at least one pointing device includes a mouse, a keyboard, a track pad, a track ball, a pointing device, a stylus, a pen, and/or a touch-screen display.
16. The method of claim 1, wherein acquiring the at least one line traced by a user over the display includes receiving an indication of the at least one specific feature group of the default model to which the at least one traced line corresponds.
17. The method of claim 16, wherein receiving an indication includes selecting the at least one specific feature group of the default model from a list of icons displayed in a menu on the display.
18. The method of claim 1, further comprising providing for user feedback by allowing the user to accept the updated default model, or delete any previous change of the default model, or to further modify the updated default model by setting the updated default model as a new default model and repeating the steps of the method.
19. The method of claim 1, further comprising allowing the user to combine the method with one or more other 3D model editing techniques.
20. The method of claim 1, wherein the real target is at least one part of a person's body and/or at least one object.
21. The method of claim 20, wherein the real target is a person's head.
22. The method of claim 21, wherein the plurality of 3D feature groups includes one or more of the feature groups selected from the group comprising a jaw line feature group, a laryngeal prominence feature group, an upper lip feature group, a lower lip feature group, a mouth feature group, a left ear feature group, a brow ridge feature group, a front hairline feature group, and a top head contour feature group.
23. A method of gesture-based editing of a 3D model of a real target comprising the following steps: selecting and displaying on a display, a 2D projected view of a 3D default model, wherein the 3D default model comprises an overall set of vertices, wherein a plurality of subsets of the overall set of vertices forms a plurality of 3D feature groups, each one of which includes an ordered set of feature group points and is associated with a respective specific feature of the default model; selecting a 2D image of a real target with a view which corresponds to the selected 2D projected view of the default model, and displaying the selected 2D image of the real target on the display; acquiring at least one line traced by a user over the 2D image of the real target on the display that follows at least one real specific feature of the real target visible in the 2D image which corresponds to at least one specific feature group; identifying an ordered set of user points belonging to the at least one traced line; matching the user points against the feature group points of the at least one specific feature group in a transformed 2D space of first transformed coordinate L and second transformed coordinate Th, where first transformed coordinate L of a point is a normalized length in a Cartesian coordinate system of the point in its ordered set of points joined by segments, and second transformed coordinate Th of the point is an angle of discrete vector difference in the Cartesian coordinate system between two discrete successive vectors respectively following and preceding the point in its ordered set, wherein a subset of user points is identified as a best match of the feature group points of the at least one specific feature group; updating the default model into a 3D intermediate model by applying to the feature group points of the at least one specific feature group 3D geometric difference vectors calculated for each pair of feature group point and respective user point
identified as best match of such feature group point; and smoothing the 3D intermediate model for obtaining a fitting model of the real target.
24. A system configured to execute a method of gesture-based editing of a 3D model of a real target comprising: one or more processors configured to execute machine-readable instructions; a memory for storing machine-readable instructions and data implementing the method of gesture-based editing of a 3D model of a real target; and an input/output interface connected to the one or more processors to allow a user to interact with the system, wherein the input/output interface includes a display; wherein the one or more processors are connected to the memory to execute the machine-readable instructions, the instructions comprising the following steps: selecting a 2D projected view of a 3D default model, the 3D model comprising an overall set of vertices, wherein a plurality of subsets of the overall set of vertices forms a plurality of 3D feature groups, each 3D feature group comprising an ordered set of feature group points that are associated with a respective specific feature of the default model; selecting a 2D image of a real target with a view which corresponds to the selected 2D projected view of the default model and displaying the selected 2D image of the real target on the display; acquiring at least one line traced by a user over the 2D image of the real target on the display that follows at least one real specific feature of the real target visible in the 2D image which corresponds to at least one specific feature group; identifying an ordered set of user points belonging to the at least one traced line; matching the user points against the feature group points of the at least one specific feature group in a transformed 2D space of first transformed coordinate L and second transformed coordinate Th, where the first transformed coordinate L of a point is a normalized length in a Cartesian coordinate system of the point in its ordered set of points joined by segments, and the second transformed coordinate Th of the point is an angle of discrete vector
difference in the Cartesian coordinate system between two discrete successive vectors respectively following and preceding the point in its ordered set, wherein a subset of user points is identified as a best match of the feature group points of the at least one specific feature group; and updating the default model by applying to the feature group points of the at least one specific feature group 3D geometric difference vectors calculated for each pair of a feature group point and a respective user point identified as best match of such feature group point.
25. The system of claim 24, wherein the instructions further comprise smoothing the updated model for obtaining a fitting model of the real target.
26. The system of claim 24, wherein the display is a touch-screen display.
27. The system of claim 24, wherein the input/output interface further includes one or more of the following: a keyboard, a pointing device, a port configured to acquire images, and one or more cameras connected to the one or more processors, wherein the one or more processors are configured to acquire at least one gesture of the user.
28. The system of claim 24, further comprising a robotic arm.
29. The system of claim 24, wherein the system is configured to plan hair transplantation procedures.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) It should be noted that the drawings are not to scale and are intended only as an aid in conjunction with the explanations in the following detailed description. In the drawings, identical reference numbers identify similar elements or acts. The sizes and relative positions of elements in the drawings are not necessarily drawn to scale. For example, the shapes of various elements and angles are not drawn to scale, and some of these elements are arbitrarily enlarged and positioned to improve drawing legibility. Further, the particular shapes of the elements as drawn are not intended to convey any information regarding the actual shape of the particular elements and have been solely selected for ease of recognition in the drawings. Features and advantages of the present disclosure will become appreciated as the same become better understood with reference to the specification, claims, and appended drawings wherein:
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
(12) In the following Detailed Description, reference is made to the accompanying drawings that show by way of illustration some examples of embodiments in which the invention may be practiced. In this regard, directional terminology, such as “right”, “left”, “front”, “back”, “top”, “vertical”, etc., are used with reference to the orientation of the Figure(s) being described. Because components, elements or embodiments of the present invention can be positioned or operated in a number of different orientations, the directional terminology is used for purposes of illustration and is in no way limiting. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present invention. The following description, therefore, is not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims.
(13) Embodiments of the methods of the present disclosure are implemented using computer software, firmware or hardware processing means, including processors, microprocessors, microcontrollers, and DSPs configured to execute machine-readable instructions. Various programming languages and operating systems may be used to implement the methodology described herein.
(14) In the cosmetic surgery field, equipment showing on a display the future results of the scheduled cosmetic surgery through 3D models of patients improves patient confidence in a positive outcome of the procedure. Examples of such cosmetic surgery are hair transplantation procedures. In hair transplantation in particular, it would be useful if the physician were able to propose potential visual appearances and aesthetic outcomes to a patient, along with a discussion of the time and/or cost associated with each proposed treatment plan. In this manner the patient would be able to see what he/she may look like in each of the scenarios discussed, thus reducing the chances of the patient misunderstanding what the physician is trying to convey. Hair transplantation may be carried out manually, or using automated (including robotic) systems or computer-controlled systems, such as those described, for example, in the commonly owned U.S. Pat. No. 7,962,192, which is incorporated herein by reference. Such systems may be provided with the aforementioned equipment, such as one or more displays, for planning and showing the future results of hair transplantation on a patient's head. In other instances, the displays may be stand-alone systems. Examples of known equipment and methods for use in the planning of hair transplantation procedures are additionally described in commonly owned U.S. Pat. No. 7,806,121 and US Patent Publication No. 2014/0261467, both incorporated herein by reference.
(15) Other procedures that require a model of the patient's body surface and parts, including facial and head features, for example, various cosmetic and dermatological procedures involving treatment planning (e.g., plastic surgery, wrinkle removal or reduction, injections of cosmetic substances, skin grafting procedures, correction or removal of birth mark defects, facial reconstruction, rhinoplasty, contouring of the eyes or lips, remodeling of ears, nose, eye-lids or chins, facial rejuvenation, laser skin resurfacing, skin tightening, etc.) may benefit from the systems and methods described herein. One example of applicability of the present disclosure is in diagnostic skin imaging for cosmetic or other medical purposes, for example skin grafting or tattoo removal. For convenience of description, the following description will be discussed by example in reference to hair transplantation procedures. It should be noted, however, that such description is for the purposes of illustration and example only and is not intended to be exhaustive or limiting.
(17) As a starting point S1, a default 3D head model, which will be referred to as the default model, is selected, similar to the model shown in
(18) A plurality of 3D sets of vertices are associated with the default model, wherein the vertices of each set are located along a line delineating a head model feature. These 3D feature sets of vertices may correspond to respective subsets of vertices picked from the overall set of vertices of the default model. Accordingly, a 3D feature set of vertices may be associated with a specific facial/cranial feature, e.g. the jaw line, of the default model.
(19) Each 3D feature set of vertices may define a specific facial and/or cranial feature of the head model in at least one 2D projected view (usually in at least two different projected views) of five orthogonal 2D projected views of the default model, for example, front, back, left, right, and top views. For instance, the jaw line may be defined by the respective 3D feature set of vertices in the 2D front, left, and right views. Each 3D feature set of vertices is referred to as a feature group, and its vertices are also referred to as feature group points. Distances between vertices of a feature group are typically on the order of 0.2 to 0.5 units in the texture coordinates; however, other appropriate distances may be used.
(20) Other embodiments of the methods according to the present disclosure may consider a different number and orientation of 2D projected views of the default model, provided that at least two 2D projected views are considered. For instance, other embodiments of the methods according to the disclosure may consider six orthogonal 2D projected views of the default model, for example, front, back, left, right, top and bottom views.
(21) By way of example, and not by way of limitation,
(22) In order to quickly and accurately modify the default model into a fitting model better representing a real target, especially if such default model represents some organic form, e.g. a patient's head, that cannot be easily represented by simple geometry, the method according to the present disclosure deforms the feature groups so as to create the fitting model that accurately represents the real target, e.g. the patient's real facial/cranial structure.
(23) In reference to one general example of the methodology shown in
(24) When the user traces a line over the display, he or she may input the specific facial/cranial feature of the default model to which such traced line corresponds (e.g. jaw line or top head contour), by selecting the feature, for example, from a list of icons displayed in a menu on the display. Alternatively, the method could automatically recognize the specific facial/cranial feature of the default model to which such traced line corresponds, e.g. on the basis of the least distance of the pair of end points of the traced line from the pair of end vertices of all possible feature groups defining the specific facial/cranial features of the default model visible in the selected 2D projected view displayed on the display. Other embodiments of the method according to the present disclosure may allow a user to trace two or more lines over the display, wherein said two or more traced lines follow a combination of two or more actual specific facial/cranial features of the patient and correspond to a combination of two or more specific facial/cranial features of the default model visible in the selected 2D projected view displayed on the display.
(25) Having acquired the line traced by the user, in step S3 the system identifies an ordered set of points, which are associated with the traced line. Such points may be considered, for example, as mouse positions for that traced line, i.e. they are identified by Cartesian coordinates in two dimensions.
(26) Step S4 matches the user points of the traced line against the feature group points of the respective feature group of the default model (i.e. of the feature group defining the specific facial/cranial feature of the default model to which the traced line corresponds) in the selected 2D projected view in a transformed 2D space. For example, matching of step S4 may include transforming the Cartesian coordinates (X, Y) of the user points into a custom coordinate system (L, Th), where L is the arc length percent along the traced line from the start of the line versus the total length of the trace (i.e. L is the normalized length along the ordered set of user points, joined by segments, from the first point of the traced line to the user point of which L is computed), and Th is the angle of the discrete vector difference between two discrete successive vectors, respectively following and preceding the user point of which Th is computed, as will be described in more detail with reference to
(27) In particular, custom coordinates (L, Th) may be calculated in two successive sub-steps: L is calculated for each feature group point and each user point in the first sub-step, while Th is calculated in the second sub-step (following the first) for each feature group point and for all or part of the user points, as will be explained below.
(28) Assuming that any one of a traced line or a feature of the default model (i.e. a feature group) includes an ordered set of N points P.sub.i (with i=0, 1, . . . , N−1):
{P.sub.0, P.sub.1, . . . , P.sub.i, . . . P.sub.N−1}
L.sub.i for point P.sub.i is the normalized length along the ordered set of points, joined by segments, from the first point P.sub.0 of the ordered set to point P.sub.i. As an example,
{P.sub.0, P.sub.1, P.sub.2, P.sub.3}
whereby the values L.sub.i (i=0, 1, 2, 3) of L for these four points are as follows:
L.sub.0=D(P.sub.0, P.sub.0)/(D(P.sub.3, P.sub.2)+D(P.sub.2, P.sub.1)+D(P.sub.1, P.sub.0))=0
L.sub.1=D(P.sub.1, P.sub.0)/(D(P.sub.3, P.sub.2)+D(P.sub.2, P.sub.1)+D(P.sub.1, P.sub.0))
L.sub.2=(D(P.sub.2, P.sub.1)+D(P.sub.1, P.sub.0))/(D(P.sub.3, P.sub.2)+D(P.sub.2, P.sub.1)+D(P.sub.1, P.sub.0))
L.sub.3=(D(P.sub.3, P.sub.2)+D(P.sub.2, P.sub.1)+D(P.sub.1, P.sub.0))/(D(P.sub.3, P.sub.2)+D(P.sub.2, P.sub.1)+D(P.sub.1, P.sub.0))=1
where D(Q, R) returns the Cartesian distance between points Q and R.
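By way of illustration only, the computation of the first transformed coordinate L described above may be sketched as follows (the Python function and variable names are illustrative and do not appear in the disclosure):

```python
import math

def arc_length_coords(points):
    """Compute the first transformed coordinate L for an ordered set of
    2D points joined by segments: the normalized length from the first
    point, so that L = 0 at the start and L = 1 at the end."""
    def dist(q, r):
        # D(Q, R): Cartesian distance between points Q and R.
        return math.hypot(r[0] - q[0], r[1] - q[1])

    # Cumulative length along the ordered set of points.
    cumulative = [0.0]
    for prev, curr in zip(points, points[1:]):
        cumulative.append(cumulative[-1] + dist(prev, curr))

    total = cumulative[-1]
    return [c / total for c in cumulative]

# Four unevenly spaced points {P0, P1, P2, P3} on a line.
pts = [(0.0, 0.0), (1.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
print(arc_length_coords(pts))  # [0.0, 0.25, 0.75, 1.0]
```

As in the worked example above, the first point always maps to L=0 and the last point to L=1, regardless of how many points lie between them.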
(29) As stated, the L coordinate is computed over all feature group points, hereinafter also referred to as {FG}, and all user points, hereinafter also referred to as {U}. This procedure anchors the relatively many user points to the relatively few feature group points. Namely, the L coordinate of the first point in {FG}, i.e. L.sub.0 in {FG}, is matched with the L coordinate of the first point in {U}, i.e. with L.sub.0 in {U}, since they are both equal to 0 (zero). With reference to
(30) At this stage, the first point (i.e. vertex) in {FG} having L coordinate equal to L.sub.0 still corresponds to the first point V.sub.0 in the ordered set of feature group points stored in Cartesian coordinates (X, Y); similarly, the second point in {FG} having L coordinate equal to L.sub.1 still corresponds to the second point V.sub.1 in the set of feature group points stored in Cartesian coordinates (X, Y), and so on. The first point U.sub.0 in {U} having L coordinate equal to L.sub.0 still corresponds to the first point in the ordered set of user points stored in Cartesian coordinates (X, Y); however, the point in {U} having L coordinate closest to L.sub.1 is the n1-th point U.sub.n1 in the set of user points stored in Cartesian coordinates (X, Y), where n1 is an index between 0 and the total number of points in {U}. Similarly, step S4 finds (N−2) points U.sub.n2, . . . U.sub.nN−1 in {U} (where N is the number of points in the ordered set of feature group points) having the L coordinates respectively closest to the L coordinates of the remaining (N−2) points of the feature group points V.sub.2, . . . , V.sub.N−1, hence anchoring each one of these N points {U.sub.0, U.sub.n1, . . . , U.sub.ni, . . . U.sub.nN−1} found in {U} to a respective one of the N points {V.sub.0, V.sub.1, . . . , V.sub.i, . . . V.sub.N−1} of the feature group.
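The provisional anchoring of the many user points to the few feature group points on the basis of the L coordinate alone may be sketched as follows (illustrative names, not part of the disclosure):

```python
def anchor_by_L(fg_L, user_L):
    """For the L coordinate of each feature group point, find the index
    of the user point whose L coordinate is closest, provisionally
    anchoring a subset of the (many) user points to the (few) feature
    group points."""
    anchors = []
    for l_fg in fg_L:
        best = min(range(len(user_L)), key=lambda j: abs(user_L[j] - l_fg))
        anchors.append(best)
    return anchors

fg = [0.0, 0.5, 1.0]                          # few feature group points
user = [0.0, 0.1, 0.3, 0.45, 0.7, 0.9, 1.0]   # many user points
print(anchor_by_L(fg, user))  # [0, 3, 6]
```

The first and last anchors always coincide with the first and last user points, since L=0 and L=1 at the ends of both ordered sets.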
(31) This anchoring is a sort of provisional matching of the many user points to the relatively few feature group points (wherein a subset of user points are anchored to the feature group points on the basis of the L coordinate only). In order to guarantee optimal curve matching, such provisional matching is then refined by taking into account how successive points in each one of the two sets of points (namely feature group points and user points) move relative to each other and by minimizing any movement difference between the two sets of points in correspondence to the anchored pairs of points identified on the basis of the L coordinate only. To this end, step S4 may take into account the second custom coordinate Th as follows.
(32) For each one of the feature group points {V.sub.0, V.sub.1, . . . , V.sub.i, . . . V.sub.N−1}, angle Th is calculated as follows:
(33) Th.sub.i=angle((V.sub.i+1−V.sub.i)−(V.sub.i−V.sub.i−1))
wherein (R−Q) indicates the 2D vector from point Q to point R, and wherein the angle Th.sub.N−1 of the last point V.sub.N−1 in the ordered set of feature group points is just equal to the angle Th.sub.N−2 of the last-but-one point V.sub.N−2. An example of the computation of angle Th is shown in
(34) Th.sub.i=angle((U.sub.i+1−U.sub.i)−(U.sub.i−U.sub.i−1))
again, wherein the angle Th.sub.nN−1 of the last point U.sub.nN−1 in the ordered set of user points is just equal to the angle Th.sub.nN−2 of the last-but-one point U.sub.nN−2.
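The computation of the second transformed coordinate Th may be sketched as follows (illustrative names; the handling of the first point, mirrored from its neighbor here, is an assumption, since the disclosure states the rule only for the last point):

```python
import math

def angle_coords(points):
    """Compute the second transformed coordinate Th: the angle of the
    discrete vector difference between the vector following a point and
    the vector preceding it in the ordered set. The last point reuses
    the previous point's angle (stated rule); the first point mirrors
    the second (assumed rule)."""
    n = len(points)
    th = [0.0] * n
    for i in range(1, n - 1):
        fx = points[i + 1][0] - points[i][0]   # following vector
        fy = points[i + 1][1] - points[i][1]
        px = points[i][0] - points[i - 1][0]   # preceding vector
        py = points[i][1] - points[i - 1][1]
        th[i] = math.atan2(fy - py, fx - px)   # angle of the difference
    if n > 1:
        th[0] = th[1]      # assumed first-point rule
        th[-1] = th[-2]    # stated last-point rule
    return th

# A right-angle polyline: the turn at the middle point shows up in Th.
pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
print(angle_coords(pts))
```

For the middle point, the following vector is (0, 1), the preceding vector is (1, 0), and their difference (−1, 1) has angle 3π/4.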
(35) Refinement of the provisional matching of the subset of user points anchored to the feature group points on the basis of the L coordinate uses the points {U.sub.0, U.sub.n1, . . . , U.sub.ni, . . . U.sub.nN−1} of this subset as starting points for a recursive best match search. For the sake of simplicity, the following refers only to the second point U.sub.n1 of the subset of user points and the respective second vertex V.sub.1 of the feature group points, without loss of generality: similar considerations apply to any one of the points {U.sub.0, U.sub.n1, . . . , U.sub.ni, . . . U.sub.nN−1} of the subset of user points. Thus, with reference to
D.sup.2=(Th.sub.h−Th.sub.k).sup.2+(L.sub.h−L.sub.k).sup.2.
(36) If the outcome of this comparison is that point U.sub.n1 is indeed the closest to vertex V.sub.1, then recursion of the best match search is stopped and point U.sub.n1 is identified as the best match of vertex V.sub.1. However, if the first recursion of the recursive best match search ascertains that point U.sub.n1−1 preceding point U.sub.n1 in {U} is the closest one, then the recursive best match search is applied to a new triplet shifted to the left of point U.sub.n1 in
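The recursive best match search described above may be sketched as follows (written iteratively for compactness; names are illustrative, and points are assumed to be given directly as (L, Th) pairs):

```python
def best_match(fg_point, user_lt, start):
    """Starting from the provisionally anchored user point, slide along
    the traced line, comparing the squared distances
    D^2 = (L_h - L_k)^2 + (Th_h - Th_k)^2 of a triplet of successive
    user points from the feature group point, until the central point
    of the triplet is the least distant one."""
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    guess = start
    while True:
        # Triplet of successive user points centered on the current guess,
        # clamped at the ends of the traced line.
        lo = max(guess - 1, 0)
        hi = min(guess + 1, len(user_lt) - 1)
        closest = min(range(lo, hi + 1), key=lambda j: d2(user_lt[j], fg_point))
        if closest == guess:   # stop: the guess is the best match
            return guess
        guess = closest        # otherwise recurse on a shifted triplet

# User points as (L, Th) pairs; values are illustrative.
user = [(0.0, 0.1), (0.2, 0.1), (0.4, 0.5), (0.6, 0.9), (0.8, 0.9), (1.0, 0.9)]
print(best_match((0.55, 0.8), user, start=2))  # walks right to index 3
```

Each step either terminates or moves the guess one position along the line, so the search stops after at most as many steps as there are user points.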
(37) Other embodiments of the method according to the disclosure may execute the recursive best match search by looking at three successive points whose end point (instead of the central point) coincides with the point that the preceding recursion has recognized as closest to the vertex of the feature group under consideration.
(38) Further embodiments of the method according to the disclosure may execute the recursive best match search looking at more than three points, e.g. four or five points, the distances of which from the vertex of the feature group under consideration are compared to each other. In some embodiments, the number of such points could vary from one recursion to the next, e.g. it could decrease from the first recursion onward until reduced to a lower limit.
(39) In other embodiments of the method according to the disclosure, step S4 may first compute the custom coordinate system (L, Th) for all the points of the whole ordered set {U} of user points and then may search for the user point closest to each point (i.e. vertex) of the feature group points by comparing the distances—calculated in the custom coordinate system (L, Th)—of at least some or all the user points from the considered feature group point.
(40) In other words, in such an embodiment step S4 transforms Cartesian coordinates of both points {U} belonging to the line traced by the user and points {FG} of the respective feature group (i.e. of the specific facial/cranial feature of the default model to which such traced line corresponds) to custom coordinates (L, Th) where it matches the appropriate user point of the line traced by the user to each feature group point according to a least distance criterion (in the custom coordinate system). This entails a number of advantages when compared to a proximity-based approach applied in the Cartesian coordinate system, since the transform-based approach according to the disclosure allows the user to define features on multiple scales. For instance, a side profile of a person's nose and mouth contains features at coarse scale (forehead portion of the curve) and fine scale (curvature of the nose, mouth/lips): a purely proximity-based approach for such a feature group would generally mismatch contour points near portions containing detailed variations, while the transform-based approach according to the disclosure allows the user to better fit the line traced by the user to the respective feature group.
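This alternative embodiment, which compares every user point against each feature group point in the transformed space rather than searching recursively, may be sketched as follows (illustrative names; points given as (L, Th) pairs):

```python
def match_all(fg_lt, user_lt):
    """For each feature group point, pick the least-distant user point
    by exhaustive comparison in the transformed 2D space (L, Th)."""
    def d2(p, q):
        # Squared distance D^2 = (L_h - L_k)^2 + (Th_h - Th_k)^2.
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    return [min(range(len(user_lt)), key=lambda j: d2(user_lt[j], fg))
            for fg in fg_lt]

fg = [(0.0, 0.0), (1.0, 1.0)]
user = [(0.0, 0.1), (0.5, 0.5), (0.9, 0.95)]
print(match_all(fg, user))  # [0, 2]
```

The exhaustive search trades the O(1) neighborhood of the recursive variant for a full scan of {U} per feature group point, which remains inexpensive for the point counts recited in claim 11.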
(41) Again with reference to
(42) The subset of user points replacing the former feature group 2180 has sharp corners, since it is linearly interpolated (as is any other feature group). Therefore, the next optional step S6 in
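The disclosure does not name a specific smoothing algorithm for step S6; Chaikin corner cutting is shown below purely as an illustrative way to round the sharp, linearly interpolated corners of an updated feature polyline (shown in 2D for brevity; the same scheme applies per coordinate in 3D):

```python
def chaikin_smooth(points, iterations=2):
    """Round sharp corners of a polyline by corner cutting.

    Each pass replaces every segment with two points at 1/4 and 3/4 of
    its length, keeping the original endpoints fixed, so repeated
    passes converge toward a smooth curve.
    """
    for _ in range(iterations):
        out = [points[0]]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            out.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            out.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        out.append(points[-1])
        points = out
    return points
```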
(43) Advantageously, step S6 may apply any local distortion between the default model and the fitting model to the texture map grabbed from the original patient's photograph(s) to get a realistic end model. For instance, in
(44) In step S7, the updated smoothed model is optionally recorded. The fitting model may be displayed on a display so that the user can appreciate the update of the default model resulting from the modifications caused by the traced line.
(45) In some embodiments, the default may be to accept the updated default model (the fitting model) as substantially corresponding to the gesture-based edited 3D default model. In other embodiments, in step S8, the method according to the disclosure may optionally allow for feedback from the user on the resulting fitting model. If the user is satisfied, by default or by inputting an acceptance command, and wants to accept the resulting model without any further modifications, then the method ends (see “Yes” branch exiting from step S8 in
(46) For instance, in a second repetition of the method the user could trace a line 410 following the patient's jaw line shown in
(47) Advantageously, the initial default model and any or all of the subsequently obtained fitting models may be stored in a memory, along with the history of model deformations the user performs. Also, the custom coordinates (L, Th) of all the feature group points of all the feature groups of the initial default model could be stored in the memory, although this is not essential to the methodology disclosed, since the cost of computing the custom coordinates (L, Th) for all the feature group points of one (or more) feature group(s) is not burdensome.
(48) The technique described here could be useful for a wide range of 3D model applications that are normally not achievable or difficult to achieve through currently available modeling techniques, such as those used in CAD and gaming applications.
(49) As will be appreciated by those skilled in the art, the methods of the present disclosure may be embodied, at least in part, in software and carried out in a computer system or other data processing system. Therefore, in some exemplary embodiments hardware may be used in combination with software instructions to implement the present disclosure.
(50) A machine-readable medium may be used to store software and data which cause the system to perform methods of the present disclosure. The above-mentioned machine-readable medium may include any suitable medium capable of storing and transmitting information in a form accessible by a processing device, for example, a computer. Some examples of the machine-readable medium include, but are not limited to, magnetic disc storage, flash memory devices, optical storage, random access memory, etc.
(51) With reference to
(52) The systems and methods of the present disclosure are especially useful when implemented on, or integrated with, an automated system, for example, a robotic system comprising a robotic arm. In particular, such an automated system can be a hair harvesting, implantation or hair transplantation system.
(53) The various embodiments described above are provided by way of illustration only and should not be construed to limit the claimed disclosure. These embodiments are susceptible to various modifications and alternative forms, and it should be understood that the invention generally, as well as the specific embodiments described herein, cover all modifications, equivalents and alternatives falling within the scope of the appended claims. By way of non-limiting example, it will be appreciated by those skilled in the art that particular features or characteristics described in reference to one figure or embodiment may be combined as suitable with features or characteristics described in another figure or embodiment. Further, those skilled in the art will recognize that the devices, systems, and methods disclosed herein are not limited to one field, such as hair restoration, but may be applied to any number of fields. The description, therefore, is not to be taken in a limiting sense, and the scope of the present disclosure is defined by the appended claims.
(54) It will be further appreciated by those skilled in the art that the disclosure is not limited to editing of 3D models of heads, nor to medical/cosmetic applications, and that the gesture-based editing of 3D models according to the present disclosure may also be used for editing models of other real targets, whether different parts of a person's body or even objects, and/or in different applications that avail of computer graphics for processing and showing the appearance of a person or of an arrangement, while still remaining within the scope of protection of the present disclosure.