METHOD OF GENERATING FACIAL EXPRESSION AND THREE-DIMENSIONAL (3D) GRAPHIC INTERFACE DEVICE USING THE SAME
20230143019 · 2023-05-11
CPC classification
G06T19/20
PHYSICS
International classification
G06T19/20
PHYSICS
Abstract
A method of generating a facial expression and a three-dimensional (3D) graphic interface device therefor according to an exemplary embodiment of the present disclosure are provided. The method of generating a facial expression includes generating two or more component shapes corresponding to each action unit by using at least one action unit; generating a first component morphing set corresponding to a first action unit among the at least one action unit and a second component morphing set corresponding to a second action unit among the at least one action unit; and generating a result shape by combining at least a portion of the first component morphing set and at least a portion of the second component morphing set.
Claims
1. A method of generating a facial expression, comprising: generating two or more component shapes corresponding to each action unit by using at least one action unit; generating a first component morphing set corresponding to a first action unit among the at least one action unit and a second component morphing set corresponding to a second action unit among the at least one action unit; and generating a result shape by combining at least a portion of the first component morphing set and at least a portion of the second component morphing set; wherein each of the first component morphing set and the second component morphing set includes two or more generated component shapes.
2. The method of claim 1, wherein each component morphing set has a plurality of mesh regions composed of lines and vertices based on a human facial muscle structure.
3. The method of claim 2, wherein the generating of the result shape is combining a first set of component shapes corresponding to at least one component morphing of the first component morphing set and a second set of component shapes corresponding to at least one component morphing of the second component morphing set, wherein the combining of the first set of component shapes and the second set of component shapes includes, applying a preset first weight value to a parameter value of the first set of component shapes; applying a preset second weight value to a parameter value of the second set of component shapes; and calculating a result value by calculating a first weight parameter value that is the parameter value to which the first weight value is applied and a second weight parameter value that is the parameter value to which the second weight value is applied.
4. The method of claim 3, wherein the generating of the result shape includes, adjusting the calculated result value so as not to exceed a preset threshold value.
5. The method of claim 4, wherein the adjusting of the calculated result value so as not to exceed the threshold value includes: determining a ratio for adjusting the first weight parameter value based on a geometry regarding the first action unit; determining a ratio for adjusting the second weight parameter value based on a geometry regarding the second action unit; and adjusting each of the first weight parameter value and the second weight parameter value based on the determined ratio.
6. The method of claim 5, wherein the determining of the ratio includes: determining the first weight parameter value in consideration of at least one of a movement, a movement range, and a vector value of meshes constituting the geometry of the first action unit; and determining a ratio to which each of the second weight parameter values is applied in consideration of at least one of a movement, a movement range, and a vector value of meshes constituting the geometry of the second action unit.
7. A three-dimensional (3D) graphic interface device for generating a facial expression, comprising: a storage unit configured to store at least one face model; and a control unit configured to be connected to the storage unit and generate a facial expression, wherein the control unit is configured to, generate at least two or more component shapes corresponding to each action unit by using at least one action unit, generate a first component morphing set corresponding to a first action unit among the at least one action unit and a second component morphing set corresponding to a second action unit among the at least one action unit, and generate a result shape by combining at least a portion of the first component morphing set and at least a portion of the second component morphing set, wherein each of the first component morphing set and the second component morphing set includes two or more generated component shapes.
8. The 3D graphic interface device of claim 7, wherein each component morphing set has a plurality of mesh regions composed of lines and vertices based on a human facial muscle structure.
9. The 3D graphic interface device of claim 8, wherein the control unit is configured to combine a first set of component shapes corresponding to at least one component morphing of the first component morphing set and a second set of component shapes corresponding to at least one component morphing of the second component morphing set, wherein the control unit is configured to calculate a result value by applying a preset first weight value to a parameter value of the first set of component shapes, applying a preset second weight value to a parameter value of the second set of component shapes, and calculating a first weight parameter value that is the parameter value to which the first weight value is applied and a second weight parameter value that is the parameter value to which the second weight value is applied, in order to combine the first set of component shapes and the second set of component shapes.
10. The 3D graphic interface device of claim 9, wherein the control unit is configured to adjust the calculated result value so as not to exceed a preset threshold value.
11. The 3D graphic interface device of claim 10, wherein the control unit is configured to, determine a ratio for adjusting the first weight parameter value based on a geometry regarding the first action unit; determine a ratio for adjusting the second weight parameter value based on a geometry regarding the second action unit; and adjust each of the first weight parameter value and the second weight parameter value based on the determined ratio.
12. The 3D graphic interface device of claim 10, wherein the control unit is configured to, determine a ratio to which the first weight parameter value is applied in consideration of at least one of a movement, a movement range, and a vector value of meshes constituting the geometry of the first action unit, and determine a ratio to which the second weight parameter value is applied in consideration of at least one of a movement, a movement range, and a vector value of meshes constituting the geometry of the second action unit.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
[0052] Advantages and features of the present disclosure and methods to achieve them will become apparent from descriptions of exemplary embodiments herein below with reference to the accompanying drawings. However, the present disclosure is not limited to the exemplary embodiments disclosed herein but may be implemented in various different forms. The exemplary embodiments are provided to make the description of the present disclosure thorough and to fully convey the scope of the present disclosure to those skilled in the art. It is to be noted that the scope of the present disclosure is defined only by the claims. In connection with the description of drawings, the same or like reference numerals may be used for the same or like elements.
[0053] In the disclosure, expressions “have,” “may have,” “include” and “comprise,” or “may include” and “may comprise” used herein indicate presence of corresponding features (for example, elements such as numeric values, functions, operations, or components) and do not exclude the presence of additional features.
[0054] In the disclosure, expressions “A or B,” “at least one of A or/and B,” or “one or more of A or/and B,” and the like used herein may include any and all combinations of one or more of the associated listed items. For example, the “A or B,” “at least one of A and B,” or “at least one of A or B” may refer to all of case (1) where at least one A is included, case (2) where at least one B is included, or case (3) where both of at least one A and at least one B are included.
[0055] The expressions, such as “first,” “second,” and the like used herein, may refer to various elements of various exemplary embodiments of the present disclosure, but do not limit the order and/or priority of the elements. Furthermore, such expressions may be used to distinguish one element from another element. For example, “a first user device” and “a second user device” indicate different user devices regardless of the order or priority. For example, without departing from the scope of the present disclosure, a first element may be referred to as a second element, and similarly, a second element may also be referred to as a first element.
[0056] It will be understood that when an element (for example, a first element) is referred to as being “(operatively or communicatively) coupled with/to” or “connected to” another element (for example, a second element), it can be understood as being directly coupled with/to or connected to another element or coupled with/to or connected to another element via an intervening element (for example, a third element). On the other hand, when an element (for example, a first element) is referred to as being “directly coupled with/to” or “directly connected to” another element (for example, a second element), it should be understood that there is no intervening element (for example, a third element).
[0057] According to the situation, the expression “configured to (or set to)” used herein may be replaced with, for example, the expression “suitable for,” “having the capacity to,” “designed to,” “adapted to,” “made to,” or “capable of.” The term “configured to (or set to)” does not necessarily mean only “specifically designed to” in hardware. Instead, the expression “a device configured to” may mean that the device is “capable of” operating together with another device or other components. For example, a “processor configured to (or set to) perform A, B, and C” may mean a dedicated processor (for example, an embedded processor) for performing a corresponding operation or a general-purpose processor (for example, a central processing unit (CPU) or an application processor) which may perform corresponding operations by executing one or more software programs which are stored in a memory device.
[0058] Terms used in the present disclosure are used to describe specified exemplary embodiments of the present disclosure and are not intended to limit the scope of other exemplary embodiments. Terms in a singular form may include plural forms unless otherwise specified. All terms used herein, including technical and scientific terms, have the same meanings as those generally understood by a person skilled in the art. Terms that are defined in a dictionary should be interpreted as having the same or similar meanings as in the relevant related art, and should not be interpreted in an idealized or overly formal way unless expressly so defined in the present disclosure. In some cases, even terms defined in the specification may not be interpreted to exclude exemplary embodiments of the present disclosure.
[0059] Features of various exemplary embodiments of the present disclosure may be partially or fully combined or coupled. As will be clearly appreciated by those skilled in the art, technically various interactions and operations are possible, and respective exemplary embodiments may be implemented independently of each other or may be implemented together in an associated relationship.
[0060] Hereinafter, to aid understanding of the disclosure presented in the present specification, the terms used herein are briefly summarized.
[0061] In the present specification, a facial action coding system (FACS) is a method of analyzing human facial expressions based on the anatomy of human facial muscles, and includes action units and facial action descriptors.
[0062] In the present specification, the action unit refers to a basic unit of a facial expression which is formed by an individual facial muscle or a combination of a plurality of facial muscles. The facial expression may be formed by an action unit alone or by a combination of two or more action units.
[0063] In the present specification, the term ‘morphing’ (blend shapes) refers to a technique for generating the shape of a new facial expression through linear interpolation between a basic facial expression model (e.g., an expressionless model) and a model of another facial expression.
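The linear interpolation underlying morphing can be sketched as follows. This is an illustrative Python sketch, not part of the disclosed device: the function name, the flat list of vertex coordinates, and the sample values are assumptions made for the example.

```python
# Illustrative sketch of 'morphing' (blend shapes): linear interpolation
# between a neutral (expressionless) model and a target expression model.
# Vertex positions are represented here as flat lists of floats.

def blend(neutral, target, t):
    """Linearly interpolate vertex positions: t=0 gives the neutral
    face, t=1 gives the full target expression."""
    if not 0.0 <= t <= 1.0:
        raise ValueError("morph parameter t must be in [0, 1]")
    return [n + t * (g - n) for n, g in zip(neutral, target)]

neutral = [0.0, 0.0, 0.0]   # neutral vertex coordinates (hypothetical)
smile = [0.0, 1.0, 0.5]     # the same vertices in a 'smile' target
half_smile = blend(neutral, smile, 0.5)   # halfway toward the smile
```

At `t = 0.5` each vertex moves halfway from its neutral position toward its target position, which is the behavior claim 3's weighted parameters build on.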
[0064] In the present specification, a geometry refers to a three-dimensional object represented as a mesh, which is a three-dimensional surface created through three-dimensional modeling using points, lines, and surfaces (polygons).
[0065] In the present specification, a model refers to a head object which is composed of a geometry. A model has either the basic facial expression or one of the facial expressions defined in the FACS.
[0066] In the present specification, a component shape is a model in which only a specific region expresses a specific facial expression relative to the basic facial expression model. One facial expression model may be divided into several component shapes, and the original facial expression model is recovered when all the divided component shapes are morphed together.
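The decomposition just described can be sketched as follows; this is a hypothetical Python illustration in which the function names, the index-set region masks, and the sample vertex data are assumptions, not the disclosed implementation.

```python
def split_into_components(neutral, expression, region_masks):
    """Split a full expression into per-region component shapes.
    Each component shape moves only the vertices of its region;
    morphing all components together reproduces the expression."""
    deltas = [e - n for n, e in zip(neutral, expression)]
    components = []
    for mask in region_masks:
        # keep the displacement only inside this region's vertex indices
        components.append([d if i in mask else 0.0
                           for i, d in enumerate(deltas)])
    return components

def apply_components(neutral, components, weights):
    """Morph weighted component shapes onto the neutral model."""
    out = list(neutral)
    for comp, w in zip(components, weights):
        out = [o + w * d for o, d in zip(out, comp)]
    return out

neutral = [0.0, 0.0, 0.0, 0.0]          # hypothetical neutral vertices
expression = [1.0, 2.0, 0.0, 3.0]       # a full facial expression
components = split_into_components(neutral, expression, [{0, 1}, {2, 3}])
```

Applying all components at full weight reconstructs the original expression, which mirrors the statement that the original model is obtained when all divided component shapes are morphed.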
[0067] Hereinafter, various exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
[0069] Referring to
[0070] The 3D graphic interface device 300 may provide various interface screens for generating a facial expression using a face model, and the various interface screens may be interface screens which are related to a 3D graphic tool. For example, an interface screen for generating a facial expression may be an interface screen which is related to a plug-in and an add-on which are applied to a 3D graphic tool.
[0071] Specifically, the 3D graphic interface device 300 includes a communication unit 310, an input unit 320, a display unit 330, a storage unit 340, and a control unit 350.
[0072] The communication unit 310 connects the 3D graphic interface device 300 to an external device such that the 3D graphic interface device 300 communicates with the external device using wired/wireless communication.
[0073] The input unit 320 may include a mouse or a keyboard which may receive, from outside the 3D graphic interface device 300 (e.g., from a user), a command or data to be used by a component (e.g., the control unit 350) of the 3D graphic interface device 300.
[0074] The display unit 330 may display various contents to the user. For example, the display unit 330 may display various interface screens for generating a facial expression using a face model.
[0075] The storage unit 340 may store various data used to generate facial expressions.
[0076] In various exemplary embodiments, the storage unit 340 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., an SD or XD memory or the like), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, and an optical disk. The 3D graphic interface device 300 may also operate in connection with a web storage that performs the storage function of the storage unit 340 over the Internet.
[0077] The control unit 350 is operatively connected to the communication unit 310, the input unit 320, the storage unit 340, and the display unit 330, and may perform various instructions for generating a facial expression using a face model.
[0078] In order to prevent an abnormal facial expression from being caused when two or more morphings are combined using the prior art, the control unit 350 of the 3D graphic interface device according to an exemplary embodiment of the present disclosure generates a facial expression as follows: it generates component shapes obtained by dividing an action unit according to the areas of the respective facial muscles, so that a new facial expression (for example, reference numeral 230) is expressed within the range in which human facial muscles can actually move; it creates component morphing sets by morphing these component shapes onto a basic facial expression; and it then combines a plurality of the component morphing sets.
[0079] Hereinafter, operations of the control unit 350 described above will be described in detail with reference to
[0081] Referring to
[0082] Specifically, referring to
[0083] Referring to
[0084] Furthermore, when two or more different action units are expressed together, the component shapes corresponding to each action unit operate by morphing. Next, the control unit 350 may generate a new facial expression by morphing the component shapes corresponding to a first action unit and the component shapes corresponding to a second action unit.
[0085] At this time, when both the component shapes corresponding to the first action unit and the component shapes corresponding to the second action unit are morphed at 100% intensity, a muscle contraction limit is exceeded in the area where the facial muscles used by the two action units overlap, and an abnormal, inaccurate face shape results. Accordingly, the morphing parameters of the component model corresponding to the overlapping area are adjusted to generate the facial expression desired by the manufacturer. In existing methods, whenever two or more action units are morphed, corrective shapes compensating for the inaccurate face shape must be newly manufactured one by one and additionally morphed. Since the present disclosure solves this by simply adjusting the parameters, the time required to manufacture corrective shapes can be reduced by several orders of magnitude.
[0086] This will be described in detail with reference to
[0087] First, referring to
[0088] The control unit 350 combines a first set of component shapes 605 corresponding to at least one component morphing of the first component morphing set 600 and a second set of component shapes 615 corresponding to at least one component morphing of the second component morphing set 610. Here, the points denoted by reference numerals 605 and 615 correspond to facial muscle structures at the same location.
[0089] In this case, the control unit 350 applies a preset first weight value to a parameter of the first set of component shapes 605, and applies a preset second weight value to a parameter of the second set of component shapes 615. Next, the control unit 350 calculates a result value from the parameter value to which the first weight value is applied and the parameter value to which the second weight value is applied. Here, the calculated result value does not exceed a preset threshold result value associated with the new facial expression. The threshold result value is set so that the result obtained by combining two or more action units does not exceed the contraction limit of the facial muscles of the new facial expression, and an inaccurate appearance is therefore not generated.
[0090] The control unit 350 may adjust the result value, which is calculated from a first weight parameter value (the parameter to which the first weight value is applied) and a second weight parameter value (the parameter to which the second weight value is applied), so as not to exceed the threshold value. To adjust at least one of the weight parameter values in this manner, the control unit 350 may determine a ratio for adjusting the weight parameter values based on a geometry regarding the first action unit and a geometry regarding the second action unit. For example, the control unit 350 may determine the ratio to which each of the first weight parameter value and the second weight parameter value is applied in consideration of the position, movement range, vector values and the like of the meshes constituting the geometry of each of the first action unit and the second action unit, and may apply each of the first weight value and the second weight value based on the determined ratio.
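The threshold adjustment described above can be sketched as follows. This is an illustrative Python sketch under simplifying assumptions: the function name and scalar parameter representation are hypothetical, and a single proportional ratio stands in for the per-unit ratios that, per the disclosure, would be determined from the mesh geometry of each action unit.

```python
def combine_with_limit(p1, w1, p2, w2, limit):
    """Combine two weighted morph parameter values; where their sum
    would exceed the muscle-contraction limit, scale both weighted
    values down by a common ratio so the result stays at the limit.
    (In the disclosure, the per-unit ratios would be informed by the
    movement, movement range, and vector values of each geometry.)"""
    wp1 = w1 * p1          # first weight parameter value
    wp2 = w2 * p2          # second weight parameter value
    total = wp1 + wp2
    if total <= limit:
        return total       # within the contraction limit: no adjustment
    ratio = limit / total  # proportional adjustment ratio
    return wp1 * ratio + wp2 * ratio
```

For example, two overlapping units each morphed at full intensity against a limit of 1.5 are scaled back by a ratio of 0.75 each, so the combined value sits exactly at the limit instead of overshooting it.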
[0091] Hereinafter, result shapes of new facial expressions generated according to the present exemplary embodiment will be described with reference to
[0093] Referring to
[0094] Specifically, the control unit 350 generates the first component morphing set 200 corresponding to the first action unit, the second component morphing set 210 corresponding to the second action unit, and the third component morphing set 220 corresponding to the third action unit, and operates respective morphing parameters which belong to the first component morphing set 200, the second component morphing set 210 and the third component morphing set 220. In more detail, the control unit 350 applies a preset first weight value to a parameter value corresponding to at least a portion of a plurality of component morphings of the first component morphing set 200, applies a preset second weight value to a parameter value corresponding to at least a portion of the plurality of component morphings of the second component morphing set 210, and applies a preset third weight value to a parameter value corresponding to at least a portion of a plurality of component morphings of the third component morphing set 220. Subsequently, the control unit 350 forms the result shape 700 as a result of morphing using the parameter values of the component morphings to which the corresponding weight values are applied.
[0095] In the result shape 700 formed in this manner, as compared to the new facial expression 230 described in
[0096] Referring to
[0097] In the result shape 810 according to the present exemplary embodiment, as compared to the result shape 800 obtained using the existing method, the interference caused by the geometric structure when morphing two or more action units together is removed, so that the expressions around the eyes appear more natural and closer to real facial expressions.
[0098] Hereinafter, a method of generating a facial expression according to the present exemplary embodiment will be described with reference to
[0100] Referring to
[0101] Next, the control unit 350 generates a result shape by combining at least a portion of the first component morphing set and at least a portion of the second component morphing set, in step S920. The result shape may be formed by calculating movement values of each of the first component morphing set and the second component morphing set.
[0102] In order to generate a result shape by combining at least a portion of the first component morphing set and at least a portion of the second component morphing set, the control unit 350 may apply a preset first weight value to a parameter value corresponding to at least a portion of a plurality of component morphings of the first component morphing set 200, and apply a preset second weight value to a parameter value corresponding to at least a portion of a plurality of component morphings of the second component morphing set 210.
[0103] Subsequently, the control unit 350 forms a result shape as a result of morphings calculated by using the parameter values of component morphings to which the corresponding weight values are applied.
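The combination step just described can be sketched end to end as follows; this is an illustrative Python sketch, with hypothetical function and variable names, in which each morphing set is represented by its per-vertex displacements and each displacement is clamped to a preset threshold as in the adjustment described earlier.

```python
def generate_result_shape(neutral, set1_deltas, set2_deltas, w1, w2, limit):
    """Apply the preset first and second weight values to the deltas of
    the first and second component morphing sets, sum them per vertex,
    clamp each combined displacement to the preset threshold, and add
    the result onto the neutral geometry to form the result shape."""
    out = []
    for n, d1, d2 in zip(neutral, set1_deltas, set2_deltas):
        disp = w1 * d1 + w2 * d2             # weighted combination
        disp = max(-limit, min(limit, disp))  # keep within threshold
        out.append(n + disp)
    return out

# Hypothetical two-vertex example: both displacements exceed the limit
# of 1.5 and are clamped before being applied to the neutral model.
shape = generate_result_shape([0.0, 0.0], [1.0, 2.0], [1.0, -4.0],
                              1.0, 1.0, 1.5)
```

The per-vertex clamp here plays the role of the threshold result value: the combined morphing never pushes a vertex past the preset contraction limit.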
[0104] Hereinafter, an interface screen of a 3D graphic interface device for generating a result shape of a new facial expression by combining two or more component morphing sets will be described with reference to
[0106] Referring to
[0107] The control unit 350 may combine at least a portion of the first component morphing set corresponding to the first action unit and at least a portion of the second component morphing set according to a request for combining two or more component morphing sets, and display a combined result shape in the first graphic area 1010. In combining, weight parameter values applied to respective component morphings may be displayed in the second graphic area 1020 as shown in
[0108] The graphic object 1050 may be used to adjust the weight parameter values which are applied to respective component morphings. Here, the graphic object 1050 may be expressed in the form of a slide bar, but is not limited thereto.
[0109] When the weight parameter value is adjusted according to a user input, the control unit 350 may apply the adjusted weight parameter value to the parameter value corresponding to at least a portion of the plurality of component morphings of the first component morphing set, and display a result shape to which the weight parameter value is applied in the first graphic area 1010.
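The slide-bar interaction above can be sketched as follows; this is a hypothetical Python illustration in which the class, the display callback, and the clamping of the slider to [0, 1] are assumptions made for the example, standing in for the graphic object 1050 and the first graphic area 1010.

```python
class WeightSliderController:
    """When the user drags the slide bar, store the new weight
    parameter value, recompute the result shape, and hand it to a
    display callback (a stand-in for the first graphic area)."""

    def __init__(self, neutral, deltas, display):
        self.neutral = neutral    # neutral vertex positions
        self.deltas = deltas      # component morphing displacements
        self.display = display    # callback that redraws the shape
        self.weight = 0.0

    def on_slider_change(self, value):
        # clamp the slider input to a valid weight parameter value
        self.weight = max(0.0, min(1.0, value))
        shape = [n + self.weight * d
                 for n, d in zip(self.neutral, self.deltas)]
        self.display(shape)

# Hypothetical usage: capture each redraw in a list instead of a screen.
captured = []
ctrl = WeightSliderController([0.0, 0.0], [2.0, 4.0], captured.append)
ctrl.on_slider_change(0.5)
```

Each slider change immediately re-applies the adjusted weight parameter value and refreshes the displayed result shape, matching the interaction described for the interface screen.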
[0110] As such, in the present disclosure, by adjusting the weight parameter value in association with each morphing when two or more component morphing sets are combined, interference caused by geometric facial muscle structures in morphings between two or more action units is removed, so that a result shape can be expressed similarly to an actual facial expression.
[0111] An apparatus and a method according to an exemplary embodiment of the present disclosure may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer readable medium. The computer readable medium may include program instructions, data files, data structures, and the like, alone or in combination.
[0112] The program instructions recorded on the computer readable medium may be those specially designed and configured for the present disclosure, or may be those known and available to a person having ordinary skill in the computer software field. Examples of the computer readable recording medium include magnetic media such as hard disks, floppy disks and magnetic tapes, optical media such as CD-ROMs and DVDs, magneto-optical media such as floptical disks, and hardware devices specially configured to store and execute program instructions, such as ROMs, RAMs, flash memories, and the like. Examples of program instructions include not only machine language code such as that generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
[0113] The hardware devices described above may be configured to operate as one or more software modules to perform operations of the present disclosure, and vice versa.
[0114] Although the exemplary embodiments of the present disclosure have been described in detail with reference to the accompanying drawings, it is to be understood that the present disclosure is not limited to those exemplary embodiments and various changes and modifications may be made without departing from the scope of the present disclosure. Therefore, the exemplary embodiments disclosed in the present disclosure are intended to illustrate rather than limit the scope of the present disclosure, and the scope of the technical idea of the present disclosure is not limited by these exemplary embodiments. Therefore, it should be understood that the above-described exemplary embodiments are illustrative in all aspects and not restrictive. The scope of the present disclosure should be construed according to the claims, and all technical ideas in the scope of equivalents should be construed as falling within the scope of the present disclosure.