CHEWING ASSISTANCE SYSTEM
20230038875 · 2023-02-09
Assignee
Inventors
CPC classification
G06T7/246
PHYSICS
G16H50/20
PHYSICS
G16H50/70
PHYSICS
A61B5/1123
HUMAN NECESSITIES
G16H50/30
PHYSICS
International classification
A61B5/00
HUMAN NECESSITIES
A61B5/11
HUMAN NECESSITIES
Abstract
Provided are moving image obtaining means that obtains a moving image of a region including at least a mouth or a peripheral portion of the mouth in a face, analysis means that analyzes a chewing action based on the moving image of the region obtained by the moving image obtaining means, quality determination means that determines quality of the chewing action based on information of the chewing action analyzed by the analysis means, and extraction means that extracts assistance information corresponding to the chewing quality determined by the quality determination means, from chewing information storage means.
Claims
1. A chewing assistance system comprising an information processing device that includes: chewing information storage means that stores information about chewing quality; moving image obtaining means that obtains a moving image of a region including at least a mouth or a peripheral portion of the mouth in a face; analysis means that analyzes a chewing action based on the moving image of the region obtained by the moving image obtaining means; quality determination means that determines quality of the chewing action based on information of the chewing action analyzed by the analysis means; and extraction means that extracts assistance information corresponding to the chewing quality determined by the quality determination means, from the chewing information storage means.
2. The chewing assistance system according to claim 1, wherein the analysis means includes feature detection means that detects a feature point in a face from an image of the region, and action analysis means that analyzes an action based on change of the feature point detected by the feature detection means.
3. The chewing assistance system according to claim 2, wherein the action analysis means determines, in a case where a quantity of change of the feature point indicates a value that exceeds a predetermined threshold value, that the change is caused by chewing, and analyzes the action of the chewing.
4. The chewing assistance system according to claim 2, wherein the feature point includes at least one of a nasal tip, a nasion, a corner of a mouth, a vertex of an upper lip, a vertex of a lower lip, a vertex of a jaw, and a point along an outline of a cheek near masseter.
5. The chewing assistance system according to claim 2, wherein the change of the feature point includes at least one of change of a position of the feature point, change of a distance between two feature points, and change of an area surrounded by three or more feature points.
6. The chewing assistance system according to claim 1, wherein the chewing action analyzed by the analysis means includes an action associated with at least one of a total number of chewing times, chewing rhythm, a motion of a mouth, a motion of a jaw, occlusal balance between anterior and posterior sides/between left and right sides, and a motion of masseter.
7. The chewing assistance system according to claim 1, wherein the quality of the chewing action determined by the quality determination means includes quality based on at least one of determinations as to whether a total number of chewing times is large or small, whether chewing rhythm is proper, whether mouth opening behavior is proper, whether chewing balance between a left side and a right side is proper, whether eating behavior (motion of a mouth) is proper, and whether use of masseter is proper.
8. The chewing assistance system according to claim 1, wherein the quality determination means compares a chewing action with a previous chewing action of a same person and determines whether the chewing action has improved.
9. The chewing assistance system according to claim 1, wherein the quality determination means has a machine learning mechanism, and the quality of the chewing action is determined with reference to a learning result from the machine learning mechanism.
10. (canceled)
11. A computer-readable recording medium for use in an information processing device, the recording medium having a control program recorded thereon for causing the information processing device to function as the chewing assistance system according to claim 1, the control program including a chewing assistance program causing the information processing device to function as the moving image obtaining means, the analysis means, the quality determination means, and the extraction means.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DESCRIPTION OF EMBODIMENTS
[0061] Next, an embodiment of the present invention will be described in detail with reference to the accompanying drawings.
[0062] Chewing during a meal is associated with a person's preferred hardness or softness of food, the motion of biting through and masticating the food, the number of times the food is chewed, the chewing time, the rhythm, and the like. The balance between the chewing teeth is also among them. For determining chewing quality based on whether or not such a function of chewing and eating deliciously is proper, the system of the present invention measures, as chewing quality, the number of chewing times, the chewing rhythm, the eating behavior (whether the food is ground, whether the jaw moves only in the up-down direction, whether the lips remain open), the motion of the jaw during chewing, the occlusal balance (among the anterior, posterior, left, and right sides), the motion of muscle, and the like by analyzing moving images, indicates the chewing quality as, for example, a numerical value, and further indicates the change with the elapse of time, for example, as the difference between a past state and the present state. The system of the present invention can therefore present an improving state of the chewing quality.
[0063] Specifically, as illustrated in
[0064] The processing unit 2 includes a CPU such as a microprocessor as a main unit, and also has a not-illustrated storage unit, such as a RAM and a ROM, in which a program that provides the procedures for various processing operations and processing data are stored. The storage means 3 includes a memory, a hard disk, and the like disposed inside and/or outside the information processing device 10. A part or all of the contents of the storage unit may be stored in, for example, a hard disk or a memory of another computer connected to the information processing device 10 so as to be able to communicate with it. The information processing device having such a configuration may be a dedicated device installed in a dental clinic, a hospital, another institution, a store, or the like, or may be a general-purpose household personal computer. The information processing device may also be, for example, a smartphone carried by a user.
[0065] The processing unit 2 includes, as its functions, a moving image obtaining unit 21 as moving image obtaining means, an analysis unit 22, a quality determination unit 23 as quality determination means, an information extraction unit 24, and an information output processing unit 25. The moving image obtaining unit 21 obtains two-dimensional or three-dimensional moving image information of a region including at least a mouth or a peripheral portion of the mouth in a face of a user, which is captured and transmitted by the imaging means 4, and stores the moving image information in an image information storage unit 31a of a user information storage unit 31. The analysis unit 22 analyzes a chewing action based on the moving image information, and stores information of the analyzed chewing action in an action information storage unit 31b of the user information storage unit 31. The quality determination unit 23 determines quality of the chewing action based on the information of the chewing action, and stores information of the determined quality of the chewing action in a determination information storage unit 31c of the user information storage unit 31. The information extraction unit 24 receives input of the information of the determined chewing quality, and extracts information to be recommended from information about the chewing quality stored in a chewing information storage unit 32. The information output processing unit 25 presents the information to the user by, for example, displaying the information on a display (information display unit 5). These processing functions are executed by the above-described program.
[0066] The imaging means 4 is implemented by a CCD camera or the like, and may be a CCD camera included in a smartphone of a user configured as the information processing device 10, or may be implemented by, for example, an external camera connected to, for example, a dedicated computer device as the information processing device 10. 3D imaging or the like can also be utilized. The imaging means 4 obtains moving image information of a region including at least a mouth or a peripheral portion of the mouth in a face of a user, preferably a region including a nasal tip, a nasion, a corner of a mouth, a vertex of an upper lip, a vertex of a lower lip, a vertex of a jaw, and a cheek near masseter, each of which serves as a “feature point” described below.
[0067] The analysis unit 22 functions as analysis means, and more specifically includes a feature detection processing unit 22a, a feature quantity calculation processing unit 22b, and an action analysis processing unit 22c. The feature detection processing unit 22a detects feature points in a face from the image of the above-described region, and stores position information thereof in a feature point information storage unit 310 of the action information storage unit 31b. The feature quantity calculation processing unit 22b calculates a feature quantity to be used for analyzing an action, based on the positions of the detected feature points, and stores the feature quantity in a feature quantity storage unit 311. The action analysis processing unit 22c analyzes an action based on change of the calculated feature quantity.
[0068] The feature detection processing unit 22a can use, as a technique for detecting a face feature point, various known methods such as an active shape model (ASM), an active appearance model (AAM), and a constrained local model (CLM). As the feature points to be detected in a face, a nasal tip, a nasion, a corner of a mouth, a vertex of an upper lip, a vertex of a lower lip, a vertex of a jaw, and a point along an outline of a cheek near masseter are preferably used.
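As an illustration only (not part of the claimed invention): the numeric point labels quoted in the following paragraphs (“33”, “48”, “51”, “54”, “57”) match the widely used 68-point facial-landmark numbering produced by common ASM/AAM/CLM-style detectors. A minimal sketch of a named feature-point lookup, assuming that convention, might look like:

```python
# Hypothetical sketch: the index assignments below assume the common
# 68-point facial-landmark convention; they are illustrative, not
# mandated by the description above.
FEATURE_POINTS = {
    "nasal_tip": 33,
    "right_mouth_corner": 48,   # subject's right
    "upper_lip_vertex": 51,
    "left_mouth_corner": 54,
    "lower_lip_vertex": 57,
    "jaw_vertex": 8,            # chin point in the 68-point scheme
}

def get_point(landmarks, name):
    """Return the (x, y) coordinate of a named feature point.

    `landmarks` is a sequence of 68 (x, y) tuples as produced by a
    detector such as an ASM/AAM/CLM model.
    """
    return landmarks[FEATURE_POINTS[name]]
```

The mapping simply decouples the analysis code from whichever landmark detector is used.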
[0069]
[0070] Preferable examples of the feature quantity calculated by the feature quantity calculation processing unit 22b include relative positions and distances of points such as left and right mouth corners (feature points), a vertex (feature point) of an upper lip, a vertex (feature point) of a lower lip, and a vertex (feature point) of a jaw, relative to a nasal tip (feature point) or a nasion (feature point) which is not affected during chewing, and a distance as a thickness of a lip between a vertex (feature point) of an upper lip and a vertex (feature point) of a lower lip, a distance as a width of a lip between left and right mouth corners (feature points), and areas which correspond to masseter and are surrounded by mouth corners (feature points) and a plurality of predetermined positions (feature points) along an outline of a cheek.
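The feature quantities described above (distances relative to the nasal tip, lip thickness, lip width) can be sketched as follows. This is a minimal illustration; the 68-point landmark indices are an assumption, as noted earlier.

```python
import math

def distance(p, q):
    """Euclidean distance between two feature points (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def feature_quantities(landmarks):
    """Compute a few of the feature quantities described above from a
    68-point landmark list (index choices are illustrative assumptions)."""
    nasal_tip = landmarks[33]
    upper_lip = landmarks[51]
    lower_lip = landmarks[57]
    left_corner = landmarks[54]
    right_corner = landmarks[48]
    jaw = landmarks[8]
    return {
        "nose_to_upper_lip": distance(nasal_tip, upper_lip),
        "nose_to_jaw": distance(nasal_tip, jaw),
        "lip_thickness": distance(upper_lip, lower_lip),  # upper to lower lip
        "lip_width": distance(left_corner, right_corner),  # corner to corner
    }
```

Computing these per frame yields the time series whose changes are analyzed below.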
[0071]
[0072] Based on such a graph representing the calculated change of the distance (feature quantity) from the nasal tip “33” to the vertex “51” of the upper lip as illustrated in
[0073] Examples in which the feature quantity calculation processing unit 22b similarly calculates the feature quantity are as follows. For example, a relative position coordinate of a vertex “57” of a lower lip relative to the nasal tip “33” and a distance therebetween are each calculated as the feature quantity as illustrated in
[0074] Thus, the chewing action, specifically, the number of chewing times, rhythm, chewing balance between the left side and the right side, and the like, can be analyzed by analyzing a relative position and a distance of each of points such as left and right mouth corners, a vertex of an upper lip, a vertex of a lower lip, and a vertex of a jaw relative to a nasal tip or a nasion, which is not affected during chewing.
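Counting chewing cycles from such a per-frame distance series can be sketched with the threshold criterion stated in claim 3 (change of the feature quantity exceeding a predetermined threshold is treated as chewing). The baseline choice and threshold value below are assumptions for illustration:

```python
def count_chews(series, threshold):
    """Count chewing cycles in a time series of a feature quantity
    (e.g., the nasal-tip-to-jaw distance per frame).

    A simple sketch: one cycle is counted each time the signal crosses
    above (baseline + threshold) after having returned below it, per
    the threshold criterion described in the claims.  The minimum of
    the series is used as the baseline, an illustrative assumption.
    """
    baseline = min(series)
    above = False
    chews = 0
    for value in series:
        if not above and value > baseline + threshold:
            chews += 1
            above = True
        elif above and value <= baseline + threshold:
            above = False
    return chews
```

Inter-crossing intervals from the same loop could additionally be collected to assess chewing rhythm.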
[0075] Furthermore, examples in which the feature quantity calculation processing unit 22b similarly calculates the feature quantity are as follows. A relative position coordinate of the right mouth corner “48” relative to the left mouth corner “54” and a distance therebetween are each calculated as the feature quantity as illustrated in
[0076] Other examples in which the feature quantity calculation processing unit 22b calculates the feature quantity are as follows. A left-side area and a right-side area which correspond to masseter and which are surrounded by mouth corners and a plurality of predetermined positions (feature points) along an outline of a cheek are calculated as the feature quantities as illustrated in
[0077] In the upper region illustrated in
[0078] Thus, in a case where a left-side area and a right-side area surrounded by mouth corners and a plurality of predetermined positions along an outline of a cheek are calculated as the feature quantities, the action analysis processing unit 22c can more directly analyze, for example, a way of exerting a force in a mouth during chewing, i.e., motion of masseter, and chewing balance, caused by the motion, between the anterior and posterior sides/between the left and right sides.
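The cheek-region areas used for analyzing masseter motion can be sketched with the shoelace formula; the choice of region vertices and the balance measure are illustrative assumptions:

```python
def polygon_area(points):
    """Area of a polygon given its vertices in order (shoelace formula)."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def left_right_balance(left_region, right_region):
    """Ratio of left to right cheek-region areas; values far from 1.0
    suggest one-sided chewing.  Region vertex choice is an assumption."""
    return polygon_area(left_region) / polygon_area(right_region)
```

Tracking these two areas over time gives both the motion of masseter (per-side area change) and the left-right chewing balance (ratio of the changes).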
[0079] Furthermore, the motions of the respective feature points are defined as patterns based on position coordinates, of mouth corners, the jaw, the vertexes of the upper lip and the lower lip, and the like, calculated by the feature quantity calculation processing unit 22b, and pattern determination may be performed by a machine learning mechanism or the like to analyze a chewing action. The motion of each feature point is preferably extracted such that, for example, the V shape is determined as one chewing action and a motion in the chewing action is extracted as illustrated in
[0080] Thus, by defining, as a pattern, a trajectory of each of the feature points during chewing, motion (mouth opening behavior, chewing balance between the left side and the right side, and the like) of a mouth during chewing can be more accurately analyzed. Preferably, the action analysis processing unit 22c has such a machine learning mechanism, and determines each of the above-described actions with reference to learning results from the machine learning mechanism.
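A rule-based stand-in for the trajectory-pattern determination described above can be sketched as follows (a trained machine learning mechanism would replace this rule in the preferred configuration; the ratio threshold is an illustrative assumption):

```python
def classify_cycle(trajectory, ratio_threshold=0.5):
    """Classify the jaw-vertex trajectory of one chewing cycle.

    If the horizontal excursion is small relative to the vertical one,
    the motion is treated as a simple up-down "V"; otherwise the cycle
    is treated as involving a grinding (lateral) component.
    """
    xs = [p[0] for p in trajectory]
    ys = [p[1] for p in trajectory]
    horizontal = max(xs) - min(xs)
    vertical = max(ys) - min(ys)
    if vertical == 0:
        return "grinding"
    return "vertical" if horizontal / vertical < ratio_threshold else "grinding"
```

Per-cycle labels like these can then feed the left-right balance and mouth-opening-behavior determinations.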
[0081] The quality of the chewing action determined by the quality determination unit 23 includes, for example, quality based on determination as to whether the number of chewing times is large or small, whether or not chewing rhythm is proper, whether or not mouth opening behavior is proper, whether or not chewing balance between the left side and the right side is proper, whether or not eating behavior (motion of a mouth) is proper, and whether or not use of masseter is proper.
[0082] For example, information as to whether or not a user's chewing quality has improved as compared with her/his previous quality, and information as to whether or not the chewing quality is commensurate with the user's age, are preferably obtained so as to be included in the quality, based on the obtained data, the user's previous information in the determination information storage unit 31c, and age-based statistical information. Preferably, the quality determination unit 23 has a machine learning mechanism, and determines the quality of the chewing action with reference to learning results from the machine learning mechanism.
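A rule-based sketch of such a quality determination, including the comparison with a previous session (claim 8) and an age-based reference, might look like this. All metric names and thresholds are illustrative assumptions; the preferred configuration uses a machine learning mechanism instead:

```python
def determine_quality(metrics, previous=None, reference=None):
    """Illustrative rule-based quality determination.

    `metrics` holds the current session's analysis results (e.g.,
    "chew_count" and "lr_balance"); `previous` holds the same user's
    earlier results; `reference` holds age-based statistics.
    """
    result = {}
    if reference is not None:
        # Commensurate with age: enough chews per the reference data.
        result["count_ok"] = metrics["chew_count"] >= reference["min_chews"]
    # A left-right balance ratio near 1.0 is considered proper.
    result["balance_ok"] = 0.8 <= metrics["lr_balance"] <= 1.25
    if previous is not None:
        # Improvement relative to the same person's previous session.
        result["improved"] = metrics["chew_count"] > previous["chew_count"]
    return result
```

The resulting flags would be stored in the determination information storage unit 31c and passed on to the extraction step.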
[0083] The information extraction unit 24 functions as extraction means. For example, if the chewing quality is not commensurate with the user's age, the information extraction unit 24 preferably extracts information such as age-based oral cavity function information, information about devices for developing or improving the function, and information about medical specialists based on the user's residence.
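The extraction step can be sketched as a lookup keyed on the quality determination, with a plain dictionary playing the role of the chewing information storage unit 32. The keys and messages below are invented for illustration:

```python
# Illustrative stand-in for the chewing information storage unit 32;
# all keys and advice strings are assumptions for this example.
ASSISTANCE_STORE = {
    "count_low": "Try chewing each bite about 30 times.",
    "balance_off": "Alternate chewing sides to balance left and right.",
}

def extract_assistance(quality):
    """Extract assistance information matching the determined quality."""
    tips = []
    if not quality.get("count_ok", True):
        tips.append(ASSISTANCE_STORE["count_low"])
    if not quality.get("balance_ok", True):
        tips.append(ASSISTANCE_STORE["balance_off"])
    return tips
```

The extracted tips would then be handed to the information output processing unit 25 for display.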
[0084]
[0085] Firstly, the moving image obtaining unit 21 obtains, from the imaging means 4, moving image information of a face of a user at least from putting of prescribed food (predetermined food) or ordinary food in a mouth up to swallowing (S101), and stores the moving image information in the image information storage unit 31a of the user information storage unit 31 (S102).
[0086] Subsequently, the feature detection processing unit 22a detects feature points in the face from the image of the region (S103), and stores position information of the feature points in the feature point information storage unit 310 of the action information storage unit 31b (S104). The feature quantity calculation processing unit 22b calculates the feature quantity to be used for analyzing an action based on the positions of the detected feature points (S105) and stores the feature quantity in the feature quantity storage unit 311 (S106).
[0087] Subsequently, the action analysis processing unit 22c analyzes an action based on change of the calculated feature quantity (S107), and stores an analysis result in the analysis result storage unit 312 (S108). Subsequently, the quality determination unit 23 determines quality of the chewing action based on the analysis result (S109), and stores information of the determined quality of the chewing action in the determination information storage unit 31c of the user information storage unit 31 (S110).
[0088] Subsequently, the information extraction unit 24 receives input of the information of the determined chewing quality and extracts information to be recommended from information of chewing quality stored in the chewing information storage unit 32 (S111). The information output processing unit 25 presents the extracted information to the user by, for example, displaying the information on a display (the information display unit 5) (S112).
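The flow of steps S101 to S112 above can be sketched end to end as follows. This is a minimal illustration: the landmark detector, indices, threshold, minimum chew count, and advice message are all assumptions, and the storage units are modeled as plain Python containers:

```python
def run_chewing_assistance(frames, detect_landmarks, threshold=1.5):
    """End-to-end sketch of steps S101-S112: obtain frames, detect
    feature points, compute a feature quantity, analyze the action,
    determine quality, and extract assistance information.

    `detect_landmarks` stands in for an external face-landmark
    detector returning indexable (x, y) points per frame.
    """
    store = {"images": frames, "feature_quantity": [], "analysis": {}}
    for frame in frames:                                    # S101-S102
        pts = detect_landmarks(frame)                       # S103-S104
        nose, jaw = pts[33], pts[8]                         # assumed indices
        dist = ((nose[0] - jaw[0]) ** 2 + (nose[1] - jaw[1]) ** 2) ** 0.5
        store["feature_quantity"].append(dist)              # S105-S106
    series = store["feature_quantity"]
    baseline, above, chews = min(series), False, 0          # S107-S108
    for v in series:                                        # threshold crossings
        if not above and v > baseline + threshold:
            chews, above = chews + 1, True
        elif above and v <= baseline + threshold:
            above = False
    store["analysis"]["chew_count"] = chews
    quality = {"count_ok": chews >= 30}                     # S109-S110
    tips = [] if quality["count_ok"] else ["Chew more."]    # S111
    return {"quality": quality, "assistance": tips}         # S112
```

A usage example with a stubbed detector: `run_chewing_assistance([0, 5, 0, 5, 0], lambda f: {33: (0, 0), 8: (0, 10 + f)})` counts the two excursions of the jaw and, since the count is well below the assumed minimum, returns the assistance tip.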
[0089] Although the embodiment of the present invention has been described above, the present invention is not limited to the embodiment at all. For example, instead of the processing unit being configured by software processing performed by a computer, it is also preferable that a part or the entirety of the processing unit is configured by a hardware processing circuit. In this case, a processing circuit for artificial intelligence can also be used as the machine learning mechanism, and it is needless to say that the present invention can be implemented in various modes without departing from the gist of the present invention.
INDUSTRIAL APPLICABILITY
[0090] According to the present invention, chewing quality is determined by the simple means of taking a moving image of a mouth or a peripheral portion of the mouth, and assistance information based on the determination result can be provided. Therefore, by combining the present invention with instruments and commodities/services for education and training of chewing for children, commodities and services contributing to the healthy development of children can be provided. Furthermore, by combining the present invention with cosmetic training instruments and services for, for example, the use of masticatory muscle and well-balanced chewing among the anterior, posterior, left, and right sides, cosmetic commodities and services for preventing distortion of the face and obesity, and for maintaining a vital, healthy facial expression, can also be provided. Moreover, by combining the present invention with commodities and services for addressing oral frailty, such as deterioration of oral cavity function and weakening of the body in elderly people and the like, commodities and services contributing to the extension of healthy life expectancy can also be provided.
DESCRIPTION OF THE REFERENCE CHARACTERS
[0091] 1 chewing assistance system
[0092] 2 processing unit
[0093] 3 storage means
[0094] 4 imaging means
[0095] 5 information display unit
[0096] 10 information processing device
[0097] 21 moving image obtaining unit
[0098] 22 analysis unit
[0099] 22a feature detection processing unit
[0100] 22b feature quantity calculation processing unit
[0101] 22c action analysis processing unit
[0102] 23 quality determination unit
[0103] 24 information extraction unit
[0104] 25 information output processing unit
[0105] 31 user information storage unit
[0106] 31a image information storage unit
[0107] 31b action information storage unit
[0108] 31c determination information storage unit
[0109] 32 chewing information storage unit
[0110] 310 feature point information storage unit
[0111] 311 feature quantity storage unit
[0112] 312 analysis result storage unit