METHOD AND SYSTEM FOR ANALYZING HAIR CHARACTERISTICS
20240404094 · 2024-12-05
Assignee
Inventors
- Sin-Ye Jhong (New Taipei City, TW)
- Chih-Hsien Hsia (Taipei City, TW)
- Chun-Wei Chen (New Taipei City, TW)
CPC classification
G06V20/70
PHYSICS
A61B5/0059
HUMAN NECESSITIES
A61B5/448
HUMAN NECESSITIES
G06V10/7715
PHYSICS
A61B5/7264
HUMAN NECESSITIES
G06V10/42
PHYSICS
A61B5/446
HUMAN NECESSITIES
International classification
G06V10/42
PHYSICS
G06V10/77
PHYSICS
Abstract
A hair feature analysis method and system are provided. The hair feature analysis method includes the steps of capturing high magnification and low magnification images of a scalp area and performing preprocessing on these images. The preprocessed high magnification and low magnification images are then input into an artificial intelligence model to simultaneously detect hair follicles and calculate hair widths in the scalp area. Hair characteristics are then calculated based on the analyzed high magnification and low magnification images. The artificial intelligence model can be an R-CNN model or a variant thereof.
Claims
1. A hair feature analysis method, comprising: capturing a high magnification image and a low magnification image of a scalp area, the high magnification image providing detailed information about follicles, and the low magnification image providing an overall view of hair density and distribution; preprocessing the high magnification and low magnification images, wherein the step of preprocessing the captured high magnification and low magnification images includes denoising, resizing, normalizing, or recapturing images; inputting the preprocessed high magnification image and low magnification image into an artificial intelligence model; calculating at least one hair feature from the high magnification image and the low magnification image analyzed by the artificial intelligence model, wherein the hair feature includes average hair diameter, thin hair density ratio, or total hair density ratio; and wherein the artificial intelligence model is an HTC enhanced model that is modified from a hybrid task cascade model, and the HTC enhanced model has the following features: replacing a semantic segmentation module in the hybrid task cascade model with a global feature enhancement module; connecting the refined semantic features from the global feature enhancement module with multiple framework branches; and introducing at least one framework-mask enhancement module connecting the above-mentioned framework branches and at least one mask branch.
2. (canceled)
3. The hair feature analysis method according to claim 1, wherein the global feature enhancement module includes multiple convolutional layers with different kernel sizes to extract multi-scale features.
4. The hair feature analysis method according to claim 1, wherein the HTC enhanced model has the framework branch and the mask branch at each stage, the number of the framework-mask enhancement modules is multiple, and each framework-mask enhancement module is connected between the framework branch and the mask branch at each stage.
5. The hair feature analysis method according to claim 1, wherein the HTC enhanced model has the framework branch at each stage but only has one mask branch, and the framework-mask enhancement module is configured between the last framework branch and the mask branch, serving to connect the last framework branch and the mask branch.
6. (canceled)
7. (canceled)
8. (canceled)
9. The hair feature analysis method according to claim 1, further comprising: displaying the analyzed image and calculated hair features to the user; storing the analyzed image and calculated hair features in a database; or generating a report based on the calculated hair features.
10. (canceled)
11. A hair feature analysis system, comprising: an image capturing device for capturing a high magnification image and a low magnification image of a scalp area, wherein the high magnification image provides detailed information about follicles, and the low magnification image provides an overall view of hair density and distribution; a preprocessing module for preprocessing the captured high magnification image and low magnification image, wherein the step of preprocessing the captured high magnification and low magnification images includes denoising, resizing, normalizing, or recapturing images; an artificial intelligence model for simultaneously detecting multiple hair follicles and calculating multiple hair widths in the high magnification and low magnification images; and an analysis module for receiving the high magnification and low magnification images analyzed by the artificial intelligence model and calculating at least one hair feature, wherein the hair feature includes average hair diameter, thin hair density ratio, or total hair density ratio; wherein the artificial intelligence model is an HTC enhanced model, which is modified from a hybrid task cascade model, and the HTC enhanced model has the following features: replacing a semantic segmentation module in the hybrid task cascade model with a global feature enhancement module; connecting the refined semantic features from the global feature enhancement module with multiple framework branches; and introducing at least one framework-mask enhancement module connecting the above-mentioned framework branches and at least one mask branch.
12. (canceled)
13. The hair feature analysis system according to claim 11, wherein the global feature enhancement module includes multiple convolutional layers with different kernel sizes to extract multi-scale features.
14. The hair feature analysis system according to claim 11, wherein the HTC enhanced model has the framework branch and the mask branch at each stage, the number of the framework-mask enhancement modules is multiple, and each framework-mask enhancement module is connected between the framework branch and the mask branch at each stage.
15. The hair feature analysis system according to claim 11, wherein the HTC enhanced model has the framework branch at each stage but only has one mask branch, the framework-mask enhancement module is configured between the last framework branch and the mask branch, and the framework-mask enhancement module connects the last framework branch and the mask branch.
16. (canceled)
17. (canceled)
18. (canceled)
19. The hair feature analysis system according to claim 11, further comprising: a display device for displaying the analyzed image and calculated hair features to the user; a storage module for storing the analyzed image and calculated hair features in a database; or a report generation module for generating a report based on the calculated hair features.
20. (canceled)
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] The objects, spirits, and advantages of the preferred embodiments of the present invention will be readily understood by the accompanying drawings and detailed descriptions, wherein:
DETAILED DESCRIPTION OF THE INVENTION
[0023] In order to describe in detail the technical content, structural features, achieved objectives and effects of the instant application, the following detailed descriptions are given in conjunction with the drawings and specific embodiments. It should be understood that these embodiments are only used to illustrate the application and not to limit the scope of the instant application.
[0024] Please refer to
[0025] The artificial intelligence model 130 is responsible for simultaneously detecting hair follicles from the preprocessed high-magnification and low-magnification images and calculating hair widths. This artificial intelligence model 130 can be an R-CNN model or another variant of the R-CNN model; a more detailed introduction to the variants of the R-CNN model will be provided later. In addition, the analysis module 140 is responsible for analyzing the images and calculating hair features based on the output of the artificial intelligence model 130. Using the information provided by the artificial intelligence model 130, the analysis module 140 can perform further analysis, calculate the hair features of the scalp area, and provide personalized hair care suggestions, helping the user understand the health of the hair and discover potential hair-related problems. The collaboration between the artificial intelligence model 130 and the analysis module 140 ensures a comprehensive and accurate assessment of hair health and characteristics.
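Purely as a non-limiting illustration of the division of labor described above (the class and function names below are the editor's assumptions, not part of the disclosure), the hand-off from the artificial intelligence model 130 to the analysis module 140 might look like:

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class DetectionResult:
    """Illustrative output of the AI model: follicle boxes and per-hair widths."""
    follicle_boxes: List[Tuple[int, int, int, int]]  # (x1, y1, x2, y2) per follicle
    hair_width_px: List[float]                       # width of each hair, in pixels


def analyze(result: DetectionResult, microns_per_px: float) -> dict:
    """Sketch of the analysis module: turn raw detections into hair features."""
    widths_um = [w * microns_per_px for w in result.hair_width_px]
    return {
        "follicle_count": len(result.follicle_boxes),
        "mean_width_um": sum(widths_um) / len(widths_um) if widths_um else 0.0,
    }


# Toy detection result with two follicles and two measured hairs.
demo = DetectionResult(follicle_boxes=[(10, 10, 30, 30), (50, 40, 70, 60)],
                       hair_width_px=[4.0, 6.0])
features = analyze(demo, microns_per_px=10.0)
```

Here the pixel-to-micrometer scale factor (10.0) is arbitrary; in practice it would come from the calibration of the image capture device 110.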
[0026] Next, please refer to
[0027] Please refer to
[0028] In step S240, the preprocessing module 120 will command the image capture device 110 to automatically recapture the image or notify the operator of the hair feature analysis system 100 to manually recapture the image. When a new image is captured, the process returns to step S220 for preprocessing. In step S250, the preprocessed high-magnification and low-magnification images are sent to the artificial intelligence model 130. Afterwards, the process continues with hair feature analysis, the details of which will be described in the following paragraphs.
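The preprocessing steps (denoising, normalizing, and the quality gate that triggers recapture in step S240) can be sketched as follows. The mean filter and the contrast threshold below are illustrative stand-ins chosen by the editor, not the disclosed implementation:

```python
import numpy as np


def denoise(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Simple k x k mean filter as a stand-in for the denoising step."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)


def normalize(img: np.ndarray) -> np.ndarray:
    """Scale pixel values into [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img, dtype=float)


def needs_recapture(img: np.ndarray, min_contrast: float = 0.1) -> bool:
    """Hypothetical quality gate: a flat, low-contrast image triggers recapture."""
    return float(img.max() - img.min()) < min_contrast


img = np.random.default_rng(0).uniform(0, 255, size=(32, 32))
clean = normalize(denoise(img))
```

If `needs_recapture` returned True, the flow would loop back through image capture and step S220, mirroring the S240 path described above.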
[0029] Please refer to
[0030] In summary, the HTC model 300 uses the Feature Pyramid Network 340 to extract multi-scale features from the input image. Then, the Region Proposal Network 350 generates a set of region proposals. Each stage refines the region proposals, outputs the final bounding box coordinates of the detected objects, and predicts a binary mask for each detected object in the refined bounding box. The semantic segmentation branch 330 provides additional spatial context to improve object detection and segmentation in cluttered backgrounds. The output of the HTC model 300 includes multiple detected hair follicles and multiple hairs, which are subsequently used by the analysis module 140 to calculate various hair features, such as average hair diameter, fine hair density ratio, and total hair density ratio. For a more detailed description of the Hybrid Task Cascade model, please refer to the paper Hybrid Task Cascade for Instance Segmentation (arxiv.org/abs/1901.07518).
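The cascade structure summarized above, in which each stage re-refines the region proposals produced by the previous stage, can be shown with a minimal framework-free sketch. The toy stages below merely nudge boxes toward a fixed target and are illustrative only; they are not the HTC detection heads:

```python
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)


def cascade_refine(proposals: List[Box],
                   stages: List[Callable[[Box], Box]]) -> List[List[Box]]:
    """Structural sketch of the HTC cascade: every stage refines the boxes
    output by the previous stage (mask prediction omitted for brevity)."""
    per_stage = []
    boxes = proposals
    for stage in stages:
        boxes = [stage(b) for b in boxes]
        per_stage.append(boxes)
    return per_stage


def make_stage(step: float) -> Callable[[Box], Box]:
    """Toy stage: move each coordinate a fraction 'step' toward a target box."""
    target = (10.0, 10.0, 20.0, 20.0)
    return lambda b: tuple(v + step * (t - v) for v, t in zip(b, target))


stages = [make_stage(0.5), make_stage(0.5), make_stage(0.5)]
refined = cascade_refine([(0.0, 0.0, 40.0, 40.0)], stages)
```

In the real model each stage is a trained detection head with progressively stricter IoU criteria; the sketch only reproduces the data flow.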
[0031] Please refer to
[0032] Please refer to
[0033] Please refer to
[0034] Afterwards, the output tensor of the fully connected layer 462 is transformed from N×4096 to N×256×14×14 by the reshape layer 464. This operation allows the tensor to match the size required by the following 1×1 convolution layer 466. The output of the reshape layer 464 is connected to the 1×1 convolution layer 466, which in turn connects to the mask branch 320. By connecting the features of the bounding box branch 310 to the mask branch 320 in this way, the box-mask enhancement module 460 can make the predictions of the mask branch 320 more accurate. Please note that in
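The fully connected → reshape → convolution path can be sketched numerically as below. The dimensions are deliberately reduced for illustration and do not reproduce the patent's exact tensor sizes; the weights are random stand-ins for trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
N, IN_DIM, C, H, W = 2, 64, 8, 14, 14  # illustrative sizes only

# Fully connected layer: project box-branch features to C*H*W values per ROI.
fc_weight = rng.standard_normal((IN_DIM, C * H * W)) * 0.01
box_feats = rng.standard_normal((N, IN_DIM))
fc_out = box_feats @ fc_weight                 # shape (N, C*H*W)

# Reshape layer: (N, C*H*W) -> (N, C, H, W), matching the mask branch layout.
spatial = fc_out.reshape(N, C, H, W)

# 1x1 convolution, written as a per-pixel channel mix (sum over input channels).
conv_w = rng.standard_normal((C, C)) * 0.01
enhanced = np.einsum("oc,nchw->nohw", conv_w, spatial)
```

The resulting `enhanced` tensor has the spatial layout of the mask branch, which is what lets the box-branch information be fused into mask prediction.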
[0035] Next, two embodiments of the configuration of the box-mask enhancement module will be introduced. In one embodiment, as shown in
[0036] Then, please refer to
[0037] Next, a comprehensive description of the hair feature analysis method of this invention will be given. Please refer to
[0038] Here, we will provide a more detailed introduction to steps S640 and S650, starting with step S640. After the artificial intelligence model 130 simultaneously detects multiple hair follicles in the scalp area and calculates the width of multiple hairs from the preprocessed images, the analysis module 140 receives the information output by the artificial intelligence model 130. Based on this information, the analysis module 140 segments the hair, with the middle segment of each hair being selected as the basis for width calculation. Then, by analyzing the selected hair segments, the number of pixels corresponding to the hair width is determined. Subsequently, the analysis module 140 converts the pixel count representing hair width into a real-world measurement scale (such as micrometers) to provide accurate hair width. In addition, the analysis module 140 also calculates the fine hair density ratio, which is the ratio of fine hair (i.e., hair with a diameter smaller than a certain threshold) to the total number of hairs. This indicator can help evaluate hair strength and identify potential hair thinning or breakage problems. Furthermore, the analysis module 140 also calculates the total hair density ratio, which is the ratio of the total number of hair follicles in the examined scalp area to the area. This parameter helps identify the degree of hair loss or thinning.
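The width conversion and the two ratio features described above can be sketched as follows. The 40 µm thinness threshold, the 0.5 cm² area, and the helper names are assumptions introduced for illustration, not values from the disclosure:

```python
def hair_widths_um(width_px: list, microns_per_px: float) -> list:
    """Convert measured hair widths from pixels to micrometers."""
    return [w * microns_per_px for w in width_px]


def fine_hair_ratio(widths_um: list, threshold_um: float = 40.0) -> float:
    """Share of hairs thinner than a threshold (threshold value illustrative)."""
    if not widths_um:
        return 0.0
    return sum(1 for w in widths_um if w < threshold_um) / len(widths_um)


def total_hair_density(follicle_count: int, area_cm2: float) -> float:
    """Follicles per square centimeter of examined scalp area."""
    return follicle_count / area_cm2


widths = hair_widths_um([3.0, 5.0, 8.0], microns_per_px=10.0)  # 30, 50, 80 um
ratio = fine_hair_ratio(widths)        # one of three hairs is below 40 um
density = total_hair_density(42, 0.5)  # follicles per cm^2
```

A rising `fine_hair_ratio` or falling `total_hair_density` over repeated examinations would be the kind of signal the report in step S650 could surface.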
[0039] Once the analysis module 140 calculates these hair features, the hair feature analysis system 100 proceeds to step S650, presenting the results to the user. In step S650, the display device 150 displays the analyzed images and calculated hair features, providing the user with a visualized result of the hair analysis (as shown in
[0040] Please refer to
[0042] In the above embodiments, the artificial intelligence model 130 uses the HTC model and the enhanced HTC model as examples, but the artificial intelligence model 130 can also use the R-CNN model. When the artificial intelligence model 130 is an R-CNN model, the first step is to generate potential bounding boxes or region proposals for the follicles in the input image. Selective search algorithms or other region proposal algorithms can be used, and regions of interest are determined based on color, texture, and size. Then, for each region of interest, the R-CNN model applies a pre-trained convolutional neural network (CNN) to extract meaningful features. The CNN helps identify patterns and features of follicles and hair, enabling the model to accurately identify and segment them. After feature extraction, a classifier is applied to each region of interest to determine whether it contains a follicle. The classifier, for example, could be a support vector machine, which uses the features extracted in the previous step to make decisions. Next, bounding box regression is performed in parallel with the classification step, optimizing the coordinates of the proposed bounding boxes to better frame the follicles, thereby improving the accuracy of follicle detection and segmentation. Afterward, the R-CNN model can pass the output data to the analysis module 140 for subsequent processing.
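The bounding box regression step mentioned above applies predicted (dx, dy, dw, dh) deltas to each proposal, relative to its center and size. A standard formulation of this transform (as used across the R-CNN family, not specific to this disclosure) is sketched below:

```python
import numpy as np


def apply_box_deltas(boxes: np.ndarray, deltas: np.ndarray) -> np.ndarray:
    """Standard R-CNN box regression: refine (x1, y1, x2, y2) proposals by
    predicted (dx, dy, dw, dh) deltas relative to each box's center and size."""
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]
    cx = boxes[:, 0] + 0.5 * w
    cy = boxes[:, 1] + 0.5 * h

    new_cx = cx + deltas[:, 0] * w          # shift center by a fraction of size
    new_cy = cy + deltas[:, 1] * h
    new_w = w * np.exp(deltas[:, 2])        # scale size in log space
    new_h = h * np.exp(deltas[:, 3])

    return np.stack([new_cx - 0.5 * new_w, new_cy - 0.5 * new_h,
                     new_cx + 0.5 * new_w, new_cy + 0.5 * new_h], axis=1)


proposals = np.array([[10.0, 10.0, 30.0, 30.0]])
deltas = np.zeros((1, 4))          # zero deltas leave the box unchanged
refined = apply_box_deltas(proposals, deltas)
```

The log-space width/height parameterization keeps refined boxes positive-sized regardless of the regressor's raw output.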
[0043] In addition, the artificial intelligence model 130 can also use other variants of the R-CNN model, such as the Cascade R-CNN model or the Mask R-CNN model. The Cascade R-CNN model can refer to the paper Cascade R-CNN: High Quality Object Detection and Instance Segmentation (arxiv.org/abs/1906.09756), and the Mask R-CNN model can refer to the paper Mask R-CNN (arxiv.org/abs/1703.06870).
[0044] Although the invention has been disclosed and illustrated with reference to particular embodiments, the principles involved are susceptible of use in numerous other embodiments that will be apparent to persons skilled in the art. This invention is, therefore, to be limited only as indicated by the scope of the appended claims.