METHOD AND SYSTEM FOR DETERMINING A CHARACTERISTIC OF A KERATINOUS SURFACE AND METHOD AND SYSTEM FOR TREATING SAID KERATINOUS SURFACE

20220005189 · 2022-01-06

Abstract

The present application is directed to a method and system for determining at least one physical and/or chemical characteristic of a keratinous surface of a user, the method comprising the steps of: —receiving data corresponding to at least one image of the keratinous surface, —processing the image by applying at least one machine learning model to said image, —returning at least one numerical value corresponding to a grade of the characteristic of the keratinous surface to be determined.

Claims

1: A method for determining at least one physical and/or chemical characteristic of a keratinous surface of a user, the method comprising the steps of: receiving data corresponding to at least one image of the keratinous surface, processing the image by applying at least one machine learning model to said image, returning at least one numerical value corresponding to a grade of the characteristic of the keratinous surface to be determined.

2: The method according to claim 1, characterized in that it comprises a pre-processing segmentation step, the machine learning model being applied to at least one identified segment.

3: The method according to claim 2, characterized in that the segmentation step is performed by contrast analysis.

4: The method according to claim 2, characterized in that the segmentation step is performed by applying a machine learning model to the image.

5: The method according to claim 1, characterized in that the image is an RGB image.

6: The method according to claim 1, characterized in that the image is a multispectral image comprising at least one spectral band chosen among the visible spectral band 380 nm to 700 nm, the UV band 300 nm to 380 nm and the IR band 700 nm to 1500 nm.

7: The method according to claim 1, characterized in that the image is an infrared image.

8: The method according to claim 1, characterized in that the method is repeated in order to determine a second characteristic of the keratinous surface, the method being preferably repeated using the same image data.

9: The method according to claim 1, characterized in that it comprises a subsequent step of recommending a cosmetic product based on at least some of the real-valued characteristic(s).

10: The method according to claim 1, characterized in that it comprises a subsequent step of determining a recommended cosmetic product composition based on at least some of the real-valued characteristic(s).

11: The method according to claim 10, characterized in that it comprises the subsequent step of making and dispensing a cosmetic product according to the recommended composition.

12: The method according to claim 1, characterized in that the keratinous surface pictured in the image is an area of the skin of a user.

13: The method according to claim 12, characterized in that the area of the skin pictured in the image is an area of the face of the user, preferably an area of a cheek and/or forehead of the user.

14: The method according to claim 12, characterized in that the area of the skin pictured in the image is an area of the scalp of the user.

15: The method according to claim 1, characterized in that the area of the skin pictured in the image comprises hairs.

16: The method according to claim 15, characterized in that the area pictured in the image is a hair-root region of the user.

17: The method according to claim 1, characterized in that the characteristic to be determined is chosen among hair tone, hair diameter, hair density, percentage of white hair, length color, root color, hair shine, scalp dryness, dandruff degree, hair colorimetry, hair spectrum, Eumelanin level, Pheomelanin level, level of artificial dyes, moisture, level of cysteic acid, damage level, lift level, the machine learning model having been trained accordingly to evaluate said characteristics.

18: The method according to claim 1, characterized in that the machine learning model is a pre-trained Convolutional Neural Network.

19: A non-transitory computer storage media storing computer-useable instructions that, when used by a computing device, cause the computing device to perform a method according to claim 1.

20: A system for determining at least one physical and/or chemical characteristic of a keratinous surface of a user, said system comprising an image sensor configured for acquiring an image of the keratinous surface and to transmit the image data to a processor, said processor being configured to process the image data according to a method of claim 1.

21: The system according to claim 20, characterized in that the system is at least partially integrated into a portable camera device.

22: The system according to claim 20, characterized in that the system comprises a camera device including the image sensor, said camera device being configured to transmit the image data wirelessly to a distant processing device.

23: A system for making a personalized cosmetic product, comprising a characteristic determination system according to claim 20 and a dispensing unit configured to deliver a cosmetic product according to characteristics obtained from the characteristic determination system.

Description

[0077] The subject of the present application will be better understood in view of the following detailed description made in reference with the attached drawings in which:

[0078] FIG. 1 generally illustrates the various sequenced steps for determining a user's natural hair tone according to the present invention.

[0079] FIG. 2 generally illustrates the training process involved in obtaining the machine learning models to be applied.

[0080] FIG. 3 shows the step pipeline for use of a colorimetric model.

[0081] FIG. 4 illustrates the various layers applied to the acquired image of the scalp.

[0082] Although the present application is particularly relevant to scalp and hair diagnostics and hair property measurement, it is not limited to such keratinous surfaces and may find application in more general skin surface characterization and measurement.

[0083] As mentioned above, a sector that faces many challenges in this digital era is hair care and hair coloration. Due to the complex nature of hair and the process of hair dyeing, accurate hair diagnostics is crucial in order to provide the clients with personalized hair care and coloration products.

[0084] Hair-related characteristics being the most challenging to obtain in a reliable way, the following description is illustrated by hair measurements. However, as mentioned, the present method is not limited to hair measurement and may find other skin applications.

[0085] Currently at hair salons, the first necessary step before applying any hair coloring product or recommending a hair treatment is hair diagnosis. Among the important features that the hairdresser needs to estimate are the hair tone, white hair percentage, hair diameter and density.

[0086] The root of the hair is commonly the only region where we have access to hair that has not been altered by external factors, such as natural/artificial colorants. It is a region where we can measure natural hair color and white hair percentage as well as hair density.

[0087] The roots of hair (i.e. the first centimeter away from the scalp) present us with clean hair fiber portions that have not been subjected to color change due to hair dyeing or environmental conditions. Therefore, they are a measure of a person's baseline hair characteristics.

[0088] Images acquired in the roots region show not only hairs but also the scalp that can vary highly in color, oiliness, and dandruff content. In addition, the hair fibres themselves, particularly at the root portion, are translucent, resulting in color dependence on the scalp background, and have a certain natural variability both in color and thickness. This diagnosis is currently made manually by hairdressers, and despite their expertise and training they are not always capable of making accurate estimations of all of these features, especially in the non-standard lighting conditions of hair salons.

[0089] Establishing accurate hair diagnostics at roots is a significant challenge with dramatic impact on hair coloration, beauty personalization and clinical evaluation.

[0090] The method according to the present application will be illustrated in detail for hair tone measurement (especially natural hair tone measurement); however, it may be used more generally for determining other physical or chemical properties, depending on the property for which the model has been trained.

[0091] Natural hair tone is the visual attribute of the hair lightness/darkness and is related to the hair melanin concentration. As such, it is bound by the color space of hair colors existing in nature. It is traditionally measured using a logarithmic scale starting at 1, for very dark hair, to 10 for very light blond hair.

[0092] Since hair tone is separated into 10 categories, one could address this problem as a classification problem. However, color experts have assessed tones with a precision of ¼ of a hair tone, so binning into ten classes would lose this precision in the labels. The problem has consequently been addressed as a regression problem, which assumes an order and continuity between the values of hair tone. Consequently, the method estimates and returns a real-valued hair tone.

[0093] An important point regarding the present method for valuing a hair property is that the hair tone scale is a perceptually linear notation, which makes it appropriate for use with the present method returning a numerical value (prediction) rather than a class per se (classification).

[0094] According to the present application, determining a user's natural hair tone comprises the following steps, the general sequence of which is shown on FIG. 1.

[0095] First, an image I of the user's scalp 10 at the hair roots 11 is taken using a camera device 20 forming a diagnosis unit. The camera device 20 is preferably handheld. The image I corresponds to an object (scalp) area of about 12×17 mm. The image I is acquired by a 2000×1200 RGB CMOS sensor, but other kinds of high-resolution sensors may be used.

[0096] Before acquisition, a line is parted on the subject's head in order to have an unobstructed view of the hair roots 11. As a consequence, the pictures have an axial symmetry around their middle and are thus generally oriented, with the scalp 10 in the middle and more hairs at the top and bottom.

[0097] The method is used to estimate an unbiased hair tone value h. However, this hair tone h is not directly readable from the acquired images I since they present a mix between hair 11 and scalp 10. Moreover, because of hair translucency, the scalp color can influence the visible hair pixels.

[0098] Hence, the image I data shall be processed with the specific objective of obtaining accurate hair tones h. According to the present invention, the image data is processed by applying a machine learning model to said image before returning a numerical real-valued hair tone.

[0099] The machine learning models hereafter described have been statistically trained using a training set comprising 11175 pictures taken on 407 subjects. Each picture has been assessed and labelled with an actual value of the desired property (Training process is generally shown on FIG. 2).

[0100] Hair tone estimation has been done using the following two approaches, namely a colorimetric model and a deep learning model based on a convolutional neural network (CNN).

[0101] Colorimetric Model

[0102] The idea behind this approach is to transform the imaging device into a colorimetric device that is able to provide standard color values of hair and relate them to the perceptual attribute of hair tone. The complete pipeline of this approach is separated into the following three steps, shown in FIG. 3.

[0103] First, the acquired image I is pre-processed through a segmentation step S. More precisely, the hair pixels of interest are segmented from the image. Then, the RGB values of the segmented hair pixels are transformed into the CIELAB color space and a fitted colorimetric model M estimates the hair tone from the median L* value of hair. As such, this colorimetric approach involves the use of two independent machine learning models successively applied to the image data: first a machine learning model is applied to perform the color space conversion, then a second machine learning model is applied to estimate the hair tone from the median hair L*.

[0104] Hair segmentation S is the first pre-processing step. The main objective of this step is to transform the different RGB values of hair pixels into a single integral L*a*b* value. Therefore, instead of segmenting all hair pixels in the image, which can introduce outliers, it is preferable to robustly segment an adequate number of hair pixels while avoiding specularities and other artifacts. The segmentation method is based on adaptive thresholding in order to broadly separate hair from scalp areas. Before thresholding, a Gaussian filter is applied to reduce the noise in the image. Finally, the resulting hair regions are ranked by size using connected-component analysis in order to eliminate small detected areas that could be specularities or other artifacts falsely segmented as hair.
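By way of illustration, the segmentation pipeline just described (Gaussian denoising, adaptive thresholding, connected-component filtering) may be sketched as follows. This is a minimal sketch using NumPy and SciPy; the function name and all numeric parameters (filter width, threshold offset, minimum component area) are illustrative assumptions, not the values used by the actual device.

```python
import numpy as np
from scipy import ndimage

def segment_hair_pixels(image_rgb, block=31, offset=5.0, min_area=50):
    """Segment a robust subset of hair pixels from a root-region image.

    Sketch of the described pipeline: Gaussian denoising, adaptive
    (local-mean) thresholding -- hair is locally darker than scalp --
    then connected-component analysis to discard small regions that
    may be specularities or other artifacts.
    """
    gray = image_rgb.mean(axis=2)
    # Gaussian filter to reduce noise before thresholding.
    smooth = ndimage.gaussian_filter(gray, sigma=1.0)
    # Adaptive threshold: compare each pixel to its local mean.
    local_mean = ndimage.uniform_filter(smooth, size=block)
    mask = smooth < local_mean - offset            # darker than surroundings
    # Rank connected components by size; keep only large hair regions.
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_area]
    hair_mask = np.isin(labels, keep)
    return image_rgb[hair_mask]                    # (N, 3) array of hair pixels
```

On a synthetic image with a dark band of "hair" over a lighter "scalp", only the dark-band pixels are returned.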

[0105] Then the RGB values are converted to the CIELAB color space by applying a machine learning model. More specifically, the camera device shall be calibrated in order to transform the device-dependent RGB values into a device-independent color space. The CIELAB color space was selected for this transformation since it is relatively perceptually uniform and is widely used in the industry.

[0106] There are various methods in the literature that provide L*a*b* measurements of hair. For the calibration process, it is proposed to estimate the L*a*b* value r ∈ ℝ³ of each RGB pixel expressed as x ∈ ℝ³, using the following model:


r̂(x) = C_r ϕ(x)

[0107] where ϕ(x) is a third-degree polynomial expansion (N = 20 terms) of the RGB pixel values x and C_r are the corresponding coefficients, learned statistically by a machine learning model on a set of K training pairs. Those pairs were obtained by acquiring images of predefined color patches with measured L*a*b* values r_i. For each patch x_i acquired by the device, we take the median RGB pixel value in order to solve the regression problem:

[00001] arg min_{C_r} Σ_{i=1}^{K} ‖r̂(x_i) − r_i‖²

which admits a closed-form solution for C_r, where ‖·‖² denotes the squared L² norm.
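For illustration, the N = 20 terms of ϕ(x) correspond to the full third-degree polynomial in the three RGB channels (1 constant, 3 linear, 6 quadratic and 10 cubic monomials), and the coefficients C_r then follow from ordinary linear least squares. The sketch below assumes this interpretation; the function names are hypothetical.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(x):
    """Third-degree polynomial expansion phi(x) of RGB values
    (N = 20 terms: 1 constant + 3 linear + 6 quadratic + 10 cubic)."""
    x = np.atleast_2d(x)
    feats = [np.ones(len(x))]
    for degree in (1, 2, 3):
        for idx in combinations_with_replacement(range(3), degree):
            feats.append(np.prod(x[:, idx], axis=1))
    return np.stack(feats, axis=1)                # shape (K, 20)

def fit_color_calibration(rgb_patches, lab_targets):
    """Closed-form solution of argmin_C sum_i ||C phi(x_i) - r_i||^2
    via linear least squares on the training color patches."""
    Phi = poly_features(rgb_patches)              # (K, 20)
    C, *_ = np.linalg.lstsq(Phi, lab_targets, rcond=None)
    return C                                      # (20, 3) coefficient matrix

def rgb_to_lab(rgb, C):
    """Apply the learned calibration to RGB pixels."""
    return poly_features(rgb) @ C
```

Since the feature expansion contains constant and linear terms, any affine RGB-to-Lab map is recovered exactly by the fit.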

[0108] Once the hair pixels of interest are segmented, their RGB values are transformed into CIELAB space and the median L*a*b* values are kept. It has been noticed that the perceptual value that varies most significantly between the different hair tones is lightness. In order to keep the model simple and avoid over-fitting on the data, the L* value is chosen as the value most correlated with the hair tone. Using the training set, a one-dimensional cubic smoothing spline f̂(l) is fitted between the M pairs of L* and hair tone values, denoted l_i and h_i respectively. These pairs of L* and hair tone values were computed by averaging over all the corresponding pictures of one participant. We automatically chose the number of spline knots as the minimum number such that the following condition is met:

[00002] (1/M) Σ_{i=1}^{M} (h_i − f̂(l_i))² ≤ σ²

[0109] The smoothing factor σ was set to 0.35, so as to ignore small variations along the hair tone range [1; 10].
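Such a fit can be sketched with SciPy's smoothing spline, whose parameter s bounds the residual sum of squares while minimizing the number of knots; setting s = M·σ² matches the stated condition. The function name and the synthetic data are illustrative assumptions, not the actual calibration data.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def fit_tone_spline(lightness, tone, sigma=0.35):
    """Fit a one-dimensional cubic smoothing spline mapping median hair
    L* to hair tone. UnivariateSpline uses the minimum number of knots
    such that the residual sum of squares stays below s = M * sigma**2,
    i.e. (1/M) * sum_i (h_i - f(l_i))**2 <= sigma**2."""
    order = np.argsort(lightness)       # the spline requires sorted x values
    l = np.asarray(lightness, float)[order]
    h = np.asarray(tone, float)[order]
    return UnivariateSpline(l, h, k=3, s=len(l) * sigma**2)
```

The returned spline is then evaluated at the median L* of a new image to predict its hair tone.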

[0110] To conclude, the hair tone is predicted from a single input image I. We segment a subset of robust hair pixels x in I. Then, after converting each pixel x into L*a*b* space using C_rϕ(x), we compute the median L* value l(I) of these pixels. Finally, the hair tone is given by applying the machine learning model as:


ĥ(I) = f̂(l(I))

[0111] Convolutional Neural Network

[0112] The idea behind such an approach is to learn which patterns in the image are related to the hair tone, with no prior assumption on the patterns to extract. Contrary to the previous colorimetric model that is designed to focus on the lightness of hair pixels, this second one does not have prior knowledge of the problem. The optimization process is purely statistical and learns the patterns in an image I autonomously in order to estimate correct hair tone ĥ(I).

[0113] The basic idea behind a CNN is to apply successive linear operations, such as 3×3 convolutions, with non-linear functions in between them, as well as operations reducing the spatial dimension of the image representation. This produces feature maps of the image, i.e. a representation in terms of visual patterns. Through successive reductions, this representation is reduced to a single real-valued output, which is eventually mapped to the labels h_i during the training process.

[0114] All these operations are represented as successive layers, shown in FIG. 4 for the network we propose for the present hair tone application. Compared to state-of-the-art CNN models, it has fewer convolutional layers; this is motivated by the fact that we are looking for simpler patterns, such as edges and thin fibers, which require fewer successive convolutions to be represented by the network.

[0115] The final dense layers are also reduced, since we have a one-dimensional output ĥ(I). Moreover, residual connections were added to the architecture in order to ease the optimization (direct links in back-propagation) and extend the model representation with simpler patterns.

[0116] For this purpose, we added the output of some layers to a later layer, which is a light operation requiring no additional weights. In order to improve speed, we use separable 2D convolutions as a lighter alternative to regular 2D convolutions. They have fewer parameters to learn and require less computation. The idea is to first apply a 3×3 convolution channel by channel (also called depthwise), which is lighter than a full convolution, and then combine the resulting channels by a 1×1 full convolution. This variant of convolution speeds up the training process as well as the embedded prediction on the device.
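The parameter saving can be made concrete: a depthwise 3×3 convolution followed by a 1×1 pointwise convolution replaces k·k·C_in·C_out weights with k·k·C_in + C_in·C_out. The following is a minimal NumPy/SciPy sketch of the operation itself, not the network's actual implementation (which, per the description, used Keras); function names are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def separable_conv2d(x, depthwise, pointwise):
    """Depthwise-separable 2D convolution, as used to lighten the network.

    x:         (H, W, C_in) feature map
    depthwise: (3, 3, C_in) one 3x3 kernel per input channel
    pointwise: (C_in, C_out) 1x1 convolution mixing the channels
    """
    c_in = x.shape[2]
    # Step 1: 3x3 convolution applied channel by channel (depthwise).
    dw = np.stack(
        [convolve(x[:, :, c], depthwise[:, :, c], mode="nearest")
         for c in range(c_in)], axis=2)
    # Step 2: 1x1 full convolution combining the resulting channels.
    return dw @ pointwise                          # (H, W, C_out)

def param_counts(c_in, c_out, k=3):
    """Weight counts of a regular vs. a separable k x k convolution."""
    full = k * k * c_in * c_out                    # regular convolution
    separable = k * k * c_in + c_in * c_out        # depthwise + pointwise
    return full, separable
```

For C_in = C_out = 64, a full 3×3 convolution needs 36 864 weights while the separable variant needs 4 672, roughly an eight-fold reduction.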

[0117] We now briefly explain the optimization process of such a model, in order to provide some intuition of how the neural network learns the relevant patterns for our problem. If we step back from the successive operations, we see that our estimated hair tone can be written as


ĥ(I)=g(I,Θ)

[0118] where g represents all successive operations of our neural network and Θ is a vector that regroups all the variable parameters, viz. the convolution and dense layer weights. In order to find appropriate values for Θ, we define a loss function ℒ(ĥ(I), h) that penalizes predictions ĥ(I) far from the ground truth h. In our case, we minimize the mean square error ℒ(ĥ(I), h) = (ĥ(I) − h)² over all images I_i and hair tones h_i of our dataset:

[00003] min_Θ Σ_{i=1}^{M} ℒ(ĥ(I_i), h_i) = min_Θ Σ_{i=1}^{M} (g(I_i, Θ) − h_i)²

[0119] Our strategy to solve this minimization is to update Θ iteratively using stochastic gradient descent:


Θ′ = Θ − ϵ ∇̄_Θℒ

[0120] where ϵ is the learning rate and ∇̄_Θℒ is an approximation of the gradient over the full data set. Indeed, the complete gradient Σ_i ∇_Θℒ(ĥ(I_i), h_i) would take too long to compute for each update. Using back-propagation, the gradient can be computed efficiently for a mini-batch of m images. Θ is thus updated at each mini-batch by considering a moving average of the gradient

[00004] ∇̄_Θℒ ← λ ∇̄_Θℒ + (1 − λ) (1/m) Σ_{i∈B} ∇_Θℒ(ĥ(I_i), h_i)

where λ is the momentum and B is the set of indices i of the mini-batch, with B ⊂ {1, …, M} and |B| = m.

[0121] Each mini-batch update of the weights is thus based on a history of the gradient and not only on the mini-batch gradient Σ_{i∈B} ∇_Θℒ(ĥ(I_i), h_i), which varies strongly from one mini-batch to the next. In practice, we passed 800 times over all samples of our data set, with a learning rate of ϵ = 10⁻⁴ and a momentum of λ = 0.9. At the end, in order to tune the parameters Θ further, we passed 20 additional times over our data set with the learning rate divided by 10. During each pass, each image I_i is randomly flipped horizontally and vertically in order to artificially augment our data set. For the successive layers and the optimization process we used the Keras implementation with the TensorFlow back-end.
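The update rule above can be sketched in a few lines. The toy example below (illustrative function and parameter names, not the actual training code) applies the same moving-average gradient scheme to a one-dimensional squared error, showing that it converges to the minimizer.

```python
import numpy as np

def sgd_momentum_step(theta, grad_avg, minibatch_grads, lr=1e-4, momentum=0.9):
    """One parameter update following the described scheme: the mini-batch
    gradient is folded into a moving average (momentum lambda), and the
    moving average is used for the gradient-descent step on theta."""
    batch_grad = np.mean(minibatch_grads, axis=0)          # (1/m) sum of grads
    grad_avg = momentum * grad_avg + (1 - momentum) * batch_grad
    theta = theta - lr * grad_avg                          # descent step
    return theta, grad_avg
```

With the loss (θ − h)², the gradient 2(θ − h) drives θ toward the target h over repeated updates.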

[0122] Although illustrated with two machine learning models (a colorimetric model and a convolutional neural network model), other machine learning techniques may be used; the main aspect of the present application is the use of a statistically trained machine learning model in lieu of classical metrology, which uses image analysis techniques to directly extract the relevant parameter and actually perform a measurement.

[0123] The returned real-valued property may then be used as a parameter for further calculations or steps.

[0124] As a first possibility, the real-valued property may be used in a recommender, the method comprising a subsequent step of recommending a cosmetic product based on said real-valued characteristic.

[0125] As a second possibility, instead of recommending an existing product, the real-valued characteristic may be used to calculate or determine a composition of a cosmetic product based on a desired effect or result. More specifically, the method may comprise a step of calculating a proportion and/or amount of a component based on its power to alter the real-valued characteristic in order to achieve the desired result. For example, if a user wishes to dye their hair in order to reach a hair tone of 5 and their initial hair tone is 7, the method will comprise a step of calculating the amount of oxidant required to reach this result.

[0126] The determined composition may then be manufactured, on site or off site, and dispensed or delivered accordingly to the user.

[0127] As will be appreciated, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects. Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.

[0128] Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a solid state disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, a phase change memory storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.

[0129] Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, e.g., an object oriented programming language such as Java, Smalltalk, C++ or the like, or a conventional procedural programming language, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). It is to be understood that the software for the computer systems of the present invention embodiments may be developed by one of ordinary skill in the computer arts based on the functional descriptions contained in the specification and flow charts illustrated in the drawings. Further, any references herein of software performing various functions generally refer to computer systems or processors performing those functions under software control.

[0130] The various functions of the computer systems may be distributed in any manner among any quantity of software modules or units, processing or computer systems and/or circuitry, where the computer or processing systems may be disposed locally or remotely of each other and communicate via any suitable communications medium (e.g., LAN, WAN, Intranet, Internet, hardwire, modem connection, wireless, etc.).

[0131] More precisely, the above detailed method is implemented in a system for determining at least one physical and/or chemical characteristic of a keratinous surface of a user, said system comprising an image sensor configured to acquire an image of the targeted keratinous surface, the image data being transmitted to a processor configured to process the image data according to the above-described method, namely by applying the machine learning model and returning a real-valued estimate of the characteristic. Such a system forms a diagnosis unit.

[0132] The system is integrated into a portable camera device. In a first embodiment, the camera device comprises the image sensor and the image data are processed outside the camera device in a distant processing unit. Advantageously, the image data are transmitted wirelessly. In an alternate embodiment, the system is fully integrated into the portable device, meaning that the camera device also comprises the processing unit. This way, the camera device can be used as an autonomous and/or real-time diagnosis unit directly outputting the real-valued property.

[0133] The system may comprise additional components in order to, for example, form a system for making a personalized cosmetic product, in particular a hair product, namely a hair coloring product. To this end, the system comprises a diagnosis unit as described above and a dispensing unit configured to deliver a cosmetic product according to characteristics obtained from the diagnosis unit. For a detailed description of a dispensing unit, one may refer to previously cited document U.S. Pat. No. 9,316,580, the content of which is hereby fully incorporated by reference.

[0134] The system and method described in the present application thus improve the overall automation of the personalization process. For hair treatment, in particular hair dyeing treatment, the automation of the process is vital in order to ease the work of hairdressers and equip them with tools that guarantee precision, robustness and efficiency. Moreover, using such devices we can go beyond human visual perception and offer an objective notation that does not vary depending on the hairdresser and the conditions; in other terms, a standardized notation.

[0135] The foregoing examples are illustrative of certain functionality of embodiments of the invention and are not intended to be limiting. Indeed, other functionality and other possible use cases will be apparent to the skilled artisan upon review of this disclosure.