System and method for color matching
10571336 · 2020-02-25
Inventors
- Atima Lui (Brooklyn, NY, US)
- Nyalia Lui (Indianapolis, IN, US)
- Mahmoud Afifi (Toronto, CA)
- Ariadne Bazigos (Brooklyn, NY, US)
CPC classification
H04N1/6005
ELECTRICITY
G01J3/465
PHYSICS
G01J3/526
PHYSICS
G01J3/462
PHYSICS
H04N1/6086
ELECTRICITY
International classification
Abstract
A system for analyzing and processing user input and providing a result based on a predetermined set of color identifiers, the system comprising a first user input, wherein the first user input comprises one or more digital images, a second user input, wherein the second user input comprises responses to queries, a white balancing method for removing color casts from the first user input to create a final corrected image of the first user input, a first database for storing a predetermined set of color identifiers, a second database for storing product profiles, and a processor for analyzing the final corrected image of the first user input and the second user input collectively, comparing the final corrected image of the first user input and the second user input collectively to the predetermined set of color identifiers, and providing a color output.
Claims
1. A system for analyzing and processing user input and providing a result based on a predetermined set of color identifiers, the system comprising: a first user input, wherein the first user input comprises one or more digital images; a second user input, wherein the second user input comprises responses to queries; a white balancing method for removing color casts from the first user input to create a final corrected image of the first user input; a first database for storing a predetermined set of color identifiers; a second database for storing product profiles; and a processor for analyzing the final corrected image of the first user input and the second user input collectively, comparing the final corrected image of the first user input and the second user input collectively to the predetermined set of color identifiers, providing a color output, and displaying product suggestions based on the color output.
2. The system of claim 1, wherein the white balancing method comprises: receiving the first user input, wherein the first user input is converted to the YCbCr color space, said YCbCr color space comprising a Y channel, a Cb channel, and a Cr channel; creating an affine color correction matrix by extracting a color feature vector from the Cb and Cr channels of the first user input, locating similar color feature vectors from a set of training samples, and applying a local regression based on the training samples with similar color feature vectors; applying the affine color correction matrix to the Cb and Cr channels of the first user input to remove color cast from the first user input to produce an initial corrected image of the first user input; applying contrast stretching to the Y channel of the initial corrected image of the first user input; and blending the Y channel of the first user input and the result of the contrast stretching of the initial corrected image of the first user input to produce a final corrected image of the first user input.
3. The system of claim 2, wherein the color output comprises one or more color identifiers.
4. The system of claim 3, wherein the first user input comprises a digital image of a user and the processor further executes a skin detection algorithm for locating the areas on the final corrected image of the first user input that contain a user's skin.
5. A system for analyzing and processing user input and providing a result based on a predetermined set of color identifiers, comprising: a user input, said user input consisting essentially of one or more digital images; a white balancing method for removing color casts from the user input; a first database for storing a predetermined set of color identifiers; a second database for storing product profiles; and a processor for analyzing the user input, comparing the user input to the predetermined set of color identifiers, providing a color output, and displaying product suggestions based on the color output.
6. The system of claim 5, wherein the white balancing method comprises: receiving the user input, wherein the user input is converted to the YCbCr color space, said YCbCr color space comprising a Y channel, a Cb channel, and a Cr channel; creating an affine color correction matrix by extracting a color feature vector from the Cb and Cr channels of the user input, locating similar color feature vectors from a set of training samples, and applying a local regression based on the training samples with similar color feature vectors; applying the affine color correction matrix to the Cb and Cr channels of the user input to remove color cast from the user input to produce an initial corrected image of the user input; applying contrast stretching to the Y channel of the initial corrected image of the user input; and blending the Y channel of the user input and the result of the contrast stretching of the initial corrected image of the user input to produce a final corrected image of the user input.
7. The system of claim 6, wherein the color output comprises one or more color identifiers.
8. The system of claim 7, wherein the user input comprises a digital image of a user and the processor further executes a skin detection algorithm for locating the areas in the final corrected image of the user input that contain a user's skin.
9. A method of analyzing and processing user input and providing a result based on a predetermined set of color identifiers, the method comprising: receiving a first user input, wherein the first user input comprises one or more digital images of a user; using a white balancing method, removing color casts from the first user input to produce a final corrected image of the first user input; using a skin detection algorithm, locating the areas on the final corrected image of the first user input that contain a user's skin; comparing the final corrected image of the first user input to a predetermined set of color identifiers stored in a database; producing a color output; and displaying product suggestions based on the color output.
10. The method of claim 9, wherein the white balancing method comprises: selecting the user input, wherein the user input is converted to the YCbCr color space, said YCbCr color space comprising a Y channel, a Cb channel, and a Cr channel; creating an affine color correction matrix by extracting a color feature vector from the Cb and Cr channels of the user input, locating similar color feature vectors from a set of training samples, and applying a local regression based on the training samples with similar color feature vectors; applying the affine color correction matrix to the Cb and Cr channels of the user input to remove color cast from the user input to produce an initial corrected image of the user input; applying contrast stretching to the Y channel of the initial corrected image of the user input; and blending the Y channel of the user input and the result of the contrast stretching of the initial corrected image of the user input to produce a final corrected image of the user input.
11. The method of claim 10, wherein the color output comprises one or more color identifiers.
12. The method of claim 11, further comprising receiving a second user input, wherein the second user input comprises responses to queries, and comparing the final corrected image of the first user input and the second user input collectively to a predetermined set of color identifiers stored in a database.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
(2) Embodiments of the present invention will be described by way of example only, and not limitation, with reference to the accompanying drawings in which:
DETAILED DESCRIPTION
(17) The present invention discloses a system and method of color matching, the system and method broadly comprising color correction of an image and identification of specific colors in said color corrected image. In the figures, similar reference characters denote similar elements throughout.
(19) In an embodiment of the present invention, the user input 101 may comprise a first user input 101a and a second user input 101b, the first user input 101a comprising one or more digital images and the second user input 101b comprising user responses to queries. The one or more digital images of the first user input 101a are modified to account for the effects of illumination via a white balancing method. The white balancing method includes the stages of illuminant estimation and color correction.
(20) By way of example only, the white balancing method is discussed in relation to an image of a user's face; however, as with the invention as a whole, the white balancing method may be applied to any image in the sRGB color space. A close-up image of a user's face results in skin pixels occupying a considerable portion of the image, and these pixels can be used as a cue to drive the color correction matrix. Skin tone results from a two-layer structure: the top layer is the epidermis, which contains melanin, and the inner layer is the dermis, which contains hemoglobin. Different skin tones are the result of the varying densities of the pigments in these two layers. Based on these properties, skin colors can be clustered in the color space and used as guidance to estimate the color correction matrix.
(21) The white balancing method is directed to sRGB images where a majority of the image contains a user's skin tone. A processor 102 converts the sRGB face image into the YCbCr color space where Y represents the luma component and Cb and Cr represent the blue-difference and red-difference chroma components respectively. An affine color correction matrix is created by extracting a color feature vector from the CbCr channel of the image, locating similar color feature vectors from a set of training samples and applying a local regression based on the training samples with similar color feature vectors. The affine color correction matrix is applied to the CbCr channel of the image to remove color casts from the image to produce a color corrected image. Contrast stretching is applied to the Y channel of the corrected image, and the Y channel of the original image and the result of the contrast stretching of the corrected image are blended to produce a final corrected image.
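By way of illustration only, the staged pipeline of the preceding paragraph can be sketched in code. The BT.601 full-range conversion matrix, the function names, and the 0.5 blending weight are assumptions made for this sketch; the disclosure does not specify a particular YCbCr variant.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 sRGB float image (values in 0..1) to YCbCr.

    Uses the BT.601 full-range coefficients as a stand-in; the patent
    does not name a specific conversion.
    """
    m = np.array([[ 0.299,     0.587,     0.114   ],
                  [-0.168736, -0.331264,  0.5     ],
                  [ 0.5,      -0.418688, -0.081312]])
    ycbcr = rgb @ m.T
    ycbcr[..., 1:] += 0.5          # center the chroma channels at 0.5
    return ycbcr

def white_balance(rgb, correct_chroma, stretch_luma, beta=0.5):
    """Staged pipeline: correct CbCr, contrast-stretch Y, blend lumas.

    correct_chroma and stretch_luma are callables standing in for the
    affine CbCr correction and the Y-channel contrast stretch.
    """
    ycc = rgb_to_ycbcr(rgb)
    ycc[..., 1:] = correct_chroma(ycc[..., 1:])   # remove color cast
    stretched = stretch_luma(ycc[..., 0])          # stretched luma
    ycc[..., 0] = (1 - beta) * ycc[..., 0] + beta * stretched
    return ycc
```

With identity corrections the pipeline leaves the YCbCr image unchanged, which is a convenient sanity check for the staging.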
(22) More specifically, the colors of the user input I_input can be corrected by applying a 3×3 affine transformation matrix T to the Cb and Cr channels of the image, assuming that the luma of the user input I_input and the corrected image are the same. During the training stage, T can be calculated by minimizing the following equation (Equation 1):
(23) T = argmin_T ‖ [I_input(Cb), I_input(Cr), q] T − [I_1(Cb), I_1(Cr), q] ‖_F
where q is an N×1 homogeneous coordinate vector (a column of ones) and I_1 is a ground-truth corrected image. T must contain [0, 0, 1] as its last column, meaning six parameters are estimated to represent the scale, rotation, and translation of the Cb and Cr components of the input image to correct its colors.
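A minimal sketch of this training-stage fit, assuming pixels are stacked as row vectors [Cb, Cr, 1] so that the homogeneous constraint falls on the last column of T. The function and variable names are illustrative, not taken from the disclosure.

```python
import numpy as np

def fit_affine_correction(cbcr_in, cbcr_gt):
    """Least-squares 3x3 affine matrix mapping input CbCr to ground truth.

    cbcr_in, cbcr_gt: N x 2 arrays of (Cb, Cr) pixel pairs from the
    input image and the ground-truth corrected image.
    """
    n = cbcr_in.shape[0]
    q = np.ones((n, 1))                       # homogeneous coordinate vector
    a = np.hstack([cbcr_in, q])               # N x 3 input matrix
    b = np.hstack([cbcr_gt, q])               # N x 3 target matrix
    t, *_ = np.linalg.lstsq(a, b, rcond=None)
    t[:, 2] = [0.0, 0.0, 1.0]                 # enforce the affine constraint
    return t
```

Because the last column is pinned to [0, 0, 1], only the six remaining entries (scale, rotation, and translation of Cb and Cr) are free, matching the parameter count stated above.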
(24) Let D represent the distribution of the Cb and Cr components of a face image. D can be modeled by the following equation (Equation 2):
D ∼ N(μ, Σ)
where μ ∈ IR^2 represents the mean of D and Σ is the 2×2 covariance matrix of the Cb and Cr components of D. A compact representation of D can be extracted by using only the distribution parameters, namely μ and Σ. Thus, a color feature vector v of any given image is created, such that v = [μ, vec(Σ)], where vec(·) denotes the vectorization of a matrix.
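The six-dimensional feature of Equation 2 (two mean entries plus the four entries of the vectorized covariance) can be sketched directly; the function name is illustrative.

```python
import numpy as np

def color_feature(cbcr):
    """Compact descriptor of the CbCr distribution: v = [mu, vec(Sigma)].

    cbcr: N x 2 array of (Cb, Cr) values -> 6-vector feature.
    """
    mu = cbcr.mean(axis=0)                       # 2-vector mean of D
    sigma = np.cov(cbcr, rowvar=False)           # 2 x 2 covariance of D
    return np.concatenate([mu, sigma.ravel()])   # [mu, vec(Sigma)] in IR^6
```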
(25) Assuming there are L training points, each one is represented by a color feature vector v_t(i) ∈ IR^6, and T_(i) denotes the associated parameters of the one-to-one affine transformation matrix, obtained by Equation 1, that can effectively correct the color casts of the training image represented by v_t(i). The n color feature vectors most similar to v_input are collected in V_t, an n×6 matrix; the L2 distance is adopted as the similarity metric. The parameters of the color correction matrix for I_input can be estimated by the following equation (Equation 3):
T̂ = v_input W
where W is a 6×6 weighting matrix that can be computed in closed form:
W = (V_t^T V_t)^(−1) V_t^T T_t
where T_t is the n×6 matrix of vectorized parameters of the color correction matrices associated with the n color feature vectors in the training data.
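The nearest-neighbor lookup and the closed-form weighting can be sketched as follows, on synthetic data. The estimate is obtained by applying W to the input feature vector v_input, consistent with the dimensions given above (1×6 times 6×6 yields the six parameters); all names are illustrative.

```python
import numpy as np

def estimate_correction(v_input, train_feats, train_params, n=5):
    """Estimate the 6 affine parameters for v_input (Equation 3).

    train_feats:  L x 6 training color feature vectors.
    train_params: L x 6 vectorized correction parameters (Equation 1).
    """
    d = np.linalg.norm(train_feats - v_input, axis=1)   # L2 distances
    idx = np.argsort(d)[:n]                             # n nearest samples
    v_t, t_t = train_feats[idx], train_params[idx]      # n x 6 each
    w = np.linalg.pinv(v_t.T @ v_t) @ v_t.T @ t_t       # 6 x 6 weights, closed form
    return v_input @ w                                  # estimated parameters
```

When the training parameters are an exact linear function of the features, the local regression recovers that function on the neighborhood, which is the behavior the test below checks.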
(26) The initial corrected image I_wb* and the final white-balanced image I_wb are generated by the following equations (Equation 4 and Equation 5, respectively):
[I_wb*(Cb), I_wb*(Cr), q] = [I_input(Cb), I_input(Cr), q] T*
I_wb(i) = (1 − β) I_wb*(i) + β (I_wb*(i) − min(I_wb*(i), τ_1)) / (max(I_wb*(i), τ_2) − min(I_wb*(i), τ_1))
where T* is the affine transformation matrix reconstructed from T̂; min(·) and max(·) compute the minimum and maximum values after excluding the lower and higher values based on the trimming thresholds τ_1 and τ_2, respectively; i ∈ {Y, Cb, Cr}; and β is a hyperparameter for blending the result of the contrast stretching with the initial corrected image I_wb*, whose luma component is the same as that of I_input.
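A sketch of the two steps, assuming the same row-vector convention as the fit. The quantile values 0.05/0.95 standing in for the trimming thresholds, the default β of 0.5, and the clipping of the stretched channel are all assumptions for this sketch, not values given in the disclosure.

```python
import numpy as np

def apply_correction(cbcr, t_star):
    """Equation 4: [Cb', Cr', q] = [Cb, Cr, q] @ T*  (N x 2 chroma)."""
    q = np.ones((cbcr.shape[0], 1))            # homogeneous coordinate
    return (np.hstack([cbcr, q]) @ t_star)[:, :2]

def stretch_blend(channel, beta=0.5, tau1=0.05, tau2=0.95):
    """Equation 5: blend a channel with its trimmed contrast stretch."""
    lo = np.quantile(channel, tau1)            # trimmed minimum
    hi = np.quantile(channel, tau2)            # trimmed maximum
    stretched = (channel - lo) / max(hi - lo, 1e-8)
    return (1 - beta) * channel + beta * np.clip(stretched, 0.0, 1.0)
```

With an identity T* and β = 0 both functions reduce to pass-throughs, which makes the boundary behavior easy to verify.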
(27) In order to predict the skin color of the given face image, the white-balanced image I_wb is first produced by the white balancing method, and the face region of the image is then extracted using face detection technology. In an image, a number of factors may affect the brightness of the skin pixels, such as shadows, specularities, or occlusions. Therefore, a preferred embodiment of the present invention relies on a confident set of skin pixels having skin probabilities greater than the 0.85 quantile of the distribution of the skin probabilities of the face region. The skin probability of each pixel is determined by a method disclosed by Dr. Ciarán Ó Conaire. Ó Conaire's method discloses a non-parametric histogram-based model trained using manually annotated skin and non-skin pixels. An RGB histogram is created for skin pixels and another for non-skin pixels. For a particular pixel color, the log likelihood of it being skin is log(H(R,G,B)/J(R,G,B)), where H is the skin histogram, J is the non-skin histogram, and {R, G, B} represent the red, green, and blue channels of the sRGB image. For a new image, the log likelihood of each pixel is calculated and the result is compared to threshold values to decide whether the pixel is skin or non-skin. While face detection technology is used here as an example, other technologies may be appropriately used where the image comprises another body part, such as the user's hand, foot, etc.
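The histogram log-likelihood test can be sketched as below. The trained histograms, the two-bins-per-channel resolution, and the 0.0 decision threshold are toy placeholders; the actual histograms and thresholds of the referenced method are not given in this disclosure.

```python
import numpy as np

def skin_log_likelihood(pixel, h, j, bins=2):
    """log(H(R,G,B) / J(R,G,B)) for one 8-bit RGB pixel.

    h, j: bins x bins x bins histograms of skin and non-skin pixels.
    """
    idx = tuple(c * bins // 256 for c in pixel)     # histogram bin index
    return np.log((h[idx] + 1e-8) / (j[idx] + 1e-8))

def is_skin(pixel, h, j, threshold=0.0):
    """Classify a pixel as skin when its log likelihood exceeds the threshold."""
    return skin_log_likelihood(pixel, h, j) > threshold
```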
(28) The selected skin pixels may have different levels of brightness due to shadows in the image. Dark pixels could be discarded in order to remove shadows; however, this may also remove pixels of a dark skin tone. As a compromise, the skin pixels are first clustered into k clusters using the k-means algorithm and sorted based on the brightness level of each cluster. Given the k clusters representing the skin pixels and w, which denotes the vector of normalized weights of the clusters (i.e., the normalized number of pixels associated with each cluster), the darkest ⌊k/2⌋ clusters are discarded. Then, the initial skin tone s ∈ IR^3 is given by the following equation (Equation 6):
(29) s = Σ_j w_j C_j, for j = 1, …, k − ⌊k/2⌋
where w is the vector of normalized weights of the first k − ⌊k/2⌋ clusters and C_j is the color triplet of the j-th cluster. The global illumination of the face region is included in the calculations to compromise between considering dark skin tones and discarding the shadow pixels. Thus, the final skin tone is given by the following equation (Equation 7):
(30) s_final = g(Ỹ, s_Cb, s_Cr)
(31) where g is a transformation function that maps YCbCr colors to the corresponding sRGB colors, Ỹ is the median value of the luma channel of the face region (representing the global illumination of the face region), and s_i is a channel of the YCbCr representation of the initial estimate of the skin tone, such that i ∈ {Y, Cb, Cr}.
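The clustering step of Equation 6 can be sketched as follows, on YCbCr pixel triplets with Y in the first column. A tiny hand-rolled k-means with a deterministic initialization stands in for a library implementation, and the mapping g of Equation 7 is left abstract; both choices are assumptions for the sketch.

```python
import numpy as np

def kmeans(x, k, iters=20):
    """Toy k-means; deterministic spread initialization keeps it reproducible."""
    centers = x[np.linspace(0, len(x) - 1, k).astype(int)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

def initial_skin_tone(pixels, k=4):
    """Equation 6: weighted mean of the brightest k - floor(k/2) clusters.

    pixels: N x 3 YCbCr triplets with the luma Y in column 0.
    """
    centers, labels = kmeans(pixels, k)
    weights = np.bincount(labels, minlength=k) / len(pixels)
    keep = np.argsort(centers[:, 0])[k // 2:]       # drop the darkest clusters
    w = weights[keep] / weights[keep].sum()         # re-normalized weights
    return (w[:, None] * centers[keep]).sum(0)      # weighted color triplet
```

On data with two bright and two dark clusters, the estimate lands between the bright cluster centers and ignores the shadow clusters entirely.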
(32) In an embodiment of the present invention, the first user input 101a and the second user input 101b are processed collectively to produce a color output 105. In this embodiment, the first user input 101a is modified for illumination conditions and processed to produce a color output, said color output being a user's current tone. The second user input 101b is analyzed to provide a range of color identifiers 108 representative of the user's current tone, along with additional tones accounting for seasonal lightening and tanning. The range of color identifiers 108 comprises between one and four values in the same row.
(34) In another embodiment of the present invention, the user input comprises only a first user input 101a, in the form of one or more digital images. The first user input 101a is modified for illumination conditions via the white balancing method and then processed to produce a color output 105. The color output 105 is compared to available products stored in a second database 106 to display suggested products to the user based on the produced color output 105.
(35) In one embodiment of the present invention, the system and method use the user's color output 105 to select and display products with color identifiers 108 that are complementary to the user's skin tone. Complementary color identifiers 108 are assigned to a product's profile in the second database 106 at the time the product profile is input. Such assignment may be manual or may be automatic based on the system's ability to learn from past assignments.
(36) In another embodiment of the present invention, the system and method are used on a commercial scale to match color swatches to predetermined color identifiers 108. In this embodiment, the user input comprises a plurality of color swatches simultaneously uploaded for processing. Each color swatch is compared to the predetermined color identifiers 108 to produce a color output 105.
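A minimal sketch of this swatch-matching mode: each uploaded swatch color is assigned the nearest predetermined identifier by Euclidean distance in RGB. The identifier names and stored colors below are hypothetical stand-ins for the first database; a production system might instead compare in a perceptual space such as CIELAB.

```python
# Hypothetical identifier -> sRGB triplet table standing in for the
# predetermined set of color identifiers stored in the first database.
IDENTIFIERS = {
    "N10": (62, 40, 32),
    "N20": (120, 84, 62),
    "N30": (198, 158, 126),
}

def match_swatch(rgb):
    """Return the identifier whose stored color is nearest the swatch."""
    return min(IDENTIFIERS,
               key=lambda k: sum((a - b) ** 2
                                 for a, b in zip(IDENTIFIERS[k], rgb)))
```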