LUMINANCE CHANGING IMAGE PROCESSING WITH COLOR CONSTANCY

20170352137 · 2017-12-07

    Abstract

    For obtaining good quality luminance dynamic range conversion, we describe an image color processing apparatus (200) arranged to transform an input color (R,G,B) of a pixel of an input image (Im_R2) having a first luminance dynamic range into an output color (Rs, Gs, Bs) of a pixel of an output image (Im_res) having a second luminance dynamic range, which first and second dynamic ranges differ in extent by at least a multiplicative factor 1.5, comprising: a maximum calculation unit (201) arranged to calculate the maximum (M) of at least three components of the input color; a brightness mapper (202) arranged to apply a function (F) to the maximum, yielding an output value (F(M)), whereby the function is predetermined having a constraint that the output value for the highest value of the maximum (M) cannot be higher than 1.0; a scaling parameter calculator (203) arranged to calculate a scaling parameter (a) being equal to the output value (F(M)) divided by the maximum (M); and a multiplier (204) arranged to multiply the three color components of the input color (R,G,B) by the scaling parameter (a), yielding the color components of the output color, wherein the color processing apparatus (200) comprises at least one component multiplier (303) arranged to multiply a component (B) of the input color with a weight (wB) being a real number yielding a scaled component (Bw) prior to input of that component in the maximum calculation unit (201).
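The scaling described in the abstract can be illustrated with a minimal sketch (not the patented implementation itself, and not part of the claims). Here `F` stands for a hypothetical brightness-mapping function chosen by the grader, constrained so that F(1.0) <= 1.0, and `wB` is the blue-component weight of multiplier (303); all values are assumed normalized to [0, 1].

```python
def scale_color(r, g, b, F, wB=1.0):
    """Hue-preserving dynamic range scaling: a = F(M)/M applied to all components."""
    bw = wB * b                   # component multiplier (303): weighted blue Bw
    m = max(r, g, bw)             # maximum calculation unit (201)
    if m <= 0.0:
        return (0.0, 0.0, 0.0)    # black maps to black; avoid division by zero
    a = F(m) / m                  # scaling parameter calculator (203)
    return (a * r, a * g, a * b)  # multiplier (204): same factor for all components
```

For example, with the illustrative choice F(x) = x**0.5 (which satisfies F(1) <= 1), a mid-grey input is brightened while its hue and saturation are preserved, because all three components are scaled by the same factor a.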

    Claims

    1. An image color processing apparatus arranged to transform an input color defined by a red, green and blue color component of a pixel of an input image having a first luminance dynamic range into an output color of a pixel of an output image having a second luminance dynamic range, which first and second dynamic ranges differ in extent by at least a multiplicative factor 1.5, comprising: a unit arranged to apply respective weighted or non-weighted functions (FNLR, FNLG, FNLB) to each of the red, green and blue color components of the input image resulting in modified red, green and blue color components (NR, NG, NB, wR*R, wG*G, Bw), wherein each of the functions may be one of a non-linear function, a linear scaling function multiplying a component (B) of the input color by a weight, or a unity function of an input component or combinations of the input components, and wherein at least one modified component (NR, NG, NB) represents one of the red, green and blue color components of the input image weighted by a non-unity real-numbered value, a maximum calculation unit arranged to calculate the maximum of at least the three modified components; a brightness mapper arranged to apply a function to the maximum, yielding an output value, whereby the function is predetermined having a constraint that the output value for the highest value of the maximum cannot be higher than 1.0; a scaling parameter calculator arranged to calculate a scaling parameter being equal to the output value divided by the maximum; and a multiplier arranged to multiply the red, green and blue color components of the input color by the scaling parameter, yielding the color components of the output color.

    2. An image color processing apparatus as claimed in claim 1 in which three weights are obtained from a data source associated with the input image.

    3. An image color processing apparatus as claimed in claim 1 comprising a luminance calculation unit arranged to calculate from the red, green and blue color components a luminance or luma as a fourth component of the input color, and comprising a luminance multiplier arranged to multiply the luminance or luma with a luminance weight, yielding an output result which is input as a fourth input to the maximum calculation unit.

    4. An image color processing apparatus as claimed in claim 1 in which one of the weights is set to 1.0.

    5. An image color processing apparatus as claimed in claim 3 in which at least one weight for the red, green and blue color components is below 0.5, and the luminance weight is 1.0, at least for one image to be processed of a set of images.

    6. An image color processing apparatus as claimed in claim 1, comprising at least one non-linear function application unit arranged to apply a non-linear function to at least one of the red, green and blue color components, and wherein the maximum calculation unit has as input besides the result of applying the non-linear function to the color component, at least two other color components which contain color information of the two of the red, green and blue components which were not selected for being processed by the at least one non-linear function application unit.

    7. An image color processing apparatus as claimed in claim 1 containing a color analysis unit arranged to analyze the input color, and determine therefrom the weights of at least the red, green and blue color components.

    8. An image color processing apparatus as claimed in claim 1 containing a color analysis unit arranged to analyze the input color, and determine therefrom the functional shape of at least one non-linear function of the functions.

    9. A method of image color processing to transform an input color defined by a red, green and blue color component of a pixel of an input image having a first luminance dynamic range into an output color of a pixel of an output image having a second luminance dynamic range, which first and second dynamic ranges differ in extent by at least a multiplicative factor 1.5, comprising: applying respective weighted or non-weighted functions (FNLR, FNLG, FNLB) to each of the red, green and blue color components of the input image resulting in modified red, green and blue color components (NR, NG, NB, wR*R, wG*G, Bw), wherein each of the functions may be one of a non-linear function, a linear scaling function multiplying a component (B) of the input color by a weight, or the unity function of an input component or combinations of the input components, and wherein at least one modified component (NR, NG, NB) represents one of the red, green and blue color components of the input image weighted by a non-unity real-numbered value, calculating the maximum of at least the three modified components; applying a function to the maximum, yielding an output value, whereby the function is predetermined having a constraint that the output value for the highest value of the maximum cannot be higher than 1.0; calculating a scaling parameter being equal to the output value divided by the maximum; and multiplying the red, green and blue color components of the input color by the scaling parameter, yielding the color components of the output color.

    10. A method of image color processing as claimed in claim 9, in which the maximum is calculated from in addition to three red, green and blue color components also a luminance or luma scaled with a luminance weight.

    11. A method of image color processing as claimed in claim 9, in which the maximum calculation has at least one input which is a non-linear transformation of at least one of the red, green and blue components.

    12. A method of image color processing as claimed in claim 9, in which at least one of the weights is determined based on an analysis of the pixel color.

    13. A computer program product comprising code codifying the steps of the method of claim 9, thereby upon running enabling a processor to implement that method.

    14. An image signal comprising an encoding of colors of a matrix of pixels, and in addition thereto as encoded metadata at least one weight defined to have as a meaning the use as weight for at least one of the at least three components of the input color being input for the maximum calculation in one of the above apparatuses or methods.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0038] These and other aspects of any variant of the method and apparatus according to the invention will be apparent from and elucidated with reference to the implementations and embodiments described hereinafter, and with reference to the accompanying drawings, which drawings serve merely as non-limiting specific illustrations exemplifying the more general concept, and in which dashes are used to indicate that a component is optional, non-dashed components not necessarily being essential. Dashes can also be used for indicating that elements, which are explained to be essential, are hidden in the interior of an object, or for intangible things such as e.g. selections of objects/regions, indications of value levels in charts, etc.

    [0039] In the drawings:

    [0040] FIG. 1 schematically illustrates our method to encode at least two gradings of different luminance dynamic range, based on encoding and transmitting, typically via legacy video communication technology, the required information as a set of images of one of the gradings, and data to be able to reconstruct at a receiving side the functions to be used to color map the first set of images to a second set being the other grading;

    [0041] FIG. 2 schematically illustrates our basic brightness mapping technology for dynamic range conversion, as we published it in WO2014/056679;

    [0042] FIG. 3 schematically illustrates simpler variants of our present invention, in which at least some of the RGB color components are weighted with scale factors typically smaller than or equal to one before being input in the maximum calculation unit, and a luminance input may also be present;

    [0043] FIG. 4 schematically illustrates a more complex embodiment, in which other components are present for delivering additional input to the maximum calculation unit, such as units to non-linearly map the color components (401, . . . ), a unit to calculate an arbitrary linear or non-linear combination of the color components (404), and a color analysis unit to set the weights;

    [0044] FIG. 5 schematically illustrates how a selection of some maximum values leads, via the predetermined mapping function 205 by the grader, to multiplicative factors (a) for ultimately doing the color processing; and

    [0045] FIG. 6 shows the same principle of the color transformation based on the determination of the maximal one of the input color components, but in a non-linear or gamma domain.

    DETAILED DESCRIPTION OF THE DRAWINGS

    [0046] FIG. 3 shows an example of how our present invention can be embodied. Linear red, green and blue components of a pixel of an input image (R,G,B) are multiplied by multipliers (301,302,303) by available weights (wR,wG,wB), resulting in weighted inputs, e.g. weighted blue Bw=wB*B. There may also be a luminance calculation unit (306), which calculates the luminance as a1*R+a2*G+a3*B, with fixed constants a1,a2,a3 depending on the color representation system, e.g. P3, Rec. 709, Rec. 2020, etc. In practical apparatuses and methods there may be color processing before and after our presently described unit, e.g. conversion to a different color basis, but that is not required to understand the present invention, and would only needlessly complicate the elucidation. If a luminance input is present, a luminance multiplier 304 may be comprised, even if it multiplies by 1.0; if it always multiplies by 1.0 that component may be missing (the video signal may still explicitly contain four weights, in which case hardware without a luminance multiplier can still process correctly by ignoring the luminance weight rather than setting the luminance multiplier with it).
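A minimal sketch of this weighting front-end (illustrative only, not the claimed apparatus), assuming Rec. 709 luminance constants for a1, a2, a3; in practice the weights wR, wG, wB, wY would come from metadata or a color analysis unit:

```python
def weighted_inputs(r, g, b, wR, wG, wB, wY=1.0):
    """Produce the (up to four) inputs of maximum calculation unit 201."""
    a1, a2, a3 = 0.2126, 0.7152, 0.0722   # Rec. 709 luminance coefficients
    y = a1 * r + a2 * g + a3 * b          # luminance calculation unit (306)
    # multipliers 301, 302, 303 and luminance multiplier 304:
    return (wR * r, wG * g, wB * b, wY * y)
```

With wY fixed at 1.0 the luminance multiplier is a pass-through, matching the case where that component may be omitted from the hardware.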

    [0047] A maximum calculation unit (201) then calculates which one of the inputs is the highest, which we call M (e.g. the green component, having a value of 180*4, if the word length of the components is e.g. 10 bit). Then a brightness mapper (202) applies a function to M, which function has been previously designed by a grader to make sure that the resultant image has a good look. This image may be e.g. an LDR grading to be rendered on displays of PB around 100 nit, calculated from a master HDR grading. At a receiving side, the master HDR images, and the data of the function, may e.g. be read from a memory product like a BD, or received as a television signal via an antenna, or read over the internet from some remote database, etc. Finally a scaling parameter calculator (203) calculates a=F(M)/M, and a multiplier (204) multiplies the RGB color components with this a, yielding output colors (Rs,Gs,Bs) for pixels in an output image, which may be formatted in an output image signal, e.g. according to a standard for communicating it from a STB to a TV, etc.
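The behavior of the scaling parameter a = F(M)/M, as sketched in FIG. 5, can be illustrated with a simple assumed grader function F(M) = M**0.25 (an HDR-to-LDR-like brightening curve; any F with F(1) <= 1 qualifies, and the actual function is designed by the grader):

```python
def scale_factor(m, F=lambda x: x ** 0.25):
    """Scale factor a = F(M)/M of units 202 and 203, for maximum M in (0, 1]."""
    return F(m) / m if m > 0 else 1.0

# Dark pixels receive a larger multiplicative boost than bright ones,
# and the brightest possible input (M = 1.0) is left unchanged:
for m in (0.0625, 0.25, 1.0):
    print(m, scale_factor(m))
```

This illustrates why the constraint F(M) <= 1.0 for the highest maximum matters: it guarantees the output stays within the normalized range at the top of the gamut.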

    [0048] FIG. 4 shows what is possible in more complex embodiments. Instead of merely reading weights, which e.g. come in synchronized with starts of shots of images of the movie, i.e. a little before the first image of the shot has to be processed, a color analysis unit (410) can calculate weights, whether they were already present and have to be overwritten at least for some situations, or have to be calculated on the fly (in the case that no weights are communicated, but one or more algorithms to derive them). In principle any analysis of the color situation can be done, typically simply looking at the color of the pixel itself, but also other colors of the image may be evaluated, e.g. of surrounding pixels to estimate if noise would be conspicuous, and also depending on what is required, such as PB of the required grading, viewing surround conditions, acceptable quality in view of price of the content, etc.

    [0049] Non-linear function application units (401, 402, 403) may be present to provide non-linear color components (NR, NG, NB) as input to the maximum calculation unit. This is advantageous e.g. if one wants to design a mapping function 205 which is differently sampled, e.g. on a logarithmic axis system, etc. A non-linear transformation unit may also be present at the output of the maximum calculation unit (i.e. between units 201 and 202), i.e. non-linearly transforming the maximum M, whether it was selected from linear and/or non-linear color components as input. Non-linear functions can e.g. also be realized by making the weights functions of the color component instead of one or more fixed real numbers, e.g. wB=B−cB.
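As an illustrative (not prescribed) choice of such a pre-function, a log-like curve mapping [0, 1] onto [0, 1] would make the grader's function 205 effectively sampled on a near-logarithmic axis; the constant `k` is a hypothetical shape parameter:

```python
import math

def nonlinear_components(r, g, b, k=100.0):
    """Units 401-403: apply an assumed log-like non-linearity per component."""
    f = lambda x: math.log1p(k * x) / math.log1p(k)   # maps 0 -> 0 and 1 -> 1
    return (f(r), f(g), f(b))                         # NR, NG, NB
```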

    [0050] A color component combination unit (404) may also be present. With this e.g. some other brightness estimate SBC can be calculated than the luminance, e.g. as b1*R+b2*G+b3*B. It may also combine linearly or non-linearly the non-linear components NR,NG,NB, or in fact calculate whatever non-linear function yields a single real-valued parameter over the cube of possible input colors (which function may typically be embodied as one or more LUTs, which may have been optimized e.g. for particular classes of content, like which camera captured the content and potentially under which conditions, e.g. night versus day, what type the content is, e.g. nature movie versus cartoon, or graphics or content containing some graphics, like maybe a tutorial or the news, etc.).
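A sketch of such a combination unit, with hypothetical coefficients b1, b2, b3 (in practice these would be content-dependent or supplied as metadata):

```python
def brightness_correlate(r, g, b, b1=0.3, b2=0.5, b3=0.2):
    """Unit 404: an alternative brightness estimate SBC = b1*R + b2*G + b3*B."""
    return b1 * r + b2 * g + b3 * b
```

The result would be offered as an additional input to the maximum calculation unit, alongside the (weighted) color components.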

    [0051] Finally, especially if the technology is embodied in an encoding side apparatus, as with all such embodiments, a metadata encoder 450 will collect all the parameters, such as all the weights, parameters defining the shapes of the non-linear functions, or the calculations of parameters, or the data of LUTs, or the algorithms classifying particular colorimetric properties of the to be processed image(s), etc., and after formatting this in a pre-agreed format, send this to a communication technology 451, e.g. a server connected to the internet for later supply to end customers, or a direct link to a customer, etc. The skilled person can understand how the present embodiments can be incorporated in various image-related technologies, like e.g. video supply systems, image or video processing software, image analysis or re-processing systems, etc.

    [0052] FIG. 6 shows how the same principle can be applied in non-linear RGB representation, typically classical gamma R′G′B′ versions, being approximately a square root of the linear light RGB colors, as e.g. prescribed by the opto-electronic transfer function (which defines the mapping between the linear color components, e.g. R, and the luma codes R′, and vice versa via the EOTF), e.g. according to Rec. 709 (note that apart from color non-linearities the color gamut shape stays the same for the chosen red, green and blue primaries).

    [0053] In this mere elucidating example—we have shown a HDR-to-LDR color transformation example, but the skilled person can understand one can similarly design a LDR-to-HDR apparatus, e.g. in a receiver which gets LDR images and needs to derive HDR versions thereof for say a 5000 or 1000 nit display—we have an HDR input signal. We assumed that it was defined with a luma Y″ defined by the new highly non-linear EOTF suitable for HDR encoding as in SMPTE 2084 (so-called PQ curve), but of course this is just an option.

    [0054] Matrix calculator 601 converts this Y″CbCr representation of the pixel color(s) to a highly non-linear (almost logarithmic) R″, G″, B″ representation. Non-linear function calculation unit 602 applies the fixed non-linear function to transform those components to the classical luma ones R′G′B′, i.e. typically defined according to e.g. the Rec. 709 function:


    R′=4.5*R if R<0.018, or R′=1.099*power(R;0.45)−0.099 if R≥0.018

    [0055] And the same equations for G and B, when those are defined starting from linear RGB components, but now one will start from the PQ components R″G″B″, which will typically be done by calculating LUTs once a priori.
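This per-component transfer function (with the standard Rec. 709 constants: threshold 0.018 and offset 0.099) can be written directly; as the paragraph notes, in a real decoder it would typically be precomputed once as a LUT rather than evaluated per pixel:

```python
def rec709_oetf(x):
    """Rec. 709 opto-electronic transfer function, per normalized component."""
    return 4.5 * x if x < 0.018 else 1.099 * x ** 0.45 - 0.099
```

The same function applies to G and B; here it would be fed with the PQ-derived components rather than linear light ones.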

    [0056] In this circuit we have added a luminance calculation unit 603, because if one calculates luminance in the non-linear space, there is a non-constant-luminance issue to a certain degree. What this unit does is calculate via the linear domain, i.e.:

    [0057] Y′=power([CR*R′^gam+CG*G′^gam+CB*B′^gam]; 1/gam), in which gam equals e.g. 2.0, ^ indicates the power operation, and CR, CG and CB are the known component weights for luminance calculations, which can uniquely be calculated colorimetrically if one knows the chromaticities of the RGB primaries and the white point. So in this manner one gets a realistic value of the pixel luma corresponding to its actual luminance.
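A direct sketch of this luma calculation of unit 603, with gam = 2.0 as in the paragraph and, as an assumed example, the Rec. 709 luminance weights for CR, CG, CB:

```python
def gamma_domain_luma(rp, gp, bp, gam=2.0):
    """Unit 603: luma computed via the (approximately) linear domain."""
    CR, CG, CB = 0.2126, 0.7152, 0.0722   # assumed Rec. 709 weights
    # square the gamma-domain components (approx. linearization), take the
    # weighted sum (a luminance), then return to the gamma domain:
    return (CR * rp ** gam + CG * gp ** gam + CB * bp ** gam) ** (1.0 / gam)
```

Because the weights sum to 1, achromatic inputs (R′ = G′ = B′) map to Y′ equal to that common value, as expected of a luma.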

    [0058] Maximum calculation unit 604 is again any of the weighted component maximum calculation embodiments our invention allows, and scale factor calculation unit (202/203) comprises the transformation of the input brightness correlate V′ from the maximum to a scale factor for the multiplicative processing, i.e. comprises what units 202 and 203 do. Finally multipliers 605, 606, and 607 realize via scale factor a the color transformation to the output color (Rs,Gs,Bs), which is in this embodiment matrixed again to Y′CbCr by color matrixer 608, but this is now in the gamma domain, i.e. Y′ code defined according to typically the Rec. 709 EOTF.
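The gamma-domain path of FIG. 6 can be summarized end-to-end in a sketch (illustrative assumptions: unit weights in the maximum, and a simple mapping F with F(1) <= 1 standing in for the grader-designed function):

```python
def process_pixel(rp, gp, bp, yp, F=lambda v: v ** 0.5):
    """FIG. 6 core: weighted maximum (604), scale factor (202/203), multipliers 605-607."""
    v = max(rp, gp, bp, yp)          # brightness correlate V' (unit weights assumed)
    a = F(v) / v if v > 0 else 1.0   # scale factor calculation
    return (a * rp, a * gp, a * bp)  # output color (Rs, Gs, Bs)
```

The output triple would then be matrixed back to Y′CbCr by color matrixer 608.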

    [0059] The algorithmic components disclosed in this text may (entirely or in part) be realized in practice as hardware (e.g. parts of an application specific IC) or as software running on a special digital signal processor, or a generic processor, etc. They may be semi-automatic in the sense that at least some user input may be/have been (e.g. in factory, or consumer input, or other human input) present.

    [0060] It should be understandable to the skilled person from our presentation which components may be optional improvements and can be realized in combination with other components, and how (optional) steps of methods correspond to respective means of apparatuses, and vice versa. The fact that some components are disclosed in the invention in a certain relationship (e.g. in a single figure in a certain configuration) doesn't mean that other configurations are not possible as embodiments under the same inventive thinking as disclosed for patenting herein. Also, the fact that for pragmatic reasons only a limited spectrum of examples has been described, doesn't mean that other variants cannot fall under the scope of the claims. In fact, the components of the invention can be embodied in different variants along any use chain, e.g. all variants of a creation side like an encoder may be similar to or correspond to corresponding apparatuses at a consumption side of a decomposed system, e.g. a decoder, and vice versa. Several components of the embodiments may be encoded as specific signal data in a signal for transmission, or further use such as coordination, in any transmission technology between encoder and decoder, etc. The word "apparatus" in this application is used in its broadest sense, namely a group of means allowing the realization of a particular objective, and can hence e.g. be (a small part of) an IC, or a dedicated appliance (such as an appliance with a display), or part of a networked system, etc. "Arrangement" or "system" is also intended to be used in the broadest sense, so it may comprise inter alia a single physical, purchasable apparatus, a part of an apparatus, a collection of (parts of) cooperating apparatuses, etc.

    [0061] The computer program product denotation should be understood to encompass any physical realization of a collection of commands enabling a generic or special purpose processor, after a series of loading steps (which may include intermediate conversion steps, such as translation to an intermediate language, and a final processor language) to enter the commands into the processor, to execute any of the characteristic functions of an invention. In particular, the computer program product may be realized as data on a carrier such as e.g. a disk or tape, data present in a memory, data traveling via a network connection wired or wireless-, or program code on paper. Apart from program code, characteristic data required for the program may also be embodied as a computer program product. Such data may be (partially) supplied in any way.

    [0062] The invention or any data usable according to any philosophy of the present embodiments like video data, may also be embodied as signals on data carriers, which may be removable memories like optical disks, flash memories, removable hard disks, portable devices writeable via wireless means, etc.

    [0063] Some of the steps required for the operation of any presented method may be already present in the functionality of the processor or any apparatus embodiments of the invention instead of described in the computer program product or any unit, apparatus or method described herein (with specifics of the invention embodiments), such as data input and output steps, well-known typically incorporated processing steps such as standard display driving, etc. It should be noted that the above-mentioned embodiments illustrate rather than limit the invention. Where the skilled person can easily realize a mapping of the presented examples to other regions of the claims, we have for conciseness not mentioned all these options in-depth. Apart from combinations of elements of the invention as combined in the claims, other combinations of the elements are possible. Any combination of elements can be realized in a single dedicated element.

    [0064] Any reference sign between parentheses in the claim is not intended for limiting the claim, nor is any particular symbol in the drawings. The word “comprising” does not exclude the presence of elements or aspects not listed in a claim. The word “a” or “an” preceding an element does not exclude the presence of a plurality of such elements.