Method and device for processing an image signal of an image sensor for a vehicle
11252388 · 2022-02-15
CPC classification: G06T1/0014 (PHYSICS); H04N1/646 (ELECTRICITY)
Abstract
A method includes obtaining, from an image sensor of a vehicle, an image signal that represents a vector of a plurality of input measured color values; determining weighting values as a function of the input measured values using a determination rule; ascertaining model matrices from a plurality of stored model matrices as a function of the weighting values, each of the model matrices generated for possible input measured values using a training rule; generating a color reconstruction matrix using the ascertained model matrices and at least one of the determined weighting values according to a generating rule; and applying the generated color reconstruction matrix to the measured color value vector in order to produce an output color vector that represents a processed image signal.
Claims
1. A method for use with a vehicle, the method comprising: obtaining, from an image sensor of the vehicle, an image signal representing a measured color value vector having a plurality of input measured values; determining, via a determination device, in accordance with a determination rule, weighting values based on the input measured values; ascertaining, via an ascertaining device, based on the weighting values, model matrices from a plurality of stored model matrices that each is generated for possible input measured values in accordance with a training rule; generating, in accordance with a generating rule, a color reconstruction matrix based on the ascertained model matrices and the corresponding determined weighting values; applying the generated color reconstruction matrix to the measured color value vector, so as to provide an output color vector that represents a processed color image signal; wherein an aligning device and a sorting device are configured to sort ascertained model matrices and to align sorted, ascertained model matrices to corresponding ones of determined weighting values, so that the aligning device and the sorting device are configured to compare the ascertained model matrices and the determined weighting values to each other and to produce an association therebetween, and wherein the determination device is configured to output the weighting values and a selection signal for the ascertaining of the model matrices by selection, and wherein for a given color measurement vector, at least a majority of the model matrices need not be selected, so that an amount of data downstream from the ascertaining device is reduced.
2. The method of claim 1, wherein the measured color value vector relates to a first color space wherein the weighting values represent a nonlinear subdivision of the first color space and wherein the output color vector relates to a second color space different from the first color space.
3. The method of claim 1, wherein the determining of weighting values includes determining only one weighting value and/or only one model matrix for each input measured value.
4. The method of claim 1, wherein the generating of the color reconstruction matrix is performed by assigning the ascertained model matrices to corresponding, determined weighting values in view of the input measured values.
5. The method of claim 1, further comprising: generating, in accordance with the training rule, the stored model matrices for possible input measured values.
6. The method of claim 5, wherein the generated stored model matrices correspond to different respective ones of the possible input measured values.
7. The method of claim 1, wherein the ascertained model matrices are ascertained based on a correspondence of the respective matrix to a respective one of the weighting values that is not equal to zero.
8. The method of claim 1, wherein the image sensor includes optics, microlenses, a color filter array (CFA), light sensors, and an analog-to-digital converter (ADC).
9. The method of claim 8, wherein the optics, the microlenses, and the color filter array each functions as a spectral filter for light L, wherein the color filter array corresponds to a color pattern mounted on or in front of the light sensors, wherein the light sensors function as integrators with respect to a wavelength λ of the light L, and wherein the analog-to-digital converter functions as a quantizer.
10. A device for use with a vehicle, comprising: a processor configured to perform the following: obtaining, from an image sensor of the vehicle, an image signal representing a measured color value vector having a plurality of input measured values; determining, via a determination device, in accordance with a determination rule, weighting values based on the input measured values; ascertaining, via an ascertaining device, based on the weighting values, model matrices from a plurality of stored model matrices that each is generated for possible input measured values in accordance with a training rule; generating, in accordance with a generating rule, a color reconstruction matrix based on the ascertained model matrices and the corresponding determined weighting values; and applying the generated color reconstruction matrix to the measured color value vector, so as to provide an output color vector that represents a processed color image signal; wherein an aligning device and a sorting device are configured to sort ascertained model matrices and to align sorted, ascertained model matrices to corresponding ones of determined weighting values, so that the aligning device and the sorting device are configured to compare the ascertained model matrices and the determined weighting values to each other and to produce an association therebetween, and wherein the determination device is configured to output the weighting values and a selection signal for the ascertaining of the model matrices by selection, and wherein for a given color measurement vector, at least a majority of the model matrices need not be selected, so that an amount of data downstream from the ascertaining device is reduced.
11. The device of claim 10, wherein the image sensor includes optics, microlenses, a color filter array (CFA), light sensors, and an analog-to-digital converter (ADC).
12. The device of claim 11, wherein the optics, the microlenses, and the color filter array each functions as a spectral filter for light L, wherein the color filter array corresponds to a color pattern mounted on or in front of the light sensors, wherein the light sensors function as integrators with respect to a wavelength λ of the light L, and wherein the analog-to-digital converter functions as a quantizer.
13. A sensor system for use with a vehicle, comprising: an image sensor for the vehicle; and a device having a processor configured to perform the following: obtaining, from the image sensor, an image signal representing a measured color value vector having a plurality of input measured values; determining, via a determination device, in accordance with a determination rule, weighting values based on the input measured values; ascertaining, via an ascertaining device, based on the weighting values, model matrices from a plurality of stored model matrices that each is generated for possible input measured values in accordance with a training rule; generating, in accordance with a generating rule, a color reconstruction matrix based on the ascertained model matrices and the corresponding determined weighting values; and applying the generated color reconstruction matrix to the measured color value vector, so as to provide an output color vector that represents a processed color image signal; wherein an aligning device and a sorting device are configured to sort ascertained model matrices and to align sorted, ascertained model matrices to corresponding ones of determined weighting values, so that the aligning device and the sorting device are configured to compare the ascertained model matrices and the determined weighting values to each other and to produce an association therebetween, and wherein the determination device is configured to output the weighting values and a selection signal for the ascertaining of the model matrices by selection, and wherein for a given color measurement vector, at least a majority of the model matrices need not be selected, so that an amount of data downstream from the ascertaining device is reduced.
14. The sensor system of claim 13, wherein the image sensor includes optics, microlenses, a color filter array (CFA), light sensors, and an analog-to-digital converter (ADC).
15. The sensor system of claim 14, wherein the optics, the microlenses, and the color filter array each functions as a spectral filter for light L, wherein the color filter array corresponds to a color pattern mounted on or in front of the light sensors, wherein the light sensors function as integrators with respect to a wavelength λ of the light L, and wherein the analog-to-digital converter functions as a quantizer.
16. A non-transitory computer-readable medium, on which are stored instructions, which are executable by a processor, comprising: a program code arrangement having program code, which is for use with a vehicle, for performing the following: obtaining, from an image sensor of the vehicle, an image signal representing a measured color value vector having a plurality of input measured values; determining, via a determination device, in accordance with a determination rule, weighting values based on the input measured values; ascertaining, via an ascertaining device, based on the weighting values, model matrices from a plurality of stored model matrices that each is generated for possible input measured values in accordance with a training rule; generating, in accordance with a generating rule, a color reconstruction matrix based on the ascertained model matrices and the corresponding determined weighting values; and applying the generated color reconstruction matrix to the measured color value vector, so as to provide an output color vector that represents a processed color image signal; wherein an aligning device and a sorting device are configured to sort ascertained model matrices and to align sorted, ascertained model matrices to corresponding ones of determined weighting values, so that the aligning device and the sorting device are configured to compare the ascertained model matrices and the determined weighting values to each other and to produce an association therebetween, and wherein the determination device is configured to output the weighting values and a selection signal for the ascertaining of the model matrices by selection, and wherein for a given color measurement vector, at least a majority of the model matrices need not be selected, so that an amount of data downstream from the ascertaining device is reduced.
17. The computer-readable medium of claim 16, wherein the image sensor includes optics, microlenses, a color filter array (CFA), light sensors, and an analog-to-digital converter (ADC).
18. The computer-readable medium of claim 17, wherein the optics, the microlenses, and the color filter array each functions as a spectral filter for light L, wherein the color filter array corresponds to a color pattern mounted on or in front of the light sensors, wherein the light sensors function as integrators with respect to a wavelength λ of the light L, and wherein the analog-to-digital converter functions as a quantizer.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(4) In the following description of preferred example embodiments of the present invention, the same or similar reference numerals are used for elements that appear in the different figures and function similarly; a repeated description of these elements is omitted.
(6) Sensor system 110 includes an image sensor 120 or an image sensor device 120, and a device 130. Image sensor 120 is configured to provide an image signal 129; more specifically, it provides image signal 129 in response to, and as a function of, light L. Image signal 129 represents a measured color value vector C⃗ (color measurement vector) including a plurality of input measured values. The measured color value vector relates to a color space defined by image sensor 120. Device 130 is configured to process image signal 129. Image sensor 120 and device 130 are interconnected so as to be able to transmit data and/or signals. According to the example embodiment represented here, sensor system 110 also includes a storage device 140, which is connected to device 130 so as to be able to transmit data and/or signals.
(7) According to the example embodiment represented here, image sensor 120 includes, by way of example, and listed in order from a side of image sensor 120 facing the windshield 102, optics 121, microlenses 122, a color filter array 123 (CFA), light sensors 124, and an analog-to-digital converter 125 (ADC). In this connection, optics 121, microlenses 122, and color filter array 123 each functions as a spectral filter for light L. Color filter array 123 resembles or corresponds to a color pattern mounted on or in front of light sensors 124. Light sensors 124 act as integrators with respect to a wavelength λ of light L. Analog-to-digital converter 125 acts as a quantizer.
(8) Device 130 is configured to carry out image signal processing (ISP), including color reconstruction. In this context, device 130 is configured to generate an output color vector H⃗ (optimized, for example, either for human vision or for computer vision) from image signal 129 of image sensor 120, the output color vector representing a processed image signal 139. Device 130 is discussed in more detail below.
(9) In addition, for purposes of illustration, a first block 104 and a second block 106 are shown in the figure.
(11) Device 130 is configured to read in image signal 129 from the image sensor. In particular, the image signal 129 read in represents a measured color value vector, which relates to a first color space. Determination device 231 is configured to determine weighting values {α} as a function of the input measured values represented by image signal 129, using a determination rule. Determination device 231 is advantageously configured to determine, for each input measured value, a weighting value for each of the stored model matrices in storage device 140. In particular, determination device 231 is configured to determine weighting values {α} that represent a nonlinear subdivision of the first color space. In addition, determination device 231 is configured to output the weighting values {α} determined, as well as a selection signal 232.
(12) Ascertaining device 233 is configured to ascertain model matrices {M} from a plurality of stored model matrices as a function of weighting values {α}.
(13) In this connection, ascertaining device 233 is configured to ascertain model matrices {M} in view of selection signal 232. In addition, ascertaining device 233 is configured to read in the model matrices from storage device 140. Each of the stored model matrices is generated for possible input measured values, using a training rule. Ascertaining device 233 is also configured to output the ascertained model matrices {M}. Model matrices {M} are part of a mathematical model and are optimized with respect to the input values, that is, adapted to specific positions of the individual input values. Model matrices {M} are split up as a function of the range of values and/or different sampling points. Ascertaining device 233 is advantageously configured in such a manner that, for a given color measurement vector 129, most of the model matrices stored in storage device 140 do not need to be selected; consequently, the amount of data downstream from ascertaining device 233 is kept low.
(14) By cooperation, aligning device 234 and sorting device 235 are configured to sort ascertained model matrices {M} and to align sorted, ascertained model matrices {M} to the correct or corresponding, determined weighting values {α}. In other words, aligning device 234 and sorting device 235 are configured to compare ascertained model matrices {M} and determined weighting values {α} to each other and to produce an association between the same.
(15) Generating device 236 is configured to generate a color reconstruction matrix M in accordance with a generating rule, using the ascertained model matrices {M} and the determined weighting values {α}. Generating device 236 is also configured to output color reconstruction matrix M. For each of the ascertained model matrices {M}, generating device 236 is configured to use an assigned value of the determined weighting values {α}. Consequently, a weighting of the ascertained model matrices {M} is obtained as a function of the measured values of the image sensor. In this connection, the dimensions of color reconstruction matrix M are a function of measured color value vector C⃗ and output color vector H⃗. According to an example embodiment, aligning device 234 and sorting device 235 can be constructed as part of generating device 236.
(16) Application device 237 is configured to apply the generated color reconstruction matrix M to measured color value vector C⃗, in order to generate output color vector H⃗, which represents processed image signal 139. In this connection, application device 237 is configured, in particular, to generate an output color vector H⃗ that relates to a second color space different from the first color space of measured color value vector C⃗. In addition, application device 237 or device 130 is configured to output processed image signal 139 or to provide it for output. In practice, device 130 is run through once for each pixel of an entire image signal, that is, 2 million times in the case of a 2-megapixel image.
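As an illustration of this per-pixel run, the following sketch combines weight determination, matrix selection, blending, and application in one function. The tent kernel, the 8-bit-style value range, and the identity model matrices are illustrative assumptions, not the trained matrices of the embodiment:

```python
import itertools
import numpy as np

def tent(d, step):
    """Triangle reconstruction kernel: 1 at d = 0, falling to 0 at |d| = step."""
    return max(0.0, 1.0 - abs(d) / step)

def reconstruct_pixel(c, sample_positions, model_matrices, step):
    """Blend the stored model matrices with per-channel tent weights
    (only matrices with nonzero weight are 'selected') and apply the
    resulting color reconstruction matrix to the pixel vector c."""
    M = np.zeros((3, 3))
    for s, Mp in zip(sample_positions, model_matrices):
        a = 1.0
        for i in range(3):
            a *= tent(s[i] - c[i], step)
        if a > 0.0:          # selection: skip matrices weighted with zero
            M += a * Mp
    return M @ c

# Hypothetical 3-point grid per channel (the embodiment's HDR sensors use
# DN_max = 2**24; an 8-bit-style range is used here purely for readability).
grid = [0.0, 128.0, 256.0]
positions = list(itertools.product(grid, repeat=3))
matrices = [np.eye(3) for _ in positions]          # placeholder model matrices
out = reconstruct_pixel(np.array([64.0, 64.0, 64.0]), positions, matrices, 128.0)
```

With identity placeholder matrices the blended matrix is the identity (the weights sum to 1), so the output equals the input pixel; with trained matrices, the output would be the reconstructed color.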
(18) In a reading-in step 310 in processing method 300, the image signal of the image sensor of the sensor system is read in. The image signal represents a measured color value vector including a plurality of input measured values. In a determination step 320, weighting values are subsequently determined as a function of the input measured values, using a determination rule. In an ascertaining step 330, model matrices are ascertained from a plurality of stored model matrices as a function of the weighting values. In this context, each of the model matrices is generated for possible input measured values, using a training rule.
(19) In a generating step 340, a color reconstruction matrix is subsequently generated in accordance with a generating rule, using the model matrices ascertained in ascertaining step 330 and at least one of the weighting values determined in determination step 320. In an application step 350, the color reconstruction matrix generated in generating step 340 is applied to the measured color value vector, in order to produce an output color vector that represents a processed image signal.
(20) According to an example embodiment, processing method 300 also includes a step 360 of generating the model matrices for possible input measured values, using the training rule. In particular, in generating step 360, each of the model matrices is generated for a separate, possible input measured value.
(21) With reference to the above-described figures, and in order to clarify example embodiments, in the following, backgrounds and bases of example embodiments are explained in general, in summary, in other words, and more specifically.
(22) First, a definition of the human perception of color is given. According to the International Commission on Illumination (CIE, Commission Internationale de l'Éclairage), human color perception is described mathematically by the CIE XYZ color space, that is, the CIE standard colorimetric system. This is based on the physiological fact that human beings have three types of light receptors: for red, green, and blue. The spectral sensitivities of these receptors are used to form weighted integrals over the light spectrum, as follows:
X = ∫_380^780 L(λ)·x̄(λ) dλ  (1)
Y = ∫_380^780 L(λ)·ȳ(λ) dλ  (2)
Z = ∫_380^780 L(λ)·z̄(λ) dλ  (3)
(23) The functions x̄(λ), ȳ(λ), and z̄(λ) are the color-matching functions of the CIE standard observer.
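The discrete form of the weighted integrals above can be sketched numerically as follows. The Gaussian curves are crude stand-ins for the CIE color-matching functions, used only so the example is self-contained; they are not the standardized data:

```python
import numpy as np

# Discrete counterparts of equations (1)-(3): each tristimulus value is a
# weighted sum of the spectrum over wavelength.
lam = np.arange(380.0, 781.0, 5.0)   # wavelength samples in nm
dlam = 5.0                           # sampling step

def gauss(mu, sigma):
    return np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

# Stand-ins for xbar, ybar, zbar (illustrative shapes only).
xbar, ybar, zbar = gauss(600, 40), gauss(550, 40), gauss(450, 30)

L = np.ones_like(lam)                # flat (equal-energy) test spectrum

X = np.sum(L * xbar) * dlam
Y = np.sum(L * ybar) * dlam
Z = np.sum(L * zbar) * dlam
```

Note that X, Y, and Z are linear in the spectrum L(λ), which is what the later matrix formulation relies on.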
(24) Human color perception can be summarized as follows:
(X, Y, Z)^T = ∫_380^780 L(λ)·(x̄(λ), ȳ(λ), z̄(λ))^T dλ  (4)
(26) Therefore, the sensitivity curves given in the standard can be reconstructed using the valid reconstruction function ϕ(λ).
(27) Assuming that the spectra are sampled at the same sampling points λ_s, the equations above can be simplified to:
X = ∫_380^780 Σ_{λ_s} L(λ_s)·Φ(λ−λ_s)·x̄(λ) dλ  (5)
(28) This is done analogously for Y and Z.
(29) The characteristics of a reconstruction function are used, which are as follows:
Φ(0) = 1  (8)
Φ(n·x_s) = 0 for all n ≠ 0, where x_s is the sampling interval  (9)
∫ Φ(x) dx = 1  (10)
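These three properties can be checked numerically for a concrete kernel; the triangle ("tent") kernel below is one common choice satisfying (8)-(10):

```python
import numpy as np

# Triangle ("tent") reconstruction kernel with sampling interval xs.
def tent(x, xs):
    return np.maximum(0.0, 1.0 - np.abs(x) / xs)

xs = 1.0
assert tent(0.0, xs) == 1.0                 # property (8): 1 at the origin
for n in (-2, -1, 1, 2):                    # property (9): 0 at other samples
    assert tent(n * xs, xs) == 0.0
x = np.linspace(-3.0, 3.0, 60001)
integral = np.sum(tent(x, xs)) * (x[1] - x[0])   # property (10), numerically
```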
(30) And in simplified form:
(31)
(32) Now, a definition of the color approximation using color filter array (CFA) sampling is provided. Every color image sensor attempts to imitate human color perception. That is why at least three partially independent color channels are implemented in each color filter array. With knowledge of the color transmission curves of the CFA, the CFA response values can be expressed using the mathematical formulation above. This yields:
CFA_1 = ∫_380^780 L(λ)·T_1(λ) dλ  (15)
CFA_2 = ∫_380^780 L(λ)·T_2(λ) dλ  (16)
CFA_3 = ∫_380^780 L(λ)·T_3(λ) dλ  (17)
where T_i(λ) denotes the transmission curve of the i-th CFA channel.
(33) And, for example, for an RCG color sensor:
CFA_1 = R  (18)
CFA_2 = C  (19)
CFA_3 = G  (20)
(34) The color sensation observed using a camera can be summarized as:
C⃗ = (CFA_1, CFA_2, CFA_3)^T  (21)
(36) The following is a more detailed explanation of the color correction matrix. Since the camera color is normally assumed to be close to the color perceived by humans, it is assumed that the color reconstruction can be accomplished by a linear function (the initial color error is assumed to be small):
H⃗_{3×1} = M_{3×3}·C⃗_{3×1} + O⃗_{3×1}  (22)
H⃗_{3×1} = M_{3×3}·C⃗_{3×1}  (23)
(37) O⃗ is an offset correction and is usually not necessary.
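Equation (23) amounts to a single matrix-vector product per pixel; a minimal sketch with purely illustrative matrix entries:

```python
import numpy as np

# Equation (23) in code: one 3x3 color correction matrix applied to a
# measured color value vector. The matrix entries are illustrative only.
M = np.array([[ 1.6, -0.4, -0.2],
              [-0.3,  1.5, -0.2],
              [-0.1, -0.4,  1.5]])
C = np.array([0.5, 0.4, 0.3])  # measured color value vector (camera color space)
H = M @ C                      # output color vector (target color space)
```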
(38) Assuming that, from a given number of light spectra, at least N pairs of H⃗ and C⃗ are available, the color reconstruction problem can be formulated as follows:
(H⃗_1 … H⃗_N)_{3×N} = M_{3×3}·(C⃗_1 … C⃗_N)_{3×N}  (24)
(39) In the following, the above is formulated as a minimization problem. Assuming that the set of pairs {H⃗, C⃗} is known, but that M_{3×3} is unknown, the best-fitting M can be found by rewriting the problem as a known minimization problem. First,
(H⃗_1 … H⃗_N)_{3×N} = M_{3×3}·(C⃗_1 … C⃗_N)_{3×N}  (25)
(40) is split up into three equations:
(X_1 … X_N)_{1×N} = M_{X,1×3}·(C⃗_1 … C⃗_N)_{3×N}  (26)
(Y_1 … Y_N)_{1×N} = M_{Y,1×3}·(C⃗_1 … C⃗_N)_{3×N}  (27)
(Z_1 … Z_N)_{1×N} = M_{Z,1×3}·(C⃗_1 … C⃗_N)_{3×N}  (28)
(41) where each of the equations can also be written as follows:
(X_1 … X_N)^T_{N×1} = (C⃗_1 … C⃗_N)^T_{N×3}·M_{X,1×3}^T  (29)
and analogously for Y (30) and Z (31).
(43) In shorter notation:
H⃗_X = (X_1 … X_N)^T_{N×1}  (32)
H⃗_Y = (Y_1 … Y_N)^T_{N×1}  (33)
H⃗_Z = (Z_1 … Z_N)^T_{N×1}  (34)
(44) And in the same manner for the sub-matrices M:
M⃗_X = M_X^T (3×1)  (35)
M⃗_Y = M_Y^T (3×1)  (36)
M⃗_Z = M_Z^T (3×1)  (37)
(45) Finally, for the measurements:
C = (C⃗_1 … C⃗_N)^T_{N×3}  (38)
(47) Equations (26) through (28) can be rewritten as:
C·M⃗_X = H⃗_X  (39)
C·M⃗_Y = H⃗_Y  (40)
C·M⃗_Z = H⃗_Z  (41)
(48) These three equations can be combined into one again, by concatenating the vectors, and by repeating the matrix C:
A·x⃗ = b⃗, where A_{3N×9} = diag(C, C, C), x⃗_{9×1} = (M⃗_X; M⃗_Y; M⃗_Z), and b⃗_{3N×1} = (H⃗_X; H⃗_Y; H⃗_Z)  (42)
(50) To minimize this equation, for example, in terms of the least squares error in x, the known method of least squares and a minimization solution method, such as the method of conjugate gradients, can be used.
x⃗ = argmin_x ‖A·x⃗ − b⃗‖₂²  (43)
(52) If these terms are expanded, further simplifications result, and it becomes apparent how matrix A can be calculated from the color measurements.
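The least-squares fit of M from N known pairs can be sketched as follows; the training data here are synthetic, generated from a known ground-truth matrix so the fit can be verified (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: N pairs (C_i, H_i) generated from a known matrix.
M_true = np.array([[1.2, -0.1, 0.0],
                   [0.1,  0.9, 0.1],
                   [0.0, -0.2, 1.1]])
N = 500
C = rng.uniform(0.0, 1.0, size=(3, N))   # columns are measurement vectors C_i
H = M_true @ C                           # corresponding target vectors H_i

# Equations (39)-(41) solved jointly: each row m of M satisfies C^T m^T = h
# in the least-squares sense; lstsq handles the three right-hand sides at once.
M_fit = np.linalg.lstsq(C.T, H.T, rcond=None)[0].T
```

With noiseless data and N much larger than 3, the fitted matrix recovers the ground truth up to floating-point precision.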
(53) In the following, the use of a plurality of color correction matrices is discussed. Assume a color filter array having N_C channels; each sensor generates a digital number DN ∈ [DN_min … DN_max]. For current HDR sensors, DN_min = 0 and DN_max = 2^24. For each individual measurement C⃗, there exists a matrix M(C⃗) that allows the human observation to be reconstructed. A compromise arises for metamers, that is, cases in which a human observer distinguishes two colors for which the machine has only one observation (for example, due to quantization effects).
H⃗ = M(C⃗)·C⃗  (46)
(54) Such an approach would require a prohibitive amount of storage. The approach pursued here is to sample M(C⃗) with regard to C⃗ and to reconstruct it as described above (equation 5):
H⃗ = Σ_i (M(C⃗_i)·Φ(C⃗_i − C⃗))·C⃗  (47)
(55) Using equidistant sampling at the positions S = {0, ½·DN_max, DN_max}, and assuming N_C = 3 color channels, the total number of color correction matrices is obtained:
p_max = |S|^{N_C} = 3³ = 27
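The count of stored matrices follows directly from enumerating the sampling grid; a small sketch (the grid values follow the text, the enumeration itself is generic):

```python
from itertools import product

# One model matrix per grid point of S^(N_C).
DN_max = 2 ** 24
S = (0, DN_max // 2, DN_max)   # equidistant sampling positions per channel
N_C = 3                        # number of color channels

grid = list(product(S, repeat=N_C))
p_max = len(grid)              # |S| ** N_C
```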
(57) Regarding the reconstruction function Φ, the following property can be emphasized:
Φ: ℝ³ → ℝ  (49)
(58) In addition, Φ is separable into identical one-dimensional factors, which results from the fact that the sampling function itself is separable and identical for each dimension:
Φ = Π_{i∈{1…N_C}} Φ_i(·)  (51)
e.g., for RCCG: Φ = Φ_R(·)·Φ_C(·)·Φ_G(·)  (52)
(59) For Φ, a linear interpolation can be used, for example:
Φ(x) = max(0, 1 − |x|/x_s)
(61) Alternatively, Φ = rect(·) can be used, which yields a nearest-neighbor interpolation; with regard to hardware expenditure, this can be computationally more efficient.
(62)
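The nearest-neighbor variant can be sketched in one dimension as follows (the 8-bit-style grid is illustrative; the sensors in the text use DN_max = 2^24):

```python
# Nearest-neighbor kernel Phi = rect: weight 1 for the single nearest sampling
# point (within half a sampling interval), 0 otherwise.
def rect_weight(d, step):
    return 1.0 if abs(d) < step / 2.0 else 0.0

grid = [0.0, 128.0, 256.0]
weights = [rect_weight(s - 100.0, 128.0) for s in grid]   # pixel value 100
```

Exactly one weight is 1 and the rest are 0, so only a single color correction matrix has to be fetched per pixel.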
(63) With regard to implementation, the color reconstruction can be simplified to:
H⃗ = (Σ_p α_p(C⃗)·M_p)·C⃗  (56)
α_p(C⃗) = Π_{i∈{1…N_C}} Φ(S(p_i) − C(i))  (57)
(65) In the case of a CFA having three channels and three sampling points of the color correction matrix, the transformations are:
S(1) = 0  (58)
S(2) = D = ½·DN_max  (59)
S(3) = 2·D = DN_max  (60)
C(1) = R(pix)  (61)
C(2) = C(pix)  (62)
C(3) = G(pix)  (63)
(66) This means that there are a number of α_p(C⃗) values, but that most of the α values are 0:
Σ_p α_p(C⃗) = 1
For linear (trilinear) interpolation, at most 2^{N_C} = 8 of the α_p are nonzero; for nearest-neighbor interpolation, exactly one α_p is nonzero.
(68) Depending on the selection of Φ, other patterns of nonzero α values are possible.
(69) It should be mentioned that α_p(C⃗) is a function of C⃗ and therefore must be calculated for each occurring pixel in the color reconstruction operation.
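The per-pixel computation of the α_p values can be sketched as follows, here for the trilinear (tent-kernel) case on an illustrative 8-bit-style grid (the text uses DN_max = 2^24):

```python
from itertools import product
import numpy as np

def alpha_weights(c, S, step):
    """Per-matrix weights alpha_p(C) in the style of equation 57: a product
    of per-channel tent kernels over the sampling grid S^3 (trilinear case)."""
    def tent(d):
        return max(0.0, 1.0 - abs(d) / step)
    return {p: np.prod([tent(p[i] - c[i]) for i in range(3)])
            for p in product(S, repeat=3)}

S = (0.0, 128.0, 256.0)
w = alpha_weights((40.0, 200.0, 130.0), S, 128.0)
nonzero = {p: a for p, a in w.items() if a > 0.0}
total = sum(w.values())
```

The weights sum to 1 and at most 8 of the 27 grid points carry a nonzero weight, matching the selection behavior described above.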
(70) In the above examples, either one or eight matrices are computed, as a function of computing capacity and hardware resources, in order to carry out the color reconstruction. The other matrices become zero, since their weights α_p are 0.
(71) Regarding the ascertainment of the matrix values, it is mentioned once more that there is a known number of pairs {H⃗, C⃗}. In addition, it is assumed that the sampling positions for the M_p are known, which allows the α values to be calculated for each C⃗ in accordance with equation 57.
(72)
(73) In the last step, only a shorter notation has been introduced.
(74) Introducing N known pairs of {H⃗, C⃗} measurements yields:
(H⃗_1, …, H⃗_N)_{3×N} = Σ_p M_p·(α_p(C⃗_1)·C⃗_1, …, α_p(C⃗_N)·C⃗_N)_{3×N}
(75) Introducing the matrix of weighted color measurements yields:
C_{α_p} = (α_p(C⃗_1)·C⃗_1, …, α_p(C⃗_N)·C⃗_N)_{3×N}, so that (H⃗_1, …, H⃗_N) = Σ_p M_p·C_{α_p}
(77) And after splitting up the formulas into the components of H⃗, the result is:
H⃗_X^T = Σ_p M_{X,p}·C_{α_p}
H⃗_Y^T = Σ_p M_{Y,p}·C_{α_p}
H⃗_Z^T = Σ_p M_{Z,p}·C_{α_p}
(78) Using the transposition rule for matrix products yields:
H⃗_X = Σ_p C_{α_p}^T·M⃗_{X,p}
H⃗_Y = Σ_p C_{α_p}^T·M⃗_{Y,p}
H⃗_Z = Σ_p C_{α_p}^T·M⃗_{Z,p}
(79) Combined, the following system of equations is formed:
A·x⃗ = b⃗, where x⃗ stacks the vectors M⃗_{X,p}, M⃗_{Y,p}, M⃗_{Z,p} for all p, and b⃗ = (H⃗_X; H⃗_Y; H⃗_Z)
(81) For the case of N_C = 3 color channels and |S| = 3 sampling points per CFA channel, the following dimensions are obtained for training with N color spectra:
p_max = 27  (80)
A_{3N×(3·3·p_max)} = A_{3N×243}  (81)
A_{3N×243}·x_{243×1} = b_{3N×1}  (82)
(82) Assuming minimization according to the method of least squares, the specific numbers in this example result in:
A^T_{243×3N}·A_{3N×243}·x_{243×1} = A^T_{243×3N}·b_{3N×1}  ⇔  B_{243×243}·x_{243×1} = c⃗_{243×1}  (83)
(83) The first step is a large matrix multiplication, in particular for the case in which N is large. For example, if N = 5·10⁴ spectral values are to be optimized, this calculation has a storage requirement of approximately 2·300 megabytes. It can be performed offline in an efficient manner on a typical PC with MATLAB or Python installed, since it fits in RAM.
(84) The second step is a matrix optimization for a resulting matrix having a size of approximately 243×243·32 bit ≈ 1 megabyte. This is also simple to handle using a standard PC.
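The normal-equations step of equation (83) can be sketched on a scaled-down toy system; the random data and reduced dimensions are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy instance of equation (83): instead of solving the tall system A x = b
# directly, form the small normal equations B x = c with B = A^T A. Sizes are
# scaled down from the 3N x 243 problem in the text.
A = rng.normal(size=(300, 24))     # stands in for the 3N x 243 design matrix
x_true = rng.normal(size=24)
b = A @ x_true

B = A.T @ A                        # 24 x 24: cheap to store and factor
c = A.T @ b
x = np.linalg.solve(B, c)
```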
(85) Different values for the individual C⃗'s are produced as a function of the selection of the initial color filter arrays that generate the {C⃗} measurements. The result can be that either more sampling points M_p have to be introduced, and therefore more matrices have to be generated from which to choose when the color is reconstructed; or that the positions of the sampling points of the model matrices have to be changed, in which case the weighted measurement matrices C_{α_p} have to be recomputed.
(86) Optimization problems are normally calculated with floating-point precision. However, since the final product will most likely compute in fixed-point arithmetic, the entire optimization can also be carried out directly in fixed point. Thus, a loss in precision can be prevented if the ascertained matrix values are used, and it can be determined how much fixed-point precision is needed to obtain the desired result.
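The fixed-point consideration can be illustrated by quantizing matrix coefficients and inspecting the rounding error; the coefficients and bit width below are illustrative, not values from the text:

```python
import numpy as np

def to_fixed_point(M, frac_bits):
    """Quantize coefficients to signed fixed point with `frac_bits` fractional
    bits and return the dequantized values, so the rounding error is visible."""
    scale = 2.0 ** frac_bits
    return np.round(M * scale) / scale

# Illustrative color-correction coefficients.
M = np.array([[ 1.602, -0.387, -0.215],
              [-0.291,  1.504, -0.213],
              [-0.104, -0.398,  1.502]])
M_q = to_fixed_point(M, 10)        # 10 fractional bits: step of 1/1024
err = np.max(np.abs(M_q - M))
```

The worst-case rounding error is half a quantization step; increasing the number of fractional bits halves it per bit, which is how the needed fixed-point precision can be estimated.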
(87) In the case of color reconstruction hardware having the nonlinear characteristics described above, the hardware requirements could sometimes be excessively high. For trilinear interpolation between the 8 nearest matrices M_p, 8 parallel color correction matrices are needed. In the case of nearest-neighbor calculation, however, there is by definition only one color correction matrix, which is to be ascertained for each occurring pixel. The latter is a standard problem and can be solved highly efficiently by conventional methods of hardware logic generation.
(88) Therefore, for the case in which trilinear reconstruction is out of the question, it is proposed that the nearest-neighbor approach be used and that the linear transformation be compensated for by introducing more than 27 sampling points for M_p.
(89) However, for each received color value to be reconstructed, the α value of this color must be calculated. In the case of trilinear reconstruction, α lies in the range [0, 1], whereas it is either 0 or 1 for nearest neighbors.
(90) The computing device for calculating the α values can be optimized with regard to speed, power, and gate count. It is estimated that, using p_max = 128 matrices M_p and nearest-neighbor reconstruction, sufficiently good results can be attained; with trilinear interpolation, even excellent quality can be obtained.
(91) The method according to example embodiments can also be applied to the reconstruction of other color spaces and color palettes. It has been assumed that human color perception is reproduced in the XYZ color space; however, using different color spaces and, in particular, color spaces having a separate chromaticity level, the human chromaticity impression is also able to be defined by the chromaticity level alone. For the CIE LUV color space, this yields:
(92)
(93) This can be used in the explanations above, in order to obtain the target color space directly in the following manner:
(94)
(95) This allows the matrices to be contracted and enables a faster search for the correct sampling points and for the number of sampling points. The condition of the matrices can be improved by adding more sampling points or by changing the positions of the sampling points.
(96) If an example embodiment includes an “and/or” conjunction between a first feature and a second feature, then this is to be read such that, according to an example embodiment, the example embodiment includes both the first feature and the second feature, and according to another example embodiment, the example embodiment includes either only the first feature or only the second feature.