Method for rendering color images
11527216 · 2022-12-13
Assignee
Inventors
Cpc classification
G09G2320/0666
PHYSICS
G09G2320/0242
PHYSICS
G09G3/344
PHYSICS
G09G2320/0209
PHYSICS
G09G2320/0214
PHYSICS
International classification
G09G3/20
PHYSICS
Abstract
A system for rendering color images on an electro-optic display when the electro-optic display has a color gamut with a limited palette of primary colors, and/or the gamut is poorly structured (i.e., not a spheroid or obloid). The system uses an iterative process to identify the best color for a given pixel from a palette that is modified to diffuse the color error over the entire electro-optic display. The system additionally accounts for variations in color that are caused by cross-talk between nearby pixels.
Claims
1. A method of rendering color images on an electro-optic display comprising a plurality of pixels, each pixel including an independently-controllable electrode, wherein each pixel can produce a plurality of colors, resulting in a color gamut derived from a palette of primary colors, the method comprising: a. receiving a plurality of input values, wherein each input value represents a color of a pixel of an image to be rendered; b. adding an error value to each input value after the first input value to produce a modified input value, wherein the error value is derived from at least one previously-processed input value; c. if the modified input value produced in step b is outside the color gamut, projecting the modified input value onto the color gamut to produce a projected modified input value; d. for each input value after the first input value, modifying the palette of primary colors based on an output value corresponding to at least one previously-processed pixel, thereby producing a modified palette; e. comparing the modified input value from step b or the projected modified input value from step c with the primary colors in the modified palette, selecting a primary color with the smallest error, and outputting the value of the primary color with the smallest error as the color value for the pixel corresponding to the input value being processed; f. calculating the difference between the modified input value or projected modified input value used in step e and the value of the primary color with the smallest error output from step e to derive an error value, and using at least a fraction of the derived error value as the error value input to step b for at least one later-processed input value; g. using the value of the primary color with the smallest error output from step e in step d for at least one later-processed input value; and h. providing instructions to the independently-controllable electrode to cause the pixel to display the primary color with the smallest error output.
2. The method of claim 1 further comprising displaying an image on a display device having the color gamut used in the method, wherein the image is displayed using primary colors from the modified palette.
3. The method of claim 1 wherein the projection in step c is effected along lines of constant brightness and hue in a linear RGB color space onto the nominal gamut.
4. The method of claim 1 wherein the comparison in step e is effected using a minimum Euclidean distance quantizer in a linear RGB space.
5. The method of claim 1 wherein the comparison in step e is effected using barycentric thresholding.
6. The method of claim 5 wherein the color gamut used in step c is that of the modified palette used in step e of the method.
7. The method of claim 1 wherein the plurality of input values are processed in an order corresponding to a raster scan of the pixels, and in step d the modification of the palette of primary colors is based on output values corresponding to a pixel in the previously-processed row which shares an edge with the pixel corresponding to the input value being processed, and a previously-processed pixel in the same row which shares an edge with the pixel corresponding to the input value being processed.
8. The method of claim 1 wherein step c is effected by computing the intersection of the projection with the surface of the gamut and step e is effected by (i) if the output of step b is outside the gamut, the triangle which encloses the aforementioned intersection is determined, the barycentric weights for each vertex of this triangle are determined, and the output from step e is the value of the triangle vertex having the largest barycentric weight; or (ii) if the output of step b is within the gamut, the output from step e is the value of the nearest primary calculated by Euclidean distance.
9. The method of claim 8 wherein the projection is effected so as to preserve the hue angle of the input to step c.
10. The method of claim 1 wherein step c is effected by computing the intersection of the projection with the surface of the gamut and step e is effected by (i) if the output of step b is outside the gamut, the triangle which encloses the aforementioned intersection is determined, the barycentric weights for each vertex of this triangle are determined, and the barycentric weights thus calculated are compared with the value of a blue-noise mask at the pixel location, the output from step e being the value of the color of the triangle vertex at which the cumulative sum of the barycentric weights exceeds the mask value; or (ii) if the output of step b is within the gamut, the output from step e is the value of the nearest primary calculated by Euclidean distance.
11. The method of claim 10 wherein the projection is effected so as to preserve the hue angle of the input to step c.
12. The method of claim 1 wherein step c is effected by computing the intersection of the projection with the surface of the gamut and step e is effected by (i) if the output of step b is outside the gamut, the triangle which encloses the aforementioned intersection is determined, the primary colors which lie on the convex hull are determined, and the output from step e is the value of the closest primary color lying on the convex hull calculated by Euclidean distance; or (ii) if the output of step b is within the gamut, the output from step e is the value of the nearest primary calculated by Euclidean distance.
13. The method of claim 12 wherein the projection is effected so as to preserve the hue angle of the input to step c.
14. The method of claim 1 further comprising: (i) identifying pixels of the display which fail to switch correctly, and the colors presented by such defective pixels; (ii) in the case of each defective pixel, outputting from step e the value of the color presented by the defective pixel; and (iii) in the case of each defective pixel, in step f calculating the difference between the modified or projected modified input value and the value of the color presented by the defective pixel.
15. The method of claim 1 wherein the color gamut used in step c is derived by: (1) measuring test patterns to derive information about cross-talk among adjacent primaries; (2) converting the measurements from step (1) to a blooming model that predicts the displayed color of arbitrary patterns of primary colors; (3) using the blooming model derived in step (2) to predict actual display colors of patterns that would normally be used to produce colors on the convex hull of the primary colors; and (4) describing a gamut surface realizable using the predictions made in step (3).
Description
BRIEF DESCRIPTION OF DRAWINGS
(1) The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
DETAILED DESCRIPTION
(25) A preferred embodiment of the method of the invention is illustrated in
(26) As noted in the aforementioned Pappas paper, one well-known issue in model-based error diffusion is that the process can become unstable, because the input image is assumed to lie in the (theoretical) convex hull of the primaries (i.e., the color gamut), but the actual realizable gamut is likely smaller because of gamut loss due to dot overlap. The error diffusion algorithm may therefore be trying to achieve colors which cannot actually be achieved in practice, and the error continues to grow with each successive “correction”. It has been suggested that this problem be contained by clipping or otherwise limiting the error, but this leads to other errors.
(27) The present method suffers from the same problem. The ideal solution would be to have a better, non-convex estimate of the achievable gamut when performing gamut mapping of the source image, so that the error diffusion algorithm can always achieve its target color. It may be possible to approximate this from the model itself, or to determine it empirically. However, neither of these correction methods is perfect, and hence a gamut projection block (gamut projector 206) is included in preferred embodiments of the present method. This gamut projector 206 is similar to that proposed in the aforementioned application Ser. No. 15/592,515, but serves a different purpose; in the present method, the gamut projector is used to keep the error bounded, but in a more natural way than truncating the error, as in the prior art. Instead, the error-modified image is continually clipped to the nominal gamut boundary.
(28) The gamut projector 206 is provided to deal with the possibility that, even though the input values x.sub.i,j are within the color gamut of the system, the modified inputs u.sub.i,j may not be, i.e., that the error correction introduced by the error filter 106 may take the modified inputs u.sub.i,j outside the color gamut of the system. In such a case, the quantization effected later in the method may produce unstable results, since it is not possible to generate a proper error signal for a color value which lies outside the color gamut of the system. Although other ways of handling this problem can be envisioned, the only one which has been found to give stable results is to project the modified value u.sub.i,j on to the color gamut of the system before further processing. This projection can be done in numerous ways; for example, projection may be effected towards the neutral axis along constant lightness and hue, thus preserving chrominance and hue at the expense of saturation; in the L*a*b* color space this corresponds to moving radially inwardly towards the L* axis parallel to the a*b* plane, but in other color spaces the projection will be less straightforward. In the presently preferred form of the present method, the projection is along lines of constant brightness and hue in a linear RGB color space on to the nominal gamut. (But see below regarding the need to modify this gamut in certain cases, such as use of barycentric thresholding.) Better and more rigorous projection methods are possible. Note that although it might at first appear that the error value e.sub.i,j (calculated as described below) should be calculated using the original modified input u.sub.i,j rather than the projected input (designated u′.sub.i,j in
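By way of illustration, the constant-lightness projection described above can be sketched as follows, assuming (purely for this example) that the nominal gamut is the unit cube in linear RGB and that Rec. 709 luma weights are used; the function name and weights are illustrative and not taken from the patent:

```python
def project_to_unit_cube(u, luma=(0.2126, 0.7152, 0.0722)):
    # Illustrative sketch: project an out-of-gamut linear-RGB color u
    # toward the achromatic (gray) axis at constant luminance, assuming
    # the nominal gamut is the unit RGB cube.  The patent projects onto
    # the device gamut hull instead, but the geometry is analogous.
    Y = sum(c * w for c, w in zip(u, luma))
    g = (Y, Y, Y)          # luminance-matched point on the achromatic axis
    s = 1.0                # fraction of the way back out from gray toward u
    for gi, ui in zip(g, u):
        d = ui - gi
        if d > 0 and ui > 1.0:        # channel overshoots the top face
            s = min(s, (1.0 - gi) / d)
        elif d < 0 and ui < 0.0:      # channel undershoots the bottom face
            s = min(s, -gi / d)
    return tuple(gi + s * (ui - gi) for gi, ui in zip(g, u))
```

In-gamut colors are returned unchanged (s stays 1), while out-of-gamut colors land exactly on the cube surface with their luminance preserved.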
(29) The modified input values are fed to a quantizer 208, which also receives a set of primaries; the quantizer 208 examines the primaries for the effect that choosing each would have on the error, and the quantizer chooses the primary with the least (by some metric) error if chosen. However, in the present method, the primaries fed to the quantizer 208 are not the natural primaries of the system, {P.sub.k}, but are an adjusted set of primaries, {P.sup.˜.sub.k}, which allow for the colors of at least some neighboring pixels, and their effect on the pixel being quantized by virtue of blooming or other inter-pixel interactions.
(30) The currently preferred embodiment of the method of the invention uses a standard Floyd-Steinberg error filter and processes pixels in raster order. Assuming, as is conventional, that the display is treated top-to-bottom and left-to-right, it is logical to use the above and left cardinal neighbors of the pixel being considered to compute blooming or other inter-pixel effects, since these two neighboring pixels have already been determined. In this way, all modeled errors caused by adjacent pixels are eventually accounted for, since crosstalk from the right and below neighbors is captured when those neighbors are themselves visited. If the model only considers the above and left neighbors, the adjusted set of primaries must be a function of the states of those neighbors and the primary under consideration. The simplest approach is to assume that the blooming model is additive, i.e., that the color shift due to the left neighbor and the color shift due to the above neighbor are independent and additive. In this case, there are only “N choose 2” (equal to N*(N−1)/2) model parameters (color shifts) that need to be determined. For N=64 or less, these can be estimated from colorimetric measurements of checkerboard patterns of all these possible primary pairs by subtracting the ideal mixing law value from the measurement.
(31) To take a specific example, consider the case of a display having 32 primaries. If only the above and left neighbors are considered, for 32 primaries there are 496 possible adjacent sets of primaries for a given pixel. Since the model is linear, only these 496 color shifts need to be stored, since the additive effect of both neighbors can be produced during run time without much overhead. So, for example, if the unadjusted primary set comprises (P.sub.1 . . . P.sub.32) and the current above and left neighbors are P.sub.4 and P.sub.7, the adjusted primaries (P.sup.˜.sub.1 . . . P.sup.˜.sub.32) fed to the quantizer are given by:
P.sup.˜.sub.1=P.sub.1+dP.sub.(1,4)+dP.sub.(1,7);
P.sup.˜.sub.32=P.sub.32+dP.sub.(32,4)+dP.sub.(32,7),
where dP.sub.(i,j) are the empirically determined values in the color shift table.
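The additive blooming adjustment described above can be sketched as follows; the table `dP`, its keys, and the numeric shift values used in the test are hypothetical placeholders for the empirically measured color shifts:

```python
def adjust_primaries(primaries, dP, up, left):
    # Additive blooming model: each nominal primary P_k is shifted by the
    # stored color shifts for its pairing with the already-decided above
    # and left neighbors, i.e. P~_k = P_k + dP(k, up) + dP(k, left).
    zero = (0.0, 0.0, 0.0)
    adjusted = []
    for k, P in enumerate(primaries):
        s_up = dP.get((k, up), zero)      # shift from the above neighbor
        s_left = dP.get((k, left), zero)  # shift from the left neighbor
        adjusted.append(tuple(p + a + b for p, a, b in zip(P, s_up, s_left)))
    return adjusted
```

Because the model is linear, only the pairwise shift table needs to be stored; the sum over the two neighbors is formed at run time.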
(32) More complicated inter-pixel interaction models are of course possible, for example nonlinear models, models taking account of corner (diagonal) neighbors, or models using a non-causal neighborhood for which the color shift at each pixel is updated as more of its neighbors are known.
(33) The quantizer 208 compares the adjusted inputs u′.sub.i,j with the adjusted primaries {P.sup.˜.sub.k} and outputs the most appropriate primary y.sub.i,j to an output. Any appropriate method of selecting the appropriate primary may be used, for example a minimum Euclidean distance quantizer in a linear RGB space; this has the advantage of requiring less computing power than some alternative methods. Alternatively, the quantizer 208 may effect barycentric thresholding (choosing the primary associated with the largest barycentric coordinate), as described in the aforementioned application Ser. No. 15/592,515. It should be noted, however, that if barycentric thresholding is employed, the adjusted primaries {P.sup.˜.sub.k} must be supplied not only to the quantizer 208 but also to the gamut projector 206 (as indicated by the broken line in
(34) The y.sub.i,j output values from the quantizer 208 are fed not only to the output but also to a neighborhood buffer 210, where they are stored for use in generating adjusted primaries for later-processed pixels. The modified input u′.sub.i,j values and the output y.sub.i,j values are both supplied to a processor 212, which calculates:
e.sub.i,j=u′.sub.i,j−y.sub.i,j
and passes this error signal on to the error filter 106 in the same way as described above with reference to
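Paragraphs (28)-(34) together describe one pass of model-based error diffusion. A minimal sketch of that loop, with the gamut projector and the blooming-adjusted palette omitted for brevity, and a plain Euclidean quantizer with Floyd-Steinberg weights substituted, might look like:

```python
# Floyd-Steinberg weights as (row offset, column offset, weight).
FS_WEIGHTS = ((0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16))

def error_diffuse(image, palette):
    # Simplified sketch of the loop in the text: add accumulated error to
    # each pixel, quantize to the nearest palette color by Euclidean
    # distance, and diffuse the residual error to unvisited neighbors.
    rows, cols = len(image), len(image[0])
    err = [[(0.0, 0.0, 0.0)] * cols for _ in range(rows)]
    out = [[None] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # u_{i,j} = x_{i,j} + accumulated error
            u = tuple(x + e for x, e in zip(image[i][j], err[i][j]))
            # y_{i,j} = nearest primary (ties go to the earlier palette entry)
            y = min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(u, p)))
            out[i][j] = y
            # e_{i,j} = u_{i,j} - y_{i,j}, diffused to later pixels
            e = tuple(a - b for a, b in zip(u, y))
            for di, dj, w in FS_WEIGHTS:
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    err[ni][nj] = tuple(a + w * b for a, b in zip(err[ni][nj], e))
    return out
```

On a uniform mid-gray patch with a black/white palette this produces the expected alternating pattern, since each quantization error pushes the neighboring pixels toward the opposite primary.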
(35) TB Method
(36) As indicated above, the TB variant of the present method may be summarized as follows: 1. Determine the convex hull of the device color gamut; 2. For a color (EMIC) outside the gamut convex hull: a. Project back onto the gamut boundary along some line; b. Compute the intersection of that line with the triangles which make up the surface of the gamut; c. Find the triangle which encloses the color and the associated barycentric weights; d. Determine the dithered color by the triangle vertex having the largest barycentric weight. 3. For a color (EMIC) inside the convex hull, determine the “nearest” primary color from the primaries, where “nearest” is calculated as a Euclidean distance in the color space, and use the nearest primary as the dithered color.
(37) A preferred method for implementing this three-step algorithm in a computationally-efficient, hardware-friendly manner will now be described, though by way of illustration only, since numerous variations of the specific method described will readily be apparent to those skilled in the digital imaging art.
(38) As already noted, Step 1 of the algorithm is to determine whether the EMIC (hereinafter denoted u) is inside or outside the convex hull of the color gamut. For this purpose, consider a set of adjusted primaries PP.sub.k, which correspond to the set of nominal primaries P modified by a blooming model, as discussed above. It follows from simple geometry that the point u is outside the convex hull if
n.sub.k.Math.(u−v.sub.k.sup.1)<0 for at least one k (4)
where “.Math.” represents the (vector) dot product and wherein the normal vectors n.sub.k are defined as pointing inwardly. Crucially, the vertices v.sub.k and normal vectors n.sub.k can be precomputed and stored ahead of time. Furthermore, Equation (4) can readily be calculated by computer in a simple manner by
(39) t′.sub.k=Σ[n.sub.k º (u−v.sub.k.sup.1)] (5)
where “º” is the Hadamard (element-by-element) product.
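The inside/outside test of Equations (4) and (5) can be sketched directly; the representation of each hull face as a (vertex, inward normal) pair, and the cube-shaped hull used in the test, are assumptions of this example:

```python
def dot(a, b):
    # Plain (vector) dot product, i.e. the sum of the Hadamard product.
    return sum(x * y for x, y in zip(a, b))

def outside_hull(u, faces):
    # faces: precomputed (vertex, inward-normal) pairs, one per hull
    # triangle.  u lies outside the hull when it falls on the outward
    # side of any face, i.e. n_k . (u - v_k) < 0 for at least one k.
    return any(dot(n, tuple(a - b for a, b in zip(u, v))) < 0
               for v, n in faces)
```

A real gamut hull would be triangulated offline; for the axis-aligned cube in the test, the six face planes suffice.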
(40) If u is found to be outside the convex hull, it is necessary to define the projection operator which projects u back on to the gamut surface. The preferred projection operator has already been defined by Equations (2) and (3) above. As previously noted, this projection line is that which connects u and a point on the achromatic axis which has the same lightness. The direction of this line is
d=u−V.sub.y (6)
so that the equation of the projection line can be written as
u=V.sub.y+(1−t)d (7)
where 0≤t≤1. Now, consider the k.sup.th triangle in the convex hull and express the location of some point x.sub.k within that triangle in terms of its edges e.sub.k.sup.1 and e.sub.k.sup.2
x.sub.k=v.sub.k.sup.1+e.sub.k.sup.1p.sub.k+e.sub.k.sup.2q.sub.k (8)
where e.sub.k.sup.1=v.sub.k.sup.2−v.sub.k.sup.1 and e.sub.k.sup.2=v.sub.k.sup.3−v.sub.k.sup.1 and p.sub.k, q.sub.k are barycentric coordinates. Thus, the representation of x.sub.k in barycentric coordinates (p.sub.k, q.sub.k) is
x.sub.k=v.sub.k.sup.1(1−p.sub.k−q.sub.k)+v.sub.k.sup.2p.sub.k+v.sub.k.sup.3q.sub.k (9)
(41) From the definitions of barycentric coordinates and the line length t, the line intercepts the k.sup.th triangle in the convex hull if and only if:
0≤t.sub.k≤1
p.sub.k≥0
q.sub.k≥0
p.sub.k+q.sub.k≤1 (10)
If a parameter L is defined as:
(42) L.sub.k=n.sub.k.Math.d (11)
then the distance t.sub.k is simply given by
(43) t.sub.k=t′.sub.k/L.sub.k (12)
Thus, the parameter used in Equation (4) above to determine whether the EMIC is inside or outside the convex hull can also be used to determine the distance from the color to the triangle which is intercepted by the projection line.
(44) The barycentric coordinates are only slightly more difficult to compute. From simple geometry:
(45) p.sub.k=(d.Math.p.sub.k′)/L.sub.k, q.sub.k=(d.Math.q.sub.k′)/L.sub.k (13)
where
p.sub.k′=(u−v.sub.k.sup.1)×e.sub.k.sup.2
q.sub.k′=(u−v.sub.k.sup.1)×e.sub.k.sup.1 (14)
and “x” is the (vector) cross product.
(46) In summary, the computations necessary to implement the preferred form of the three-step algorithm previously described are: (a) Determine whether a color is inside or outside the convex hull using Equation (5); (b) If the color is outside the convex hull, determine on which triangle of the convex hull the color is to be projected by testing each of the k triangles forming the hull using Equations (10)-(14); (c) For the one triangle k=j where all of the Equations (10) are true, calculate the projection point u′ by:
u′=V.sub.y+(1−t.sub.j)d (15)
and its barycentric weights by:
α.sub.u=[1−p.sub.j−q.sub.j,p.sub.j,q.sub.j] (16)
These barycentric weights are then used for dithering, as previously described.
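The computation summarized above can be sketched end-to-end. This example substitutes a textbook ray/triangle intersection and a standard barycentric solve for the patent's division-free forms, so the intermediate quantities differ, but the selected vertex is the same; the function and argument names are illustrative:

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def project_and_pick(u, Vy, triangles):
    # Sketch of the TB projection: walk the hull triangles, find the one
    # crossed by the line from u toward the equal-lightness achromatic
    # point Vy, and return the vertex with the largest barycentric weight.
    d = sub(u, Vy)                        # direction of the projection line
    for v1, v2, v3 in triangles:
        e1, e2 = sub(v2, v1), sub(v3, v1)
        n = cross(e1, e2)                 # face normal (unnormalized)
        denom = dot(n, d)
        if denom == 0:
            continue                      # line parallel to this face
        t = dot(n, sub(u, v1)) / denom    # t = t'/L, as in Eq. (12)
        if not (0.0 <= t <= 1.0):
            continue
        x = tuple(vy + (1 - t) * di for vy, di in zip(Vy, d))  # Eq. (15)
        # Solve x - v1 = p*e1 + q*e2 for the barycentric coords (p, q).
        w = sub(x, v1)
        d11, d12, d22 = dot(e1, e1), dot(e1, e2), dot(e2, e2)
        det = d11 * d22 - d12 * d12
        p = (d22 * dot(w, e1) - d12 * dot(w, e2)) / det
        q = (d11 * dot(w, e2) - d12 * dot(w, e1)) / det
        if p >= 0 and q >= 0 and p + q <= 1:
            weights = (1 - p - q, p, q)   # Eq. (16)
            return (v1, v2, v3)[weights.index(max(weights))]
    return None                           # no crossing: u was inside the hull
```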
(47) If the opponent-like color space defined by Equation (1) is adopted, u consists of one luminance component and two chrominance components, u=[u.sub.L, u.sub.O1, u.sub.O2], and under the projection operation of Equation (6), d=[0, u.sub.O1, u.sub.O2], since the projection is effected directly towards the achromatic axis.
t.sub.k=(u−v.sub.k.sup.1)=[t.sub.k.sup.1,t.sub.k.sup.2,t.sub.k.sup.3],
e.sub.k.sup.1=[e.sub.k.sup.11,e.sub.k.sup.12,e.sub.k.sup.13]
e.sub.k.sup.2=[e.sub.k.sup.21,e.sub.k.sup.22,e.sub.k.sup.23]
e.sub.k.sup.3=[e.sub.k.sup.31,e.sub.k.sup.32,e.sub.k.sup.33] (17)
By expanding the cross product and dropping terms that evaluate to zero, it is found that
p.sub.k′=[t.sub.k.sup.3ºe.sub.k.sup.21−t.sub.k.sup.1ºe.sub.k.sup.23,t.sub.k.sup.1ºe.sub.k.sup.22−t.sub.k.sup.2ºe.sub.k.sup.21]
q.sub.k′=[t.sub.k.sup.3ºe.sub.k.sup.11−t.sub.k.sup.1ºe.sub.k.sup.13,t.sub.k.sup.1ºe.sub.k.sup.12−t.sub.k.sup.2ºe.sub.k.sup.11] (18)
Equation (18) is trivial to compute in hardware, since it only requires multiplications and subtractions.
(48) Accordingly, an efficient, hardware-friendly dithering TB method of the present invention can be summarized as follows: 1. Determine (offline) the convex hull of the device color gamut and the corresponding edges and normal vectors of the triangles comprising the convex hull; 2. For all k triangles in the convex hull, compute Equation (5) to determine if the EMIC u lies outside the convex hull; 3. For a color u lying outside the convex hull: a. For all k triangles in the convex hull, compute Equations (12), (18), (2), (3), (6) and (13); b. Determine the one triangle j which satisfies all conditions of Equation (10); c. For triangle j, compute the projected color u′ and the associated barycentric weights from Equations (15) and (16) and choose as the dithered color the vertex corresponding to the maximum barycentric weight; 4. For a color (EMIC) inside the convex hull, determine the “nearest” primary color from the primaries, where “nearest” is calculated as a Euclidean distance in the color space, and use the nearest primary as the dithered color.
(49) From the foregoing, it will be seen that the TB variant of the present method imposes much lower computational requirements than the variants previously discussed, thus allowing the necessary dithering to be deployed on relatively modest hardware.
(50) However, further computational efficiencies are possible, as follows: For out-of-gamut colors, consider only computations against a small number of candidate boundary triangles; this is a significant improvement over the previous method, in which all gamut boundary triangles were considered. For in-gamut colors, compute the “nearest neighbor” operation using a binary tree, which uses a precomputed binary space partition; this improves the computation time from O(N) to O(log N), where N is the number of primaries.
(51) The condition for a point u to be outside the convex hull has already been given in Equation (4) above. As already noted, the vertices v.sub.k and normal vectors can be precomputed and stored ahead of time. Equation (5) above can alternatively be written:
t′.sub.k=n.sub.k.Math.(u−v.sub.k) (5A)
and hence we know that only triangles k for which t′.sub.k<0 correspond to a u which is out of gamut. If all t′.sub.k>0, then u is in gamut.
(52) The distance from a point u to the point where it intersects a triangle k is given by t.sub.k, where t.sub.k is given by Equation (12) above, with L being defined by Equation (11) above. Also, as discussed above, if u is outside the convex hull, it is necessary to define the projection operator which moves the point u back to the gamut surface. The line along which we project in step 2(a) can be defined as a line which connects the input color u and V.sub.y, where
V.sub.y=w+α(w−b) (50)
and w, b are the respective white point and black point in opponent space. The scalar α is found from
(53) α=(u.sub.L−w.sub.L)/(w.sub.L−b.sub.L) (51)
where the subscript L refers to the lightness component. In other words, the line is defined as that which connects the input color and a point on the achromatic axis which has the same lightness. The direction of this line is given by Equation (6) above and the equation of the line can be written by Equation (7) above. The expression of a point within a triangle on the convex hull, the barycentric coordinates of such a point and the conditions for the projection line to intercept a particular triangle have already been discussed with reference to Equations (9)-(14) above.
(54) For reasons already discussed, it is desirable to avoid working with Equation (13) above, since this requires a division operation. Also as already mentioned, u is out of gamut if any one of the k triangles has t′.sub.k<0, and, further, since t′.sub.k<0 for triangles where u might be out of gamut, L.sub.k must always be less than zero to allow 0<t.sub.k<1 as required by condition (10). Where this condition holds, there is one, and only one, triangle for which the barycentric conditions hold. Therefore, for k such that t′.sub.k<0 we must have
0>p′.sub.k≥L.sub.k, 0>q.sub.k′≥L.sub.k, 0>p′.sub.k+q.sub.k′≥L.sub.k (52)
and
p.sub.k=−d.Math.p.sub.k′
q.sub.k=d.Math.q.sub.k′ (53)
which significantly reduces the decision logic compared to previous methods because the number of candidate triangles for which t′.sub.k<0 is small.
(55) In summary, then, an optimized method finds the k triangles where t′.sub.k<0 using Equation (5A), and only these triangles need to be tested further for intersection by Equation (52). For the triangle where Equation (52) holds, we calculate the new projected color u′ by Equation (15), where
(56) t.sub.j=t′.sub.j/L.sub.j (54)
which is a simple scalar division. Further, only the largest barycentric weight, max(α.sub.u) is of interest, from Equation (16):
max(α.sub.u)=min([L.sub.j−d.Math.p′.sub.j−d.Math.q′.sub.j,d.Math.p′.sub.j,d.Math.q′.sub.j]) (55)
and use this to select the vertex of the triangle j corresponding to the color to be output.
(57) If all t′.sub.k>0, then u is in-gamut, and above it was proposed to use a “nearest-neighbor” method to compute the primary output color. However, if the display has N primaries, the nearest neighbor method requires N computations of a Euclidean distance, which becomes a computational bottleneck.
(58) This bottleneck can be alleviated, if not eliminated, by precomputing a binary space partition for each of the blooming-modified primary spaces PP, and then using a binary tree structure to determine the nearest primary to u in PP. Although this requires some upfront effort and data storage, it reduces the nearest-neighbor computation time from O(N) to O(log N).
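A minimal sketch of the binary-space-partition idea, using a simple kd-tree over the (blooming-adjusted) primaries; the implementation below is a generic textbook kd-tree, not the patent's specific data structure:

```python
def dist2(a, b):
    # Squared Euclidean distance in the color space.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_kdtree(points, depth=0):
    # Offline step: build a binary space partition over the primaries,
    # splitting on alternating color axes at the median point.
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def nearest(node, u, best=None):
    # Run-time query: O(log N) on average versus the O(N) linear scan.
    if node is None:
        return best
    point, axis, left, right = node
    if best is None or dist2(u, point) < dist2(u, best):
        best = point
    near, far = (left, right) if u[axis] < point[axis] else (right, left)
    best = nearest(near, u, best)
    # The far subtree can only help if the splitting plane is closer
    # than the best candidate found so far.
    if (u[axis] - point[axis]) ** 2 < dist2(u, best):
        best = nearest(far, u, best)
    return best
```

The tree is built once offline; each query then prunes whole subtrees whose splitting plane lies farther away than the current best candidate.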
(59) Thus, a highly efficient, hardware-friendly dithering method can be summarized (using the same nomenclature as previously) as: 1. Determine (offline) the convex hull of the device color gamut and the corresponding edges and normal vectors of the triangles comprising the convex hull; 2. Find the k triangles for which t′.sub.k<0, per Equation (5A); if any t′.sub.k<0, u is outside the convex hull, so: a. For those k triangles, find the one triangle j which satisfies Equation (52); b. For triangle j, compute the projected color u′ and the associated barycentric weights from Equations (15), (54) and (55) and choose as the dithered color the vertex corresponding to the maximum barycentric weight; 3. For a color (EMIC) inside the convex hull (all t′.sub.k>0), determine the “nearest” primary color, where “nearest” is calculated using a binary tree structure against a pre-computed binary space partition of the primaries.
(60) BNTB Method
(61) As already mentioned, the BNTB method differs from the TB method described above by applying threshold modulation to the choice of dithering colors for EMIC outside the convex hull, while leaving the choice of dithering colors for EMIC inside the convex hull unchanged.
(62) A preferred form of the BNTB method is a modification of the four-step preferred TB method described above; in the BNTB modification, Step 3c is replaced by Steps 3c and 3d as follows: c. For triangle j, compute the projected color u′ and the associated barycentric weights from Equations (15) and (16); and d. Compare the barycentric weights thus calculated with the values of a blue-noise mask at the pixel location, and choose as the dithered color the first vertex at which the cumulative sum of the barycentric weights exceeds the mask value.
(63) As is well known to those skilled in the imaging art, threshold modulation is simply a method of varying the choice of dithering color by applying a spatially-varying randomization to the color selection method. To reduce or prevent grain in the processed image, it is desirable to apply noise with preferentially shaped spectral characteristics, as for example in the blue-noise dither mask T.sub.mn shown in
m=mod(x−1,M)+1
n=mod(y−1,M)+1 (19)
so that the dither mask is effectively tiled across the image.
(64) The threshold modulation exploits the fact that barycentric coordinates and probability density functions, such as a blue-noise function, both sum to unity. Accordingly, threshold modulation using a blue-noise mask may be effected by comparing the cumulative sum of the barycentric coordinates with the value of the blue-noise mask at a given pixel value to determine the triangle vertex and thus the dithered color.
(65) As noted above, the barycentric weights corresponding to the triangle vertices are given by:
α.sub.u=[1−p.sub.j−q.sub.j,p.sub.j,q.sub.j] (16)
so that the cumulative sum, denoted “CDF”, of these barycentric weights is given by:
CDF=[1−p.sub.j−q.sub.j,1−q.sub.j,1] (20)
and the vertex v, and corresponding dithered color, for which the CDF first exceeds the mask value at the relevant pixel, is given by:
v={v;CDF(v)≥T.sub.mn} (21)
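Equations (19)-(21) can be sketched together: tile the mask across the image, then walk the cumulative sum of the barycentric weights until it meets the mask value. The mask contents used in the test are hypothetical:

```python
def mask_value(mask, x, y):
    # Tile an M x M blue-noise mask across the image per Eq. (19);
    # x and y are 1-based pixel coordinates as in the text, so the
    # 1-based mod arithmetic reduces to plain modular indexing.
    M = len(mask)
    return mask[(x - 1) % M][(y - 1) % M]

def pick_vertex(weights, threshold):
    # Eqs. (20)-(21): return the first triangle vertex index at which
    # the cumulative sum of the barycentric weights meets the mask value.
    cdf = 0.0
    for v, w in enumerate(weights):
        cdf += w
        if cdf >= threshold:
            return v
    return len(weights) - 1  # guard against floating-point round-off
```

Because barycentric weights, like a probability density, sum to unity, the cumulative sum always reaches any mask value in [0, 1].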
(66) It is desirable that the BNTB method of the present invention be capable of being implemented efficiently on standalone hardware such as a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC), and for this purpose it is important to minimize the number of division operations required in the dithering calculations. For this purpose, Equation (16) above may be rewritten:
(67) α.sub.u=[1−(d.Math.p′.sub.j+d.Math.q′.sub.j)/L.sub.j,(d.Math.p′.sub.j)/L.sub.j,(d.Math.q′.sub.j)/L.sub.j] (22)
and Equation (20) may be rewritten:
(68) CDF=[1−(d.Math.p′.sub.j+d.Math.q′.sub.j)/L.sub.j,1−(d.Math.q′.sub.j)/L.sub.j,1] (23)
or, to eliminate the division by L.sub.j:
CDF′=[L.sub.j−d.Math.p′.sub.j−d.Math.q′.sub.j,L.sub.j−d.Math.q′.sub.j,L.sub.j] (24)
Equation (21) for selecting the vertex v, and the corresponding dithered color, at which the CDF first exceeds the mask value at the relevant pixel, becomes:
v={v;CDF′(v)≥T.sub.mnL.sub.j} (25)
Use of Equation (25) is only slightly complicated by the fact that both CDF′ and L.sub.j are now signed numbers. To allow for this complication, and for the fact that Equation (25) only requires two comparisons (since the last element of the CDF is unity, if the first two comparisons fail, the third vertex of the triangle must be chosen), Equation (25) can be implemented in a hardware-friendly manner using the following pseudo-code:
(69)
v = 1
for i = 1 to 2
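A sketch of the selection logic of Equations (24)-(25) in the same spirit as the pseudo-code above; note that because L.sub.j is negative for out-of-gamut triangles, multiplying through by L.sub.j flips the comparison direction relative to Equation (21), which the code below handles explicitly:

```python
def pick_vertex_divfree(Lj, dpj, dqj, T):
    # Division-free vertex selection (a sketch): dpj and dqj stand for
    # the dot products d.p'_j and d.q'_j, and T is the blue-noise mask
    # value.  CDF' = L_j * CDF per Eq. (24), and since L_j < 0 the test
    # CDF(v) >= T becomes CDF'(v) <= T * L_j.  Only the first two
    # elements need testing; if both fail, vertex 3 is chosen.
    cdf_scaled = (Lj - dpj - dqj, Lj - dqj)  # first two entries of Eq. (24)
    thresh = T * Lj
    for v in range(2):
        if cdf_scaled[v] <= thresh:
            return v
    return 2
```

With L.sub.j=−2, d.Math.p′.sub.j=−0.6 and d.Math.q′.sub.j=−0.4, the barycentric weights are (0.5, 0.3, 0.2), so the selections below match the division-based Equations (20)-(21).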
(70) The improvement in image quality which can be effected using the method of the present invention may readily be seen by comparison of
(71) From the foregoing, it will be seen that the BNTB method provides better dithered image quality than the TB method, and can readily be effected on an FPGA, ASIC or other fixed-point hardware platform.
(72) NNGBC Method
(73) As already noted, the NNGBC method quantizes the projected color used for EMIC outside the convex hull by a nearest neighbor approach using gamut boundary colors only, while quantizing EMIC inside the convex hull by a nearest neighbor approach using all the available primaries.
(74) A preferred form of the NNGBC method can be described as a modification of the four-step TB method set out above. Step 1 is modified as follows: 1. Determine (offline) the convex hull of the device color gamut and the corresponding edges and normal vectors of the triangles comprising the convex hull. Also offline, of the N primary colors, find the M boundary colors P.sub.b, that is to say the primary colors that lie on the boundary of the convex hull (note that M<N); and Step 3c is replaced by: c. For triangle j, compute the projected color u′, and determine the “nearest” primary color from the M boundary colors P.sub.b, where “nearest” is calculated as a Euclidean distance in the color space, and use the nearest boundary color as the dithered color.
(75) The preferred form of the method of the present invention follows very closely the preferred four-step TB method described above, except that the barycentric weights do not need to be calculated using Equation (16). Instead, the dithered color v is chosen as the boundary color in the set P.sub.b that minimizes the Euclidean norm with u′, that is:
v=arg min.sub.v{∥u′−P.sub.b(v)∥} (26)
Since the number of boundary colors M is usually much smaller than the total number of primaries N, the calculations required by Equation (26) are relatively fast.
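Equation (26) amounts to a brute-force nearest-neighbor search over the M boundary colors. A minimal sketch, with hypothetical names:

```python
def nearest_boundary_color(u_prime, boundary_colors):
    """Equation (26): return the index of the boundary primary P_b that
    minimizes the Euclidean distance to the projected color u'.

    u_prime: 3-tuple in the linear color space;
    boundary_colors: list of 3-tuples (the M boundary primaries).
    """
    best, best_d2 = 0, float("inf")
    for v, p in enumerate(boundary_colors):
        # squared distance avoids the square root (and any division),
        # which suits fixed-point FPGA/ASIC implementations
        d2 = sum((a - b) ** 2 for a, b in zip(u_prime, p))
        if d2 < best_d2:
            best, best_d2 = v, d2
    return best
```

Since M is small, the linear scan is cheap; a k-d tree or similar structure is unnecessary here.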
(76) As with the TB and BNTB methods of the present invention, it is desirable that the NNGBC method be capable of being implemented efficiently on standalone hardware such as a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC), and for this purpose it is important to minimize the number of division operations required in the dithering calculations. For this purpose, Equation (16) above may be rewritten in the form of Equation (22) as already described, and Equation (26) may be treated in a similar manner.
(78) The improvement in image quality which can be effected using the method of the present invention may readily be seen by comparison of the accompanying figures.
(79) From the foregoing, it will be seen that the NNGBC method provides a dithering method for color displays which in general provides better dithered image quality than the TB method and can readily be effected on an FPGA, ASIC or other fixed-point hardware platform.
(80) DPH Method
(81) As already mentioned, the present invention provides a defective pixel hiding or DPH variant of the rendering methods already described, which further comprises: (i) identifying pixels of the display which fail to switch correctly, and the colors presented by such defective pixels; (ii) in the case of each defective pixel, outputting from step e the color actually presented by the defective pixel (or at least some approximation to this color); and (iii) in the case of each defective pixel, in step f calculating the difference between the modified or projected modified input value and the color actually presented by the defective pixel (or at least some approximation to this color).
References to “some approximation to this color” refer to the possibility that the color actually presented by the defective pixel may be considerably outside the color gamut of the display and may hence render the error diffusion method unstable. In such a case, it may be desirable to approximate the actual color of the defective pixel by one of the projection methods previously discussed.
(82) Since spatial dithering methods such as those of the present invention seek to deliver the impression of an average color given a set of discrete primaries, deviations of a pixel from its expected color can be compensated by appropriate modification of its neighbors. Taking this argument to its logical conclusion, it is clear that defective pixels (such as pixels stuck in a particular color) can also be compensated by the dithering method in a very straightforward manner. Hence, rather than set the output color associated with the pixel to the color determined by the dithering method, the output color is set to the actual color of the defective pixel so that the dithering method automatically accounts for the defect at that pixel by propagating the resultant error to the neighboring pixels. This variant of the dithering method can be coupled with an optical measurement to comprise a complete defective pixel measurement and repair process, which may be summarized as follows.
(83) First, optically inspect the display for defects; this may be as simple as taking a high-resolution photograph with some registration marks, and from the optical measurement, determine the location and color of the defective pixels. Pixels stuck in white or black colors may be located simply by inspecting the display when set to solid black and white respectively. More generally, however, one could measure each pixel when the display is set to solid white and solid black and determine the difference for each pixel. Any pixel for which this difference is below some predetermined threshold can be regarded as “stuck” and defective. To locate pixels in which one pixel is “locked” to the state of one of its neighbors, set the display to a pattern of one-pixel wide lines of black and white (using two separate images with the lines running along the row and columns respectively) and look for error in the line pattern.
(84) Next, build a lookup table of the defective pixels and their colors, and transfer this LUT to the dithering engine; for present purposes, it makes no difference whether the dithering method is performed in software or hardware. The dithering engine performs gamut mapping and dithering in the standard way, except that output colors corresponding to the locations of the defective pixels are forced to their defective colors. The dithering algorithm then automatically, and by definition, compensates for their presence.
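The defect-hiding idea can be sketched as follows. For brevity the sketch grafts the forced-output rule onto a generic single-channel Floyd-Steinberg error diffusion rather than the TB/BNTB/NNGBC schemes of the patent; all names are illustrative:

```python
def dither_with_defect_hiding(image, palette, defects):
    """Error diffusion in which the output at a defective pixel is forced
    to the color that pixel actually shows; the resulting (possibly large)
    error is diffused to its neighbors exactly as for a normal pixel.

    image: h x w list of floats (one channel for brevity);
    palette: list of achievable levels;
    defects: dict mapping (row, col) -> the level the stuck pixel shows.
    """
    h, w = len(image), len(image[0])
    buf = [row[:] for row in image]          # working copy accumulates error
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            target = buf[y][x]
            if (y, x) in defects:
                chosen = defects[(y, x)]     # forced to the defective color
            else:
                chosen = min(palette, key=lambda p: abs(p - target))
            out[y][x] = chosen
            err = target - chosen            # error includes the defect
            # Floyd-Steinberg weights: right, below-left, below, below-right
            for dy, dx, wgt in ((0, 1, 7/16), (1, -1, 3/16),
                                (1, 0, 5/16), (1, 1, 1/16)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    buf[ny][nx] += err * wgt
    return out
```

The only change relative to ordinary error diffusion is the lookup in `defects`; the error bookkeeping is untouched, which is why the compensation is automatic.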
(86) GD Method
(87) As already mentioned, the present invention provides a gamut delineation method for estimating an achievable gamut comprising five steps, namely: (1) measuring test patterns to derive information about cross-talk among adjacent primaries; (2) converting the measurements from step (1) to a blooming model that predicts the displayed color of arbitrary patterns of primaries; (3) using the blooming model derived in step (2) to predict actual display colors of patterns that would normally be used to produce colors on the convex hull of the primaries (i.e. the nominal gamut surface); (4) describing the realizable gamut surface using the predictions made in step (3); (5) using the realizable gamut surface model derived in step (4) in the gamut mapping stage of a color rendering process which maps input (source) colors to device colors.
(88) Steps (1) and (2) of this method may follow the process described above in connection with the basic color rendering method of the present invention. Specifically, for N primaries, "N choose 2" checkerboard patterns are displayed and measured. The difference between the nominal value expected from ideal color mixing laws and the actual measured value is ascribed to the edge interactions. This error is considered to be a linear function of edge density. By this means, the color of any pixel patch of primaries can be predicted by integrating these effects over all edges in the pattern.
(89) Step (3) of the method considers dither patterns one may expect on the gamut surface and computes the actual color predicted by the model. Generally speaking, a gamut surface is composed of triangular facets where the vertices are colors of the primaries in a linear color space. If there were no blooming, these colors in each of these triangles could then be reproduced by an appropriate fraction of the three associated vertex primaries. However, there are many patterns that can be made that have such a correct fraction of primaries, but which pattern is used is critical for the blooming model since primary adjacency types need to be enumerated. To understand this, consider these two extreme cases of using 50% of P1 and 50% of P2. At one extreme a checkerboard pattern of P1 and P2 can be used, in which case the P1|P2 edge density is maximal leading to the most possible deviation from ideal mixing. At another extreme is two very large patches, one of P1 and one of P2, which has a P1|P2 adjacency density that tends towards zero with increasing patch size. This second case will reproduce the nearly correct color even in the presence of blooming but will be visually unacceptable because of the coarseness of the pattern. If the half-toning algorithm used is capable of clustering pixels having the same color, it might be reasonable to choose some compromise between these extremes as the realizable color. However, in practice when using error diffusion this type of clustering leads to bad wormy artifacts, and furthermore the resolution of most limited palette displays, especially color electrophoretic displays, is such that clustering becomes obvious and distracting. Accordingly, it is generally desirable to use the most dispersed pattern possible even if that means eliminating some colors that could be obtained via clustering. Improvements in display technology and half-toning algorithms may eventually render less conservative pattern models useful.
(90) In one embodiment, let P.sub.1, P.sub.2, P.sub.3 be the colors of three primaries that define a triangular facet on the surface of the gamut. Any color on this facet can be represented by the linear combination
α.sub.1P.sub.1+α.sub.2P.sub.2+α.sub.3P.sub.3 where α.sub.1+α.sub.2+α.sub.3=1.
(91) Now let Δ.sub.1,2, Δ.sub.1,3, Δ.sub.2,3 be the model for the color deviation due to blooming if all primary adjacencies in the pattern are of the numbered type, i.e. a checkerboard pattern of P.sub.1, P.sub.2 pixels is predicted to have the color
C=½P.sub.1+½P.sub.2+Δ.sub.1,2
Without loss of generality, assume
α.sub.1≥α.sub.2≥α.sub.3
which defines a sub-triangle on the facet with corners
(1,0,0),(½,½,0),(⅓,⅓,⅓)
For maximally dispersed pixel populations of the primaries we can evaluate the predicted color at each of those corners to be
P.sub.1
½P.sub.1+½P.sub.2+Δ.sub.1,2
⅓(P.sub.1+P.sub.2+P.sub.3+Δ.sub.1,2+Δ.sub.1,3+Δ.sub.2,3)
By assuming our patterns can be designed to alter the edge density linearly between these corners, we now have a model for a sub-facet of the gamut boundary. Since there are 6 ways of ordering α.sub.1, α.sub.2, α.sub.3, there are six such sub-facets that replace each facet of the nominal gamut boundary description.
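The three corner predictions above can be computed directly. A sketch with hypothetical names, assuming all primary colors and blooming deviations are 3-vectors in a linear color space:

```python
def subfacet_corners(p1, p2, p3, d12, d13, d23):
    """Predicted colors at the three corners of the sub-facet with
    a1 >= a2 >= a3, for maximally dispersed patterns: the pure primary,
    the 50/50 checkerboard (with its blooming term), and the
    1/3-1/3-1/3 mixture (with all three pairwise blooming terms).
    """
    # corner (1, 0, 0): pure primary, no edges of mixed type
    c1 = p1
    # corner (1/2, 1/2, 0): checkerboard of P1 and P2, maximal P1|P2 edges
    c2 = tuple(0.5 * a + 0.5 * b + d for a, b, d in zip(p1, p2, d12))
    # corner (1/3, 1/3, 1/3): all three primaries equally, all edge types
    c3 = tuple((a + b + c + x + y + z) / 3.0
               for a, b, c, x, y, z in zip(p1, p2, p3, d12, d13, d23))
    return c1, c2, c3
```

Interpolating linearly between these corners gives the sub-facet model; the six orderings of the α values generate the six sub-facets per facet.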
(92) It should be appreciated that other approaches may be adopted. For example, a random primary placement model could be used, which is less dispersed than the one mentioned above. In this case the fraction of edges of each type is proportional to their probabilities, i.e. the fraction of P1|P2 edges is given by the product α.sub.1α.sub.2. Since this is nonlinear in the α.sub.i, the new surface representing the gamut boundary would need to be triangulated or passed to subsequent steps as a parameterization.
(93) Another approach, which does not follow the paradigm just delineated, is an empirical approach—to actually use the blooming compensated dithering algorithm (using the model from steps 1,2) to determine which colors should be excluded from the gamut model. This can be accomplished by turning off the stabilization in the dithering algorithm and then trying to dither a constant patch of a single color. If an instability criterion is met (i.e. run-away error terms), then this color is excluded from the gamut. By starting with the nominal gamut, a divide and conquer approach could be used to determine the realizable gamut.
(94) In step (4) of the GD method, each of these sub-facets is represented as a triangle, with the vertices ordered such that the right-hand rule will point the normal vector according to a chosen convention for inside/outside facing. The collection of all these triangles forms a new continuous surface representing the realizable gamut.
(95) In some cases, the model will predict that new colors not in the nominal gamut can be realized by exploiting blooming; however, most effects are negative in the sense of reducing the realizable gamut. For example, the blooming model gamut may exhibit deep concavities, meaning that some colors deep inside the nominal gamut cannot in fact be reproduced on the display, as illustrated for example in the accompanying figures.
(96) TABLE-US-00002 TABLE 1 Vertices in L*a*b* color space Vertex No. L* a* b* 1 22.291 −7.8581 −3.4882 2 24.6135 8.4699 −31.4662 3 27.049 −9.0957 −2.8963 4 30.0691 7.8556 5.3628 5 23.6195 19.5565 −24.541 6 31.4247 −10.4504 −1.8987 7 29.4472 6.0652 −35.5804 8 27.5735 19.3381 −35.7121 9 50.1158 −30.1506 34.1525 10 35.2752 −11.0676 −1.4431 11 35.8001 −14.8328 −16.0211 12 46.8575 −10.8659 22.0569 13 34.0596 13.1111 8.4255 14 33.8706 −2.611 −28.3529 15 39.7442 27.2031 −14.4892 16 41.4924 8.7628 −32.8044 17 35.0507 34.0584 −23.6601 18 48.5173 −11.361 3.1187 19 39.9753 15.7975 16.1817 20 50.218 10.6861 7.9466 21 52.6132 −10.8092 4.8362 22 54.879 22.7288 −15.4245 23 61.7716 −20.2627 45.8727 24 57.1284 −10.2686 7.9435 25 54.7161 −28.9697 32.0898 26 67.6448 −16.0817 55.0921 27 60.4544 −22.4697 40.1991 28 48.5841 −11.9172 −18.778 29 58.6893 −11.4884 −10.7047 30 72.801 −11.3746 68.2747 31 73.8139 −6.8858 21.3934 32 77.8384 −3.0633 4.755 33 24.5385 −2.1532 −14.8931 34 31.1843 −8.6054 −13.5995 35 28.5568 7.5707 −35.4951 36 28.261 −1.065 −22.3647 37 27.7753 −11.4851 −5.3461 38 26.0366 5.0496 −9.9752 39 28.181 11.3641 −11.3759 40 27.3508 2.1064 −8.9636 41 26.0366 5.0496 −9.9752 42 24.5385 −2.1532 −14.8931 43 24.3563 11.1725 −27.3764 44 24.991 4.8394 −17.8547 45 31.1843 −8.6054 −13.5995 46 34.0968 −17.4657 −4.7492 47 33.8863 −7.6695 −26.5748 48 33.0914 −11.2605 −15.7998 49 41.6637 −22.0771 21.0693 50 51.4872 −17.2377 34.7964 51 68.5237 −14.4392 62.7905 52 55.6386 −16.4599 42.5188 53 34.0968 −17.4657 −4.7492 54 41.6637 −22.0771 21.0693 55 61.5571 −16.2463 24.6821 56 47.9334 −17.4314 15.7021 57 51.4872 −17.2377 34.7964 58 27.7753 −11.4851 −5.3461 59 56.1967 −8.2037 34.2338 60 47.4842 −11.7712 25.028 61 24.3563 11.1725 −27.3764 62 28.0951 11.5692 −34.9293 63 25.5771 13.6758 −27.7731 64 26.0674 12.125 −30.2923 65 28.0951 11.5692 −34.9293 66 28.5568 7.5707 −35.4951 67 30.339 12.3612 −36.266 68 29.0178 10.5573 −35.5705 69 30.323 10.437 6.7394 70 28.181 11.3641 −11.3759 71 30.4451 
14.0796 −12.8243 72 29.6732 11.9871 −6.5836 73 33.8423 10.4188 8.9198 74 30.323 10.437 6.7394 75 35.883 14.1544 11.7358 76 33.4556 11.781 9.2613 77 56.1967 −8.2037 34.2338 78 33.8423 10.4188 8.9198 79 59.6655 −5.5683 39.5248 80 51.7599 −3.3654 30.2979 81 30.4451 14.0796 −12.8243 82 27.3573 18.8007 −15.1756 83 33.9073 13.4649 −4.9512 84 30.7233 15.2007 −10.7358 85 27.3573 18.8007 −15.1756 86 25.5771 13.6758 −27.7731 87 33.7489 18.357 −18.113 88 29.171 17.0731 −20.2198 89 30.339 12.3612 −36.266 90 36.4156 7.3908 −35.0008 91 33.9715 12.248 −35.5009 92 33.7003 10.484 −35.4918 93 32.5384 −10.242 −19.3507 94 33.8863 −7.6695 −26.5748 95 35.4459 −13.3151 −12.8828 96 33.9851 −10.4438 −19.7811 97 36.4156 7.3908 −35.0008 98 42.6305 −13.8758 −19.1021 99 52.4137 −10.9691 −15.164 100 44.5431 −6.873 −22.0661 101 42.6305 −13.8758 −19.1021 102 32.5384 −10.242 −19.3507 103 41.1048 −10.6184 −20.3348 104 39.1096 −11.6772 −19.5092 105 33.7489 18.357 −18.113 106 33.9715 12.248 −35.5009 107 50.7411 7.9808 2.7416 108 40.6429 11.7224 −15.4312 109 61.5571 −16.2463 24.6821 110 68.272 −17.4757 23.2992 111 44.324 −16.9442 −14.8592 112 59.3712 −16.6207 13.0583 113 70.187 −15.8627 46.0122 114 71.2057 −14.3755 54.4062 115 66.3232 −19.124 46.5526 116 69.2902 −16.3318 48.9694 117 71.2057 −14.3755 54.4062 118 68.5237 −14.4392 62.7905 119 73.7328 −12.8894 57.8616 120 71.2059 −13.8595 58.0118 121 68.272 −17.4757 23.2992 122 70.187 −15.8627 46.0122 123 56.5793 −20.2568 −1.2576 124 65.4497 −17.491 22.5467 125 35.4459 −13.3151 −12.8828 126 44.324 −16.9442 −14.8592 127 41.1048 −10.6184 −20.3348 128 40.5281 −13.6957 −16.1894 129 35.883 14.1544 11.7358 130 33.9073 13.4649 −4.9512 131 39.4166 14.4644 −3.2296 132 36.5017 14.0353 0.5249 133 35.5893 24.9129 −13.9743 134 38.2881 13.7332 0.4361 135 39.4166 14.4644 −3.2296 136 37.8123 17.5283 −5.669 137 38.2881 13.7332 0.4361 138 48.3592 19.9753 −8.4475 139 44.6063 12.12 0.9232 140 44.0368 15.5418 −2.9731 141 48.3592 19.9753 −8.4475 142 35.5893 24.9129 −13.9743 
143 43.5227 23.2087 −13.3264 144 42.9564 22.2354 −11.5525 145 50.7411 7.9808 2.7416 146 64.0938 0.7047 0.487 147 43.5227 23.2087 −13.3264 148 53.8404 8.6963 −2.5804 149 64.0938 0.7047 0.487 150 69.4971 −4.1119 4.003 151 69.4668 3.5962 −1.2731 152 67.7624 0.0633 1.0628 153 67.976 −4.7811 −2.0047 154 52.4137 −10.9691 −15.164 155 67.7971 −4.4098 −4.287 156 63.3845 −6.1019 −6.3559 157 69.4971 −4.1119 4.003 158 67.976 −4.7811 −2.0047 159 75.3716 −3.1913 3.7853 160 71.0659 −3.9741 2.0049 161 59.6655 −5.5683 39.5248 162 44.6063 12.12 0.9232 163 72.0031 −7.6835 37.1168 164 60.3911 −2.4765 27.772 165 72.0031 −7.6835 37.1168 166 69.4668 3.5962 −1.2731 167 75.33 −10.9118 39.9331 168 72.332 −5.2103 23.481 169 60.94 −23.5693 41.4224 170 66.3232 −19.124 46.5526 171 68.8066 −17.1536 49.0911 172 65.4882 −19.6672 45.8512 173 56.5793 −20.2568 −1.2576 174 74.5326 −10.6115 21.3102 175 67.7971 −4.4098 −4.287 176 66.9582 −10.741 5.7604 177 74.5326 −10.6115 21.3102 178 74.3218 −10.489 25.379 179 75.3716 −3.1913 3.7853 180 74.7443 −8.0307 16.0839 181 74.3218 −10.489 25.379 182 60.94 −23.5693 41.4224 183 74.2638 −10.0199 26.0654 184 70.2931 −13.5922 29.0524 185 68.8066 −17.1536 49.0911 186 74.7543 −10.0079 31.1476 187 74.2638 −10.0199 26.0654 188 72.6896 −12.1441 33.8812 189 74.7543 −10.0079 31.1476 190 73.7328 −12.8894 57.8616 191 75.33 −10.9118 39.9331 192 74.6105 −11.2513 41.7499
(97) TABLE-US-00003 TABLE 1 Triangles forming hull 1 33 36 2 36 33 2 35 36 7 36 35 7 34 36 1 36 34 1 37 40 4 40 37 4 39 40 5 40 39 5 38 40 1 40 38 1 41 44 5 44 41 5 43 44 2 44 43 2 42 44 1 44 42 1 45 48 7 48 45 7 47 48 11 48 47 11 46 48 1 48 46 1 49 52 9 52 49 9 51 52 30 52 51 30 50 52 1 52 50 1 53 56 11 56 53 11 55 56 9 56 55 9 54 56 1 56 54 1 57 60 30 60 57 30 59 60 4 60 59 4 58 60 1 60 58 2 61 64 5 64 61 5 63 64 8 64 63 8 62 64 2 64 62 2 65 68 8 68 65 8 67 68 7 68 67 7 66 68 2 68 66 4 69 72 13 72 69 13 71 72 5 72 71 5 70 72 4 72 70 4 73 76 19 76 73 19 75 76 13 76 75 13 74 76 4 76 74 4 77 80 30 80 77 30 79 80 19 80 79 19 78 80 4 80 78 5 81 84 13 84 81 13 83 84 17 84 83 17 82 84 5 84 82 5 85 88 17 88 85 17 87 88 8 88 87 8 86 88 5 88 86 7 89 92 8 92 89 8 91 92 16 92 91 16 90 92 7 92 90 7 93 96 14 96 93 14 95 96 11 96 95 11 94 96 7 96 94 7 97 100 16 100 97 16 99 100 28 100 99 28 98 100 7 100 98 7 101 104 28 104 101 28 103 104 14 104 103 14 102 104 7 104 102 8 105 108 17 108 105 17 107 108 16 108 107 16 106 108 8 108 106 9 109 112 11 112 109 11 111 112 28 112 111 28 110 112 9 112 110 9 113 116 25 116 113 25 115 116 26 116 115 26 114 116 9 116 114 9 117 120 26 120 117 26 119 120 30 120 119 30 118 120 9 120 118 9 121 124 28 124 121 28 123 124 25 124 123 25 122 124 9 124 122 11 125 128 14 128 125 14 127 128 28 128 127 28 126 128 11 128 126 13 129 132 19 132 129 19 131 132 17 132 131 17 130 132 13 132 130 15 133 136 17 136 133 17 135 136 19 136 135 19 134 136 15 136 134 15 137 140 19 140 137 19 139 140 22 140 139 22 138 140 15 140 138 15 141 144 22 144 141 22 143 144 17 144 143 17 142 144 15 144 142 16 145 148 17 148 145 17 147 148 22 148 147 22 146 148 16 148 146 16 149 152 22 152 149 22 151 152 32 152 151 32 150 152 16 152 150 16 153 156 29 156 153 29 155 156 28 156 155 28 154 156 16 156 154 16 157 160 32 160 157 32 159 160 29 160 159 29 158 160 16 160 158 19 161 164 30 164 161 30 163 164 22 164 163 22 162 164 19 164 162 22 165 168 30 168 165 30 167 168 32 168 167 32 
166 168 22 168 166 25 169 172 27 172 169 27 171 172 26 172 171 26 170 172 25 172 170 25 173 176 28 176 173 28 175 176 29 176 175 29 174 176 25 176 174 25 177 180 29 180 177 29 179 180 32 180 179 32 178 180 25 180 178 25 181 184 32 184 181 32 183 184 27 184 183 27 182 184 25 184 182 26 185 188 27 188 185 27 187 188 32 188 187 32 186 188 26 188 186 26 189 192 32 192 189 32 191 192 30 192 191 30 190 192 26 192 190
(98) This may lead to some quandaries for gamut mapping, as described below. Also, the gamut model produced can be self-intersecting and thus not have simple topological properties. Since the method described above only operates on the gamut boundary, it does not allow for cases where colors inside the nominal gamut (for example an embedded primary) appear outside the modeled gamut boundary, when in fact they are realizable. To solve this problem, it may be necessary to consider all tetrahedra in the gamut and how their sub-tetrahedra are mapped under the blooming model.
(99) In step (5) the realizable gamut surface model generated in step (4) is used in the gamut mapping stage of a color image rendering process. One may follow a standard gamut mapping procedure that is modified in one or more steps to account for the non-convex nature of the gamut boundary.
(100) The GD method is desirably carried out in a three-dimensional color space in which hue (h*), lightness (L*) and chroma (C*) are independent. Since this is not the case for the L*a*b* color space, the (L*, a*, b*) samples derived from the gamut model should be transformed to a hue-linearized color space such as the CIECAM or Munsell space. However, the following discussion will maintain the (L*, a*, b*) nomenclature with
C*=√(a*.sup.2+b*.sup.2) and
h*=arctan(b*/a*).
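A sketch of the chroma and hue-angle computation; atan2 is used in place of a bare arctangent so that hue angles land in the correct quadrant when a* is negative:

```python
import math

def lab_to_lch(L, a, b):
    """Convert (L*, a*, b*) to lightness, chroma C* and hue angle h*
    (degrees in [0, 360)). math.hypot computes sqrt(a*^2 + b*^2)."""
    C = math.hypot(a, b)
    h = math.degrees(math.atan2(b, a)) % 360.0
    return L, C, h
```

The same formulas apply after the samples have been transformed to a hue-linearized space; only the interpretation of the axes changes.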
A gamut delineated as described above may then be used for gamut mapping. In an appropriate color space, source colors may be mapped to destination (device) colors by considering the gamut boundaries corresponding to a given hue angle h*. This can be achieved by computing the intersection of a plane at angle h* with the gamut model as shown in the accompanying figures.
(101) In standard gamut mapping schemes, a source color is mapped to a point on or inside the destination gamut boundary. There are many possible strategies for achieving this mapping, such as projecting along the C* axis or projecting towards a constant point on the L* axis, and it is not necessary to discuss this matter in greater detail here. However, since the boundary of the destination gamut may now be highly irregular (see the accompanying figures), a smoothing operation may be applied to the gamut boundary before mapping.
(102) This smoothing operation may begin by inflating the source gamut boundary. To do this, define a point R on the L* axis, which is taken to be the mean of the L* values of the source gamut. The Euclidean distance D between points on the gamut and R, the normal vector d, and the maximum value of D which we denote D.sub.max, may then be calculated. One can then calculate
(103)
where γ is a constant to control the degree of smoothing; the new C* and L* points corresponding to the inflated gamut boundary are then
C*′=D′d.sub.C* and
L*′=R+D′d.sub.L*,
where d.sub.C* and d.sub.L* are the C* and L* components of the unit vector d.
(104) If we now take the convex hull of the inflated gamut boundary, and then effect a reverse transformation to obtain C* and L*, a smoothed gamut boundary is produced, as illustrated in the accompanying figures.
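The inflation step can be sketched as follows. Note that the inflation formula itself (Equation (103)) is not reproduced in this text, so a power-law form D′ = D.sub.max(D/D.sub.max)^γ, which pushes points outward toward D.sub.max for γ < 1, is assumed here purely for illustration; the convex-hull step that follows it is omitted:

```python
import math

def inflate(points_cl, gamma):
    """Inflate (C*, L*) gamut-boundary points about a center R on the
    L* axis. Returns the inflated points plus (R, D_max) so the reverse
    transformation can be applied after taking the convex hull.

    ASSUMPTION: the inflation law D' = D_max * (D / D_max) ** gamma is a
    guess; Equation (103) in the source is not available.
    """
    R = sum(L for _, L in points_cl) / len(points_cl)   # mean L* value
    dists = [math.hypot(C, L - R) for C, L in points_cl]
    d_max = max(dists)
    inflated = []
    for (C, L), D in zip(points_cl, dists):
        # unit vector d from R toward the boundary point
        dC, dL = (C / D, (L - R) / D) if D > 0 else (0.0, 0.0)
        Dp = d_max * (D / d_max) ** gamma
        inflated.append((Dp * dC, R + Dp * dL))   # C*' = D'dC, L*' = R + D'dL
    return inflated, R, d_max
```

With γ = 1 the transform is the identity; smaller γ flattens concavities so that the subsequent convex hull discards less of the boundary.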
(105) The mapped color may now be calculated by:
a*=C*cos(h*) and
b*=C*sin(h*)
and the (L*, a*, b*) coordinates can if desired be transformed back to the sRGB system.
(106) This gamut mapping process is repeated for all colors in the source gamut, so that one can obtain a one-to-one mapping for source to destination colors. Preferably, one may sample 9×9×9=729 evenly-spaced colors in the sRGB source gamut; this is simply a convenience for hardware implementation.
(107) DHHG Method
(108) A DHHG method according to one embodiment of the present invention is illustrated in the accompanying figures.
(109) 1. Degamma Operation
(110) In a first step of the method, a degamma operation (1) is applied to remove the power-law encoding in the input data associated with the input image (6), so that all subsequent color processing operations apply to linear pixel values. The degamma operation is preferably accomplished by using a 256-element lookup table (LUT) containing 16-bit values, which is addressed by an 8-bit input, typically in the sRGB color space. Alternatively, if the display processor hardware allows, the operation could be performed by using an analytical formula. For example, the analytic definition of the sRGB degamma operation is
(111)
C′=C/12.92 for C≤0.04045, and
C′=((C+a)/(1+a)).sup.2.4 for C>0.04045,
where a=0.055, C corresponds to red, green or blue pixel values and C′ are the corresponding de-gamma pixel values.
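A sketch of the 256-element degamma LUT construction, using the standard sRGB linearization with a = 0.055 as stated in the text; the function name and bit depth parameter are illustrative:

```python
def build_degamma_lut(a=0.055, bits=16):
    """Build a 256-entry LUT mapping 8-bit sRGB codes to linear values
    scaled to the full range of a `bits`-bit integer."""
    scale = (1 << bits) - 1                  # 65535 for 16-bit entries
    lut = []
    for code in range(256):
        c = code / 255.0
        # standard sRGB inverse gamma: linear segment near black,
        # power-law segment elsewhere
        linear = c / 12.92 if c <= 0.04045 else ((c + a) / (1 + a)) ** 2.4
        lut.append(round(linear * scale))
    return lut
```

At run time the degamma step is then a single table lookup per channel, with no per-pixel exponentiation.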
(112) 2. HDR-Type Processing
(113) For color electrophoretic displays having a dithered architecture, dither artifacts at low greyscale values are often visible. This may be exacerbated upon application of a degamma operation, because the input RGB pixel values are effectively raised to an exponent of greater than unity by the degamma step. This has the effect of shifting pixel values to lower values, where dither artifacts become more visible.
(114) To reduce the impact of these artifacts, it is preferable to employ tone-correction methods that act, either locally or globally, to increase the pixel values in dark areas. Such methods are well known to those of skill in the art in high-dynamic range (HDR) processing architectures, in which images captured or rendered with a very wide dynamic range are subsequently rendered for display on a low dynamic range display. Matching the dynamic range of the content and display is achieved by tone mapping, and often results in brightening of dark parts of the scene in order to prevent loss of detail.
(115) Thus, it is an aspect of the HDR-type processing step (2) to treat the source sRGB content as HDR with respect to the color electrophoretic display so that the chance of objectionable dither artifacts in dark areas is minimized. Further, the types of color enhancement performed by HDR algorithms may provide the added benefit of maximizing color appearance for a color electrophoretic display.
(116) As noted above, HDR rendering algorithms are known to those skilled in the art. The HDR-type processing step (2) in the methods according to the various embodiments of the present invention preferably contains as its constituent parts local tone mapping, chromatic adaptation, and local color enhancement. One example of an HDR rendering algorithm that may be employed as an HDR-type processing step is a variant of iCAM06, which is described in Kuang, Jiangtao et al. “iCAM06: A refined image appearance model for HDR image rendering.” J. Vis. Commun. Image R. 18 (2007): 406-414, the entire contents of which are incorporated herein by reference.
(117) It is typical for HDR-type algorithms to employ some information about the environment, such as scene luminance or viewer adaptation. As illustrated in the accompanying figures, such information may be supplied to the HDR-type processing step as environmental conditions data.
(118) 3. Hue Correction
(119) Because HDR rendering algorithms may employ physical visual models, the algorithms can be prone to modifying the hue of the output image, such that it substantially differs from the hue of the original input image. This can be particularly noticeable in images containing memory colors. To prevent this effect, the methods according to the various embodiments of the present invention may include a hue correction stage (3) to ensure that the output of the HDR-type processing (2) has the same hue angle as the sRGB content of the input image (6). Hue correction algorithms are known to those of skill in the art. One example of a hue correction algorithm that may be employed in the hue correction stage (3) in the various embodiments of the present invention is described by Pouli, Tania et al. "Color Correction for Tone Reproduction", CIC21: Twenty-first Color and Imaging Conference, pages 215-220, November 2013, the entire contents of which are incorporated herein by reference.
(120) 4. Gamut Mapping
(121) Because the color gamut of a color electrophoretic display may be significantly smaller than the sRGB input of the input image (6), a gamut mapping stage (4) is included in the methods according to the various embodiments of the present invention to map the input content into the color space of the display. The gamut mapping stage (4) may comprise a chromatic adaptation model (9) in which a number of nominal primaries (10) are assumed to constitute the gamut or a more complex model (11) involving adjacent pixel interaction ("blooming").
(122) In one embodiment of the present invention, a gamut-mapped image is preferably derived from the sRGB-gamut input by means of a three-dimensional lookup table (3D LUT), such as the process described in Henry Kang, “Computational color technology”, SPIE Press, 2006, the entire contents of which are incorporated herein by reference. Generally, the Gamut mapping stage (4) may be achieved by an offline transformation on discrete samples defined on source and destination gamuts, and the resulting transformed values are used to populate the 3D LUT. In one implementation, a 3D LUT which is 729 RGB elements long and uses a tetrahedral interpolation technique may be employed, such as the following example.
EXAMPLE
(123) To obtain the transformed values for the 3D LUT, an evenly spaced set of sample points (R, G, B) in the source gamut is defined, where each of these (R, G, B) triples corresponds to an equivalent triple, (R′, G′, B′), in the output gamut. To find the relationship between (R, G, B) and (R′, G′, B′) at points other than the sampling points, i.e. “arbitrary points”, interpolation may be employed, preferably tetrahedral interpolation as described in greater detail below.
(124) For example, referring to the accompanying figures.
(125) Interpolation within a subcube can be achieved by a number of methods. In a preferred method according to an embodiment of the present invention tetrahedral interpolation is utilized. Because a cube can be constructed from six tetrahedrons (see the accompanying figures), only the tetrahedron enclosing the point to be interpolated needs to be considered.
(126) The barycentric representation of a three-dimensional point in a tetrahedron with vertices v.sub.1,2,3,4 is found by computing weights α.sub.1,2,3,4/α.sub.0 where
(127)
and |·| is the determinant. Because α.sub.0=1, the barycentric representation is provided by Equation (33)
(128)
Equation (33) provides the weights used to express RGB in terms of the tetrahedron vertices of the input gamut. Thus, the same weights can be used to interpolate between the R′G′B′ values at those vertices. Because the correspondence between the RGB and R′G′B′ vertex values provides the values to populate the 3D LUT, Equation (33) may be converted to Equation (34):
(129)
where LUT(v.sub.1,2,3,4) are the RGB values of the output color space at the sampling vertices used for the input color space.
(130) For hardware implementation, the input and output color spaces are sampled using n.sup.3 vertices, which requires (n−1).sup.3 unit cubes. In a preferred embodiment, n=9 to provide a reasonable compromise between interpolation accuracy and computational complexity. The hardware implementation may proceed according to the following steps:
(131) 1.1 Finding the Subcube
(132) First, the enclosing subcube triple, RGB.sub.0, is found by computing
(133)
RGB.sub.0(i)=└(n−1)×RGB(i)/256┘ (35)
where RGB is the input RGB triple, └·┘ is the floor operator, and 1≤i≤3. The offset within the cube, rgb, is then found from
(134)
rgb(i)=RGB(i)−(256/(n−1))×RGB.sub.0(i) (36)
wherein 0≤RGB.sub.0(i)≤7 and 0≤rgb(i)≤31 when n=9.
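Because 256/(n−1) = 32 when n = 9, finding the subcube reduces to an integer divide (a shift in hardware) and the offset to a subtraction (a mask in hardware). A sketch with hypothetical names:

```python
def find_subcube(rgb8, n=9):
    """Step 1.1: split an 8-bit RGB triple into the enclosing subcube
    index RGB_0 and the offset rgb within that subcube. With n = 9
    samples per axis there are 8 unit cubes of width 32 codes, so each
    RGB_0(i) is in 0..7 and each rgb(i) in 0..31."""
    width = 256 // (n - 1)                               # 32 when n = 9
    # clamp so that code 255 falls in the last cube rather than past it
    cube = tuple(min(c // width, n - 2) for c in rgb8)
    offset = tuple(c - q * width for c, q in zip(rgb8, cube))
    return cube, offset
```

The clamp handles the top code (255) without needing a ninth cube.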
(135) 1.2 Barycentric Computations
(136) Because the tetrahedron vertices v.sub.1,2,3,4 are known in advance, Equations (28)-(34) may be simplified by computing the determinants explicitly. Only one of six cases needs to be computed:
rgb(1)>rgb(2) and rgb(3)>rgb(1)
α=[32−rgb(3), rgb(3)−rgb(1), rgb(1)−rgb(2), rgb(2)]
v.sub.1=[0 0 0]
v.sub.2=[0 0 1]
v.sub.3=[1 0 1]
v.sub.4=[1 1 1] (37)
rgb(1)>rgb(2) and rgb(3)>rgb(2)
α=[32−rgb(1), rgb(1)−rgb(3), rgb(3)−rgb(2), rgb(2)]
v.sub.1=[0 0 0]
v.sub.2=[1 0 0]
v.sub.3=[1 0 1]
v.sub.4=[1 1 1] (38)
rgb(1)>rgb(2) and rgb(3)<rgb(2)
α=[32−rgb(1), rgb(1)−rgb(2), rgb(2)−rgb(3), rgb(3)]
v.sub.1=[0 0 0]
v.sub.2=[1 0 0]
v.sub.3=[1 1 0]
v.sub.4=[1 1 1] (39)
rgb(1)<rgb(2) and rgb(1)>rgb(3)
α=[32−rgb(2), rgb(2)−rgb(1), rgb(1)−rgb(3), rgb(3)]
v.sub.1=[0 0 0]
v.sub.2=[0 1 0]
v.sub.3=[0 1 1]
v.sub.4=[1 1 1] (40)
rgb(1)<rgb(2) and rgb(3)>rgb(2)
α=[32−rgb(3)rgb(3)−rgb(1)rgb(2)−rgb(1)rgb(1)]
v.sub.1=[0 0 0]
v.sub.2=[0 0 1]
v.sub.3=[0 1 1]
v.sub.4=[1 1 1] (41)
rgb(1)<rgb(2) and rgb(2)>rgb(3)
α=[32−rgb(2)rgb(2)−rgb(3)rgb(3)−rgb(1)rgb(1)]
v.sub.1=[0 0 0]
v.sub.2=[0 1 0]
v.sub.3=[0 1 1]
v.sub.4=[1 1 1] (42)
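A sketch of the six-case selection follows. The branches are nested so that exactly one case fires per pixel (read as independent tests, the six conditions would overlap), and the weights are written so that they are nonnegative and reconstruct the offset exactly — in particular, case (40) uses vertex [1 1 0] and case (41) uses the weight rgb(3)−rgb(2), which is what the reconstruction requires:

```python
def barycentric_case(rgb):
    """Return the weights and tetrahedron vertices of Eqs. (37)-(42).

    rgb is the (r, g, b) offset within the subcube, each in 0..31.
    The four weights sum to 32 and reconstruct the offset as
    sum(alpha[k] * verts[k]).
    """
    r, g, b = int(rgb[0]), int(rgb[1]), int(rgb[2])
    if r > g:
        if b > r:        # Eq. (37): b >= r >= g
            alpha = [32 - b, b - r, r - g, g]
            verts = [(0, 0, 0), (0, 0, 1), (1, 0, 1), (1, 1, 1)]
        elif b > g:      # Eq. (38): r >= b >= g
            alpha = [32 - r, r - b, b - g, g]
            verts = [(0, 0, 0), (1, 0, 0), (1, 0, 1), (1, 1, 1)]
        else:            # Eq. (39): r >= g >= b
            alpha = [32 - r, r - g, g - b, b]
            verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
    else:
        if r > b:        # Eq. (40): g >= r >= b
            alpha = [32 - g, g - r, r - b, b]
            verts = [(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 1, 1)]
        elif b > g:      # Eq. (41): b >= g >= r
            alpha = [32 - b, b - g, g - r, r]
            verts = [(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)]
        else:            # Eq. (42): g >= b >= r
            alpha = [32 - g, g - b, b - r, r]
            verts = [(0, 0, 0), (0, 1, 0), (0, 1, 1), (1, 1, 1)]
    return alpha, verts
```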
(137) 1.3 LUT Indexing
(138) Because the input color space samples are evenly spaced, the corresponding destination color space samples contained in the 3D LUT, LUT(v1,2,3,4), are provided according to Equations (43),
LUT(v.sub.1)=LUT(81×RGB.sub.0(1)+9×RGB.sub.0(2)+RGB.sub.0(3))
LUT(v.sub.2)=LUT(81×(RGB.sub.0(1)+v.sub.2(1))+9×(RGB.sub.0(2)+v.sub.2(2))+(RGB.sub.0(3)+v.sub.2(3)))
LUT(v.sub.3)=LUT(81×(RGB.sub.0(1)+v.sub.3(1))+9×(RGB.sub.0(2)+v.sub.3(2))+(RGB.sub.0(3)+v.sub.3(3)))
LUT(v.sub.4)=LUT(81×(RGB.sub.0(1)+v.sub.4(1))+9×(RGB.sub.0(2)+v.sub.4(2))+(RGB.sub.0(3)+v.sub.4(3))) (43)
(139) 1.4 Interpolation
(140) In a final step, the R′ G′ B′ values may be determined from Equation (17),
(141) R′G′B′=(α(1)·LUT(v.sub.1)+α(2)·LUT(v.sub.2)+α(3)·LUT(v.sub.3)+α(4)·LUT(v.sub.4))/32
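Putting steps 1.1 through 1.4 together, the whole per-pixel lookup can be sketched as plain Python (rather than hardware). The final division by 32 is an assumption consistent with the weights summing to 32; the ordering-based tetrahedron choice picks the same tetrahedron as the six explicit cases of Eqs. (37)-(42):

```python
def tetra_interp(rgb_in, lut):
    """Tetrahedral interpolation through a 9x9x9 3D LUT.

    `lut` is a flat list of 729 output triples, addressed by the index
    81*R0 + 9*G0 + B0 of Eq. (43).
    """
    rgb_in = [int(c) for c in rgb_in]
    rgb0 = [c // 32 for c in rgb_in]                  # 1.1 enclosing subcube
    rgb = [c - 32 * q for c, q in zip(rgb_in, rgb0)]  # 1.1 offset within cube
    # 1.2: channels sorted by offset, largest first, select the tetrahedron
    order = sorted(range(3), key=lambda i: -rgb[i])
    hi, mid, lo = (rgb[i] for i in order)
    alpha = [32 - hi, hi - mid, mid - lo, lo]         # barycentric weights
    verts = [[0, 0, 0], [0, 0, 0], [0, 0, 0], [1, 1, 1]]
    verts[1][order[0]] = 1
    verts[2][order[0]] = 1
    verts[2][order[1]] = 1
    out = [0, 0, 0]
    for a, v in zip(alpha, verts):
        idx = (81 * (rgb0[0] + v[0])                  # 1.3 LUT index, Eq. (43)
               + 9 * (rgb0[1] + v[1])
               + (rgb0[2] + v[2]))
        for c in range(3):
            out[c] += a * lut[idx][c]                 # 1.4 weighted sum
    return [o // 32 for o in out]                     # weights sum to 32
```

With an identity LUT (vertex (i, j, k) storing (32i, 32j, 32k)) the interpolation reproduces its input exactly, which is a convenient sanity check.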
(142) As noted above, a chromatic adaptation step (9) may also be incorporated into the processing pipeline to correct for display of white levels in the output image. The white point provided by the white pigment of a color electrophoretic display may be significantly different from the white point assumed in the color space of the input image. To address this difference, the display may either maintain the input color space white point, in which case the white state is dithered, or shift the color space white point to that of the white pigment. The latter operation is achieved by chromatic adaptation, and may substantially reduce dither noise in the white state at the expense of a white point shift.
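A minimal sketch of such a chromatic adaptation step, using the well-known Bradford cone-response matrix with von Kries scaling. The source and destination white points passed in below are illustrative; the patent does not specify a particular transform, so this is one common choice, not the author's method:

```python
# Bradford cone-response matrix (XYZ -> cone space)
BRADFORD = [[0.8951, 0.2664, -0.1614],
            [-0.7502, 1.7135, 0.0367],
            [0.0389, -0.0685, 1.0296]]

def _mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def _inv3(m):
    # cofactor (adjugate) inverse of a 3x3 matrix
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

def adapt(xyz, src_white, dst_white):
    """von Kries-style chromatic adaptation in Bradford cone space.

    Scales each cone response by the ratio of the destination and source
    white-point cone responses, so the source white maps exactly to the
    destination (e.g. pigment) white.
    """
    cone = _mat_vec(BRADFORD, xyz)
    cs = _mat_vec(BRADFORD, src_white)
    cd = _mat_vec(BRADFORD, dst_white)
    scaled = [cone[k] * cd[k] / cs[k] for k in range(3)]
    return _mat_vec(_inv3(BRADFORD), scaled)
```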
(143) The Gamut mapping stage (4) may also be parameterized by the environmental conditions in which the display is used. The CIECAM color space, for example, contains parameters to account for both display and ambient brightness and degree of adaptation. Therefore, in one implementation, the Gamut mapping stage (4) may be controlled by environmental conditions data (8) from an external sensor.
(144) 5. Spatial Dither
(145) The final stage in the processing pipeline for the production of the output image data (12) is a spatial dither (5). Any of a number of spatial dithering algorithms known to those of skill in the art may be employed as the spatial dither stage (5), including, but not limited to, those described above. When a dithered image is viewed at a sufficient distance, the individual colored pixels are merged by the human visual system into perceived uniform colors. Because of the trade-off between color depth and spatial resolution, dithered images, when viewed closely, have a characteristic graininess as compared to images in which the color palette available at each pixel location has the same depth as that required to render images on the display as a whole. However, dithering reduces the presence of color banding, which is often more objectionable than graininess, especially when viewed at a distance.
(146) Algorithms for assigning particular colors to particular pixels have been developed in order to avoid unpleasant patterns and textures in images rendered by dithering. Such algorithms may involve error diffusion, a technique in which error resulting from the difference between the color required at a certain pixel and the closest color in the per-pixel palette (i.e., the quantization residual) is distributed to neighboring pixels that have not yet been processed. European Patent No. 0677950 describes such techniques in detail, while U.S. Pat. No. 5,880,857 describes a metric for comparison of dithering techniques. U.S. Pat. No. 5,880,857 is incorporated herein by reference in its entirety.
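Error diffusion of the kind just described can be sketched with the classic Floyd-Steinberg weights (7/16, 3/16, 5/16, 1/16) — one well-known choice, not necessarily the technique of the cited patents. The palette and image format are illustrative, and the per-pixel palette modification of the present invention is deliberately omitted:

```python
def error_diffuse(image, palette):
    """Floyd-Steinberg error diffusion onto a fixed palette.

    `image` is a list of rows of RGB triples; `palette` is a list of
    available RGB primaries. The quantization residual at each pixel is
    distributed to neighbours that have not yet been processed.
    """
    h, w = len(image), len(image[0])
    # work on a float copy so accumulated error can leave the gamut
    img = [[list(map(float, px)) for px in row] for row in image]
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            # nearest primary by squared Euclidean distance
            new = min(palette,
                      key=lambda p: sum((o - c) ** 2 for o, c in zip(old, p)))
            out[y][x] = new
            err = [o - c for o, c in zip(old, new)]
            # classic Floyd-Steinberg weights for the four neighbours
            for dx, dy, wgt in ((1, 0, 7/16), (-1, 1, 3/16),
                                (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    for c in range(3):
                        img[ny][nx][c] += err[c] * wgt
    return out
```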
(147) From the foregoing, it will be seen that DHHG method of the present invention differs from previous image rendering methods for color electrophoretic displays in at least two respects. Firstly, rendering methods according to the various embodiments of the present invention treat the image input data content as if it were a high dynamic range signal with respect to the narrow-gamut, low dynamic range nature of the color electrophoretic display so that a very wide range of content can be rendered without deleterious artifacts. Secondly, the rendering methods according to the various embodiments of the present invention provide alternate methods for adjusting the image output based on external environmental conditions as monitored by proximity or luminance sensors. This provides enhanced usability benefits—for example, the image processing is modified to account for the display being near/far to the viewer's face or the ambient conditions being dark or bright.
(148) Remote Image Rendering System
(149) As already mentioned, this invention provides an image rendering system including an electro-optic display (which may be an electrophoretic display, especially an electronic paper display) and a remote processor connected via a network. The display includes an environmental condition sensor, and is configured to provide environmental condition information to the remote processor via the network. The remote processor is configured to receive image data, receive environmental condition information from the display via the network, render the image data for display on the display under the reported environmental condition, thereby creating rendered image data, and transmit the rendered image data. In some embodiments, the image rendering system includes a layer of electrophoretic display material disposed between first and second electrodes, at least one of which is light transmissive. The electrophoretic display medium typically includes charged pigment particles that move when an electric potential is applied between the electrodes. Often, the charged pigment particles comprise more than one color, for example, white, cyan, magenta, and yellow charged pigments. When four sets of charged particles are present, the first and third sets of particles may have a first charge polarity, and the second and fourth sets may have a second charge polarity. Furthermore, the first and third sets may have different charge magnitudes, while the second and fourth sets have different charge magnitudes.
(150) The invention is not limited to four particle electrophoretic displays, however. For example, the display may comprise a color filter array. The color filter array may be paired with a number of different media, for example, electrophoretic media, electrochromic media, reflective liquid crystals, or colored liquids, e.g., an electrowetting device. In some embodiments, an electrowetting device may not include a color filter array, but may include pixels of colored electrowetting liquids.
(151) In some embodiments, the environmental condition sensor senses a parameter selected from temperature, humidity, incident light intensity, and incident light spectrum. In some embodiments, the display is configured to receive the rendered image data transmitted by the remote processor and update the image on the display. In some embodiments, the rendered image data is received by a local host and then transmitted from the local host to the display. Sometimes, the rendered image data is transmitted from the local host to the electronic paper display wirelessly. Optionally, the local host additionally receives environmental condition information from the display wirelessly. In some instances, the local host additionally transmits the environmental condition information from the display to the remote processor. Typically, the remote processor is a server computer connected to the internet. In some embodiments, the image rendering system also includes a docking station configured to receive the rendered image data transmitted by the remote processor and update the image on the display when the display and the docking station are in contact.
(152) It should be noted that the changes in the rendering of the image dependent upon an environmental temperature parameter may include a change in the number of primaries with which the image is rendered. Blooming is a complicated function of the electrical permeability of various materials present in an electro-optic medium, the viscosity of the fluid (in the case of electrophoretic media), and other temperature-dependent properties, so, not surprisingly, blooming itself is strongly temperature dependent. It has been found empirically that color electrophoretic displays can operate effectively only within limited temperature ranges (typically of the order of 50° C.) and that blooming can vary significantly over much smaller temperature intervals.
(153) It is well known to those skilled in electro-optic display technology that blooming can give rise to a change in the achievable display gamut because, at some spatially intermediate point between adjacent pixels using different dithered primaries, blooming can produce a color which deviates significantly from the expected average of the two. In production, this non-ideality can be handled by defining different display gamuts for different temperature ranges, each gamut accounting for the blooming strength in that temperature range. As the temperature changes and a new temperature range is entered, the rendering process should automatically re-render the image to account for the change in display gamut.
(154) As operating temperature increases, the contribution from blooming may become so severe that it is not possible to maintain adequate display performance using the same number of primaries as at lower temperatures. Accordingly, the rendering methods and apparatus of the present invention may be arranged so that, as the sensed temperature varies, not only the display gamut but also the number of primaries is varied. At room temperature, for example, the methods may render an image using 32 primaries because the blooming contribution is manageable; at higher temperatures, for example, it may only be possible to use 16 primaries.
(155) In practice, a rendering system of the present invention can be provided with a number of differing pre-computed 3D lookup tables (3D LUTs), each corresponding to a nominal display gamut in a given temperature range, and, for each temperature range, with a list of P primaries and a blooming model having P×P entries. As a temperature range threshold is crossed, the rendering engine is notified and the image is re-rendered according to the new gamut and list of primaries. Since the rendering method of the present invention can handle an arbitrary number of primaries and any arbitrary blooming model, the use of multiple lookup tables, lists of primaries, and blooming models depending upon temperature provides an important degree of freedom for optimizing performance on rendering systems of the invention.
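The temperature-banded selection of pre-computed assets might be sketched as follows; the band edges, the asset tuples, and the function name are all illustrative, not taken from the patent:

```python
import bisect

def select_render_assets(temp_c, band_edges, assets_per_band):
    """Pick the pre-computed (LUT, primaries, blooming model) set for a
    temperature band.

    `band_edges` are ascending band thresholds in degrees C, and
    `assets_per_band` has len(band_edges) + 1 entries. Crossing an edge
    changes the selected set, which should trigger a re-render.
    """
    return assets_per_band[bisect.bisect_right(band_edges, temp_c)]
```

A caller would watch the sensed temperature and re-render whenever the returned asset set differs from the one last used.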
(156) Also as already mentioned, the invention provides an image rendering system including an electro-optic display, a local host, and a remote processor, wherein the three components are connected via a network. The local host includes an environmental condition sensor, and is configured to provide environmental condition information to the remote processor via the network. The remote processor is configured to receive image data, receive environmental condition information from the local host via the network, render the image data for display on the display under the reported environmental condition, thereby creating rendered image data, and transmit the rendered image data. In some embodiments, the image rendering system includes a layer of electrophoretic display medium disposed between first and second electrodes, at least one of the electrodes being light transmissive. In some embodiments, the local host may also send the image data to the remote processor.
(157) Also as already mentioned, the invention includes a docking station comprising an interface for coupling with an electro-optic display. The docking station is configured to receive rendered image data via a network and to update an image on the display with the rendered image data. Typically, the docking station includes a power supply for providing a plurality of voltages to an electronic paper display. In some embodiments, the power supply is configured to provide three different magnitudes of positive and of negative voltage in addition to a zero voltage.
(158) Thus, the invention provides a system for rendering image data for presentation on a display. Because the image rendering computations are done remotely (e.g., by a remote processor or server, for example in the cloud), the amount of electronics needed for image presentation is reduced. Accordingly, a display for use in the system needs only the imaging medium, a backplane including pixels, a front plane, a small amount of cache, some power storage, and a network connection. In some instances, the display may interface through a physical connection, e.g., via a docking station or dongle. The remote processor will receive information about the environment of the electronic paper, for example, temperature. The environmental information is then input into a pipeline to produce a primary set for the display. Images received by the remote processor are then rendered for optimum viewing, i.e., turned into rendered image data. The rendered image data are then sent to the display to create the image thereon.
(159) In a preferred embodiment, the imaging medium will be a colored electrophoretic display of the type described in U.S. Patent Publication Nos. 2016/0085132 and 2016/0091770, which describe a four particle system, typically comprising white, yellow, cyan, and magenta pigments. Each pigment has a unique combination of charge polarity and magnitude, for example +high, +low, −low, and −high. As shown in
(160) More specifically, when the cyan, magenta and yellow particles lie below the white particles (Situation [A] in
(161) It is possible that one subtractive primary color could be rendered by a particle that scatters light, so that the display would comprise two types of light-scattering particle, one of which would be white and another colored. In this case, however, the position of the light-scattering colored particle with respect to the other colored particles overlying the white particle would be important. For example, in rendering the color black (when all three colored particles lie over the white particles) the scattering colored particle cannot lie over the non-scattering colored particles (otherwise they will be partially or completely hidden behind the scattering particle and the color rendered will be that of the scattering colored particle, not black).
(162)
(163) Methods for electrophoretically arranging a plurality of different colored particles in “layers” as shown in
(164) A second phenomenon that may be employed to control the motion of a plurality of particles is hetero-aggregation between different pigment types; see, for example, US 2014/0092465. Such aggregation may be charge-mediated (Coulombic) or may arise as a result of, for example, hydrogen bonding or van der Waals interactions. The strength of the interaction may be influenced by choice of surface treatment of the pigment particles. For example, Coulombic interactions may be weakened when the closest distance of approach of oppositely-charged particles is maximized by a steric barrier (typically a polymer grafted or adsorbed to the surface of one or both particles). In media used in the systems of the present invention, such polymeric barriers are used on the first and second types of particles, and may or may not be used on the third and fourth types of particles.
(165) A third phenomenon that may be exploited to control the motion of a plurality of particles is voltage- or current-dependent mobility, as described in detail in the aforementioned application Ser. No. 14/277,107.
(166) The driving mechanisms to create the colors at the individual pixels are not straightforward, and typically involve a complex series of voltage pulses (a.k.a. waveforms) as shown in
(167) The greatest positive and negative voltages (designated ±Vmax in
(168) From these blue, yellow, black or white optical states, the other four primary colors may be obtained by moving only the second particles (in this case the cyan particles) relative to the first particles (in this case the white particles), which is achieved using the lowest applied voltages (designated ±Vmin in
(169) While these general principles are useful in the construction of waveforms to produce particular colors in displays of the present invention, in practice the ideal behavior described above may not be observed, and modifications to the basic scheme are desirably employed.
(170) A generic waveform embodying modifications of the basic principles described above is illustrated in
(171) There are four distinct phases in the generic waveform illustrated in
(172) The waveform shown in
(173) As described above, the generic waveform is intrinsically DC balanced, and this may be preferred in certain embodiments of the invention. Alternatively, the pulses in phase A may provide DC balance to a series of color transitions rather than to a single transition, in a manner similar to that provided in certain black and white displays of the prior art; see for example U.S. Pat. No. 7,453,445.
(174) In the second phase of the waveform (phase B in
(175) As described above, white may be rendered by a pulse or a plurality of pulses at −Vmid. In some cases, however, the white color produced in this way may be contaminated by the yellow pigment and appear pale yellow. In order to correct this color contamination, it may be necessary to introduce some pulses of a positive polarity. Thus, for example, white may be obtained by a single instance or a repetition of instances of a sequence of pulses comprising a pulse with length T.sub.1 and amplitude +Vmax or +Vmid followed by a pulse with length T.sub.2 and amplitude −Vmid, where T.sub.2>T.sub.1. The final pulse should be a negative pulse. In
(176) As described above, black may be rendered by a pulse or a plurality of pulses (separated by periods of zero voltage) at +Vmid.
(177) As described above, magenta may be obtained by a single instance or a repetition of instances of a sequence of pulses comprising a pulse with length T.sub.3 and amplitude +Vmax or +Vmid, followed by a pulse with length T.sub.4 and amplitude −Vmid, where T.sub.4>T.sub.3. To produce magenta, the net impulse in this phase of the waveform should be more positive than the net impulse used to produce white. During the sequence of pulses used to produce magenta, the display will oscillate between states that are essentially blue and magenta. The color magenta will be preceded by a state of more negative a* and lower L* than the final magenta state.
(178) As described above, red may be obtained by a single instance or a repetition of instances of a sequence of pulses comprising a pulse with length T.sub.5 and amplitude +Vmax or +Vmid, followed by a pulse with length T.sub.6 and amplitude −Vmax or −Vmid. To produce red, the net impulse should be more positive than the net impulse used to produce white or yellow. Preferably, to produce red, the positive and negative voltages used are substantially of the same magnitude (either both Vmax or both Vmid), the length of the positive pulse is longer than the length of the negative pulse, and the final pulse is a negative pulse. During the sequence of pulses used to produce red, the display will oscillate between states that are essentially black and red. The color red will be preceded by a state of lower L*, lower a*, and lower b* than the final red state.
(179) Yellow may be obtained by a single instance or a repetition of instances of a sequence of pulses comprising a pulse with length T.sub.7 and amplitude +Vmax or +Vmid, followed by a pulse with length T.sub.8 and amplitude −Vmax. The final pulse should be a negative pulse. Alternatively, as described above, the color yellow may be obtained by a single pulse or a plurality of pulses at −Vmax.
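The pulse-sequence descriptions above reason throughout about net impulse (signed voltage multiplied by time, summed over a waveform) and DC balance. A trivial helper makes the bookkeeping concrete; the (voltage, duration) pair representation is an assumption for illustration:

```python
def net_impulse(pulses):
    """Net impulse of a waveform, as voltage x duration summed over pulses.

    `pulses` is a list of (voltage, duration) pairs; signs are as applied
    to the pixel electrode. Zero-voltage rest periods contribute nothing.
    """
    return sum(v * t for v, t in pulses)

def is_dc_balanced(pulses):
    """A waveform is DC balanced when its net impulse is zero."""
    return net_impulse(pulses) == 0
```

For example, comparing the net impulse of a candidate magenta sequence against a white sequence checks the "more positive net impulse" condition stated above.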
(180) In the third phase of the waveform (phase C in
(181) Typically, cyan and green will be produced by a pulse sequence in which +Vmin must be used. This is because it is only at this minimum positive voltage that the cyan pigment can be moved independently of the magenta and yellow pigments relative to the white pigment. Such a motion of the cyan pigment is necessary to render cyan starting from white or green starting from yellow.
(182) Finally, in the fourth phase of the waveform (phase D in
(183) Although the display shown in
(184) In general, light colors are obtained in the same manner as dark colors, but using waveforms having slightly different net impulse in phases B and C. Thus, for example, light red, light green and light blue waveforms have a more negative net impulse in phases B and C than the corresponding red, green and blue waveforms, whereas dark cyan, dark magenta, and dark yellow have a more positive net impulse in phases B and C than the corresponding cyan, magenta and yellow waveforms. The change in net impulse may be achieved by altering the lengths of pulses, the number of pulses, or the magnitudes of pulses in phases B and C.
(185) Gray colors are typically achieved by a sequence of pulses oscillating between low or mid voltages.
(186) It will be clear to one of ordinary skill in the art that in a display of the invention driven using a thin-film transistor (TFT) array the available time increments on the abscissa of
(187) The generic waveform illustrated in
(188) Since the changes to the voltages supplied to the source drivers affect every pixel, the waveform needs to be modified accordingly, so that the waveform used to produce each color must be aligned with the voltages supplied. The addition of dithering and grayscales further complicates the set of image data that must be generated to produce the desired image.
(189) An exemplary pipeline for rendering image data (e.g., a bitmap file) has been described above with reference to
(190) A variety of alternative architectures are available, as evidenced by
(191) A “real world” embodiment is shown in
(192) When users decide to display an image on the “client” (the display), they open an application on their “host” (mobile device) and pick out the image they wish to display and the specific “client” they want to display it on. The “host” then polls that particular “client” for its unique device ID and metadata. As mentioned above, this transaction may be over a short range power sipping protocol like Bluetooth 4. Once the “host” has the device ID and metadata, it combines that with the user's authentication, and the image ID and sends it to the “print server” over a wireless connection.
(193) Having received the authentication, the image ID, the client ID and metadata, the “print server” then retrieves the image from a database. This database could be a distributed storage volume (like another cloud) or it could be internal to the “print server”. Images might have been previously uploaded to the image database by the user, or may be stock images or images available for purchase. Having retrieved the user-selected image from storage, the “print server” performs a rendering operation which modifies the retrieved image to display correctly on the “client”. The rendering operation may be performed on the “print server” or it may be accessed via a separate software protocol on a dedicated cloud based rendering server (offering a “rendering service”). It may also be resource efficient to render all the user's images ahead of time and store them in the image database itself. In that case the “print server” would simply have a LUT indexed by client metadata and retrieve the correct pre-rendered image. Having procured a rendered image, the “print server” will send this data back to the “host” and the “host” will communicate this information to the “client” via the same power sipping communication protocol described earlier.
(194) In the case of the four color electrophoretic system described with respect to
(195) On the “client” an image controller will take the processed image data, where it may be stored, placed into a queue for display, or directly displayed on the ACeP screen. After the display “printing” is complete the “client” will communicate appropriate metadata with the “host” and the “host” will relay that to the “print server”. All metadata will be logged in the data volume that stores the images.
(196)
(197) A variation on this embodiment which may be more suitable for electronic signage or shelf label applications revolves around removing the “host” from the transactions. In this embodiment the “print server” will communicate directly with the “client” over the internet.
(198) Certain specific embodiments will now be described. In one of these embodiments, the color information associated with particular waveforms that is an input to the image processing (as described above) will vary, as the waveforms that are chosen may depend upon the temperature of the ACeP module. Thus, the same user-selected image may result in several different processed images, each appropriate to a particular temperature range. One option is for the host to convey to the print server information about the temperature of the client, and for the client to receive only the appropriate image. Alternatively, the client might receive several processed images, each associated with a possible temperature range. Another possibility is that a mobile host might estimate the temperature of a nearby client using information extracted from its on-board temperature sensors and/or light sensors.
(199) In another embodiment, the waveform mode, or the image rendering mode, might be variable depending on the preference of the user. For example, the user might choose a high-contrast waveform/rendering option, or a high-speed, lower-contrast option. It might even be possible that a new waveform mode becomes available after the ACeP module has been installed. In these cases, metadata concerning waveform and/or rendering mode would be sent from the host to the print server, and once again appropriately processed images, possibly accompanied by waveforms, would be sent to the client.
(200) The host would be updated by a cloud server as to the available waveform modes and rendering modes.
(201) The location where ACeP module-specific information is stored may vary. This information may reside in the print server, indexed by, for example, a serial number that would be sent along with an image request from the host. Alternatively, this information may reside in the ACeP module itself.
(202) The information transmitted from the host to the print server may be encrypted, and the information relayed from the server to the rendering service may also be encrypted. The metadata may contain an encryption key to facilitate encryption and decryption.
(203) From the foregoing, it will be seen that the present invention can provide improved color in limited palette displays with fewer artifacts than are obtained using conventional error diffusion techniques. The present invention differs fundamentally from the prior art in adjusting the primaries prior to the quantization, whereas the prior art (as described above with reference to
(204) For further details of color display systems to which the present invention can be applied, the reader is directed to the aforementioned ECD patents (which also give detailed discussions of electrophoretic displays) and to the following patents and publications: U.S. Pat. Nos. 6,017,584; 6,545,797; 6,664,944; 6,788,452; 6,864,875; 6,914,714; 6,972,893; 7,038,656; 7,038,670; 7,046,228; 7,052,571; 7,075,502; 7,167,155; 7,385,751; 7,492,505; 7,667,684; 7,684,108; 7,791,789; 7,800,813; 7,821,702; 7,839,564; 7,910,175; 7,952,790; 7,956,841; 7,982,941; 8,040,594; 8,054,526; 8,098,418; 8,159,636; 8,213,076; 8,363,299; 8,422,116; 8,441,714; 8,441,716; 8,466,852; 8,503,063; 8,576,470; 8,576,475; 8,593,721; 8,605,354; 8,649,084; 8,670,174; 8,704,756; 8,717,664; 8,786,935; 8,797,634; 8,810,899; 8,830,559; 8,873,129; 8,902,153; 8,902,491; 8,917,439; 8,964,282; 9,013,783; 9,116,412; 9,146,439; 9,164,207; 9,170,467; 9,182,646; 9,195,111; 9,199,441; 9,268,191; 9,285,649; 9,293,511; 9,341,916; 9,360,733; 9,361,836; and 9,423,666; and U.S. Patent Applications Publication Nos. 2008/0043318; 2008/0048970; 2009/0225398; 2010/0156780; 2011/0043543; 2012/0326957; 2013/0242378; 2013/0278995; 2014/0055840; 2014/0078576; 2014/0340736; 2014/0362213; 2015/0103394; 2015/0118390; 2015/0124345; 2015/0198858; 2015/0234250; 2015/0268531; 2015/0301246; 2016/0011484; 2016/0026062; 2016/0048054; 2016/0116816; 2016/0116818; and 2016/0140909.
(205) It will be apparent to those skilled in the art that numerous changes and modifications can be made in the specific embodiments of the invention described above without departing from the scope of the invention. Accordingly, the whole of the foregoing description is to be interpreted in an illustrative and not in a limitative sense.