Display Device and System
20210141221 · 2021-05-13
Inventors
CPC classification
G03H1/2294
PHYSICS
G03H2001/0825
PHYSICS
G03H2226/02
PHYSICS
G03H1/02
PHYSICS
G03H1/0808
PHYSICS
G03H2001/0224
PHYSICS
International classification
Abstract
A logic circuit comprising a logic sub-circuit arranged to output a stream, S1, of Fresnel lens values, F(x), of a Fresnel lens for display on [m×n] pixels of a pixelated display device. In a first step, the logic circuit is arranged to set an initial data value stored in a first data register unit of the logic sub-circuit to (a−k).sup.2 and set an initial data value stored in a second data register unit of the logic sub-circuit to a.sup.2−(a−k).sup.2. In a second step the logic circuit is arranged to read the initial data value stored in the first data register unit and the initial data value stored in the second data register unit in a first iteration, and to read the data value stored in the first data register unit in the preceding iteration and the data value stored in the second data register unit in the preceding iteration, in a further iteration. In a third step, the logic circuit is arranged to sum the data value read from the first data register unit and the data value read from the second data register unit to form x.sup.2. In a fourth step, the logic circuit is arranged to calculate F(x) based on x.sup.2. In a fifth step, the logic circuit is arranged to output F(x) as the next value in the stream of F(x) values. In a sixth step, the logic circuit is arranged to write x.sup.2 to the first data register unit. In a seventh step, the logic circuit is arranged to add 2k.sup.2 to the value stored in the second data register unit. In an eighth step, the logic circuit is arranged to perform further iterations that repeat the second to seventh steps for x=a+k, a+2k, a+3k . . . a+(n−1)k, wherein a is the starting value of x, k is an increment in x and F(a) is the first value of stream, S1.
Claims
1. A logic circuit comprising a logic sub-circuit arranged to output a stream, S1, of Fresnel lens values, F(x), of a Fresnel lens for display on [m×n] pixels of a pixelated display device, wherein the logic circuit is arranged to: (a) set an initial data value stored in a first data register unit of the logic sub-circuit to (a−k).sup.2 and set an initial data value stored in a second data register unit of the logic sub-circuit to a.sup.2−(a−k).sup.2; (b) in a first iteration, read the initial data value stored in the first data register unit and the initial data value stored in the second data register unit, or in a further iteration, read the data value stored in the first data register unit in the preceding iteration and the data value stored in the second data register unit in the preceding iteration; (c) sum the data value read from the first data register unit and the data value read from the second data register unit to form x.sup.2; (d) calculate F(x) based on x.sup.2; (e) output F(x) as the next value in the stream of F(x) values; (f) write x.sup.2 to the first data register unit; (g) add 2k.sup.2 to the value stored in the second data register unit; and (h) perform further iterations that repeat steps (b) to (g) for x=a+k, a+2k, a+3k . . . a+(n−1)k, wherein a is the starting value of x, k is an increment in x and F(a) is the first value of stream, S1.
2. A logic circuit as claimed in claim 1 comprising a plurality, k, of logic sub-circuits, wherein the plurality of logic sub-circuits are arranged in parallel and each logic sub-circuit is arranged to output a respective stream, S1, S2 . . . Sk, of Fresnel lens values, F(x), by performing steps (a) to (h) using a respective value of a, wherein the streams, S1, S2 . . . Sk, correspond to a=x.sub.1, x.sub.1+1, x.sub.1+2 . . . x.sub.1+(k−1), respectively.
3. A logic circuit as claimed in claim 1 wherein x.sub.1=−n/2 or x.sub.1=1−n/2.
4. A logic circuit as claimed in claim 1 wherein F(x) is calculated based on x.sup.2 using the following equation: F(x)=πx.sup.2p.sub.x.sup.2/(λf.sub.x), wherein f.sub.x is the focal length of the Fresnel lens in the x-direction, λ is the wavelength of light and p.sub.x is the pixel size of the pixelated display device in the x-direction.
5. A logic circuit as claimed in claim 1 wherein the first data register unit comprises a first input register, a first data register and a first multiplexer for selecting between a data value stored in the first input register and a data value stored in the first data register, and the second data register unit comprises a second input register, a second data register and a second multiplexer for selecting between a data value stored in the second input register and a data value stored in the second data register, wherein the logic circuit is further arranged to: provide a reset signal to the first and second multiplexers in the first iteration of step (b), in order to select the initial data values stored in the respective first and second input registers, and not provide a reset signal to the first and second multiplexers in further iterations of step (b), in order to select the data values stored in the respective first and second data registers in the preceding iteration.
6. A logic circuit as claimed in claim 1 further arranged to output a stream of Fresnel lens values, F(y), of the Fresnel lens, wherein the logic circuit is arranged to perform the following steps iteratively for y=b, b+1, b+2, . . . (b+m−1): (i) if y=b, set an initial data value stored in a first further data register unit to (b−1).sup.2 and set an initial data value stored in a second further data register unit to b.sup.2−(b−1).sup.2; (j) if y=b, read the initial data value stored in the first further data register unit of the logic circuit and the initial data value stored in the second further data register unit of the logic circuit, or if y≠b, read the data value stored in the first further data register unit in the preceding iteration and the data value stored in the second further data register unit in the preceding iteration; (k) sum the data value read from the first further data register unit and the data value read from the second further data register unit to form y.sup.2; (l) calculate F(y) based on y.sup.2; (m) output F(y) as the next value in the stream of F(y) values; (n) write y.sup.2 to the first further data register unit; and (o) add two to the value stored in the second further data register unit, wherein b is the starting value of y and F(b) is the first value of the stream of Fresnel lens values, F(y).
7. A logic circuit as claimed in claim 6 wherein b=−m/2 or 1−m/2.
8. A logic circuit as claimed in claim 6 wherein the logic circuit is arranged to calculate F(y) based on y.sup.2 using the following equation: F(y)=πy.sup.2p.sub.y.sup.2/(λf.sub.y), wherein f.sub.y is the focal length of the Fresnel lens in the y-direction, λ is the wavelength of light and p.sub.y is the pixel size of the pixelated display device in the y-direction.
9. A logic circuit as claimed in claim 6 wherein step (m) outputs the value F(y) as the next n values in the stream of F(y) values.
10. A logic circuit as claimed in claim 6 further arranged to sum each F(x) value with a corresponding F(y) value in order to form a stream of Fresnel lens values, F(x,y), for each pixel.
11. A logic device comprising the logic circuit as claimed in claim 10, wherein the device comprises an application specific integrated circuit, ASIC, or a programmable logic device, PLD, optionally a field programmable gate array, FPGA.
12. A holographic projector comprising: the device of claim 11; a pixelated display device arranged to display a light modulation pattern comprising the Fresnel lens pattern in accordance with the stream of Fresnel lens values, F(x,y); and a light source arranged to illuminate the light modulation pattern with light having a wavelength, λ.
13. A holographic projector as claimed in claim 12 wherein the device is further arranged to add the Fresnel lens values of the stream of Fresnel lens values to hologram pixel values of a stream of hologram pixel values to form a data stream of display values, wherein the light modulation pattern is formed in accordance with the stream of display values provided to the pixelated display device by the field programmable gate array.
14. A head-up display comprising the holographic projector of claim 12.
15. A method of streaming Fresnel lens values, F(x), for display on [m×n] pixels of a pixelated display device, the method comprising: (a) setting an initial data value stored in a first data register unit of a logic sub-circuit to (a−k).sup.2 and setting an initial data value stored in a second data register unit of the logic sub-circuit to a.sup.2−(a−k).sup.2; (b) in a first iteration, reading the initial data value stored in the first data register unit and the initial data value stored in the second data register unit, or in a further iteration, reading the data value stored in the first data register unit in the preceding iteration and the data value stored in the second data register unit in the preceding iteration; (c) summing the data value read from the first data register unit and the data value read from the second data register unit to form x.sup.2; (d) calculating F(x) based on x.sup.2; (e) outputting F(x) as the next value in the stream of F(x) values; (f) writing x.sup.2 to the first data register unit; (g) adding 2k.sup.2 to the value stored in the second data register unit; and (h) performing further iterations comprising repeating steps (b) to (g) for x=a+k, a+2k, a+3k . . . a+(n−1)k, wherein a is the starting value of x, k is an increment in x and F(a) is the first value of stream, S1.
16. A method of streaming Fresnel lens values, F(y), for display on [m×n] pixels of a pixelated display device, the method comprising performing the following steps iteratively for y=b, b+1, b+2, . . . (b+m−1): (i) if y=b, setting an initial data value stored in a first further data register unit of a further logic sub-circuit to (b−1).sup.2 and setting an initial data value stored in a second further data register unit of the further logic sub-circuit to b.sup.2−(b−1).sup.2; (j) if y=b, reading the initial data value stored in the first further data register unit and the initial data value stored in the second further data register unit, or if y≠b, reading the data value stored in the first further data register unit in the preceding iteration and the data value stored in the second further data register unit in the preceding iteration; (k) summing the data value read from the first further data register unit and the data value read from the second further data register unit to form y.sup.2; (l) calculating F(y) based on y.sup.2; (m) outputting F(y) as the next value in the stream of F(y) values; (n) writing y.sup.2 to the first further data register unit; and (o) adding two to the value stored in the second further data register unit, wherein b is the starting value of y and F(b) is the first value of the stream of Fresnel lens values, F(y).
17. A method of streaming Fresnel lens values, F(x,y), for display on [m×n] pixels of a pixelated display device, the method comprising streaming Fresnel lens values, F(x), according to the method of claim 15; streaming Fresnel lens values, F(y), by a method comprising performing the following steps iteratively for y=b, b+1, b+2, . . . (b+m−1): (p) if y=b, setting an initial data value stored in a first further data register unit of a further logic sub-circuit to (b−1).sup.2 and setting an initial data value stored in a second further data register unit of the further logic sub-circuit to b.sup.2−(b−1).sup.2; (q) if y=b, reading the initial data value stored in the first further data register unit and the initial data value stored in the second further data register unit, or if y≠b, reading the data value stored in the first further data register unit in the preceding iteration and the data value stored in the second further data register unit in the preceding iteration; (r) summing the data value read from the first further data register unit and the data value read from the second further data register unit to form y.sup.2; (s) calculating F(y) based on y.sup.2; (t) outputting F(y) as the next value in the stream of F(y) values; (u) writing y.sup.2 to the first further data register unit; and (v) adding two to the value stored in the second further data register unit, wherein b is the starting value of y and F(b) is the first value of the stream of Fresnel lens values, F(y); and summing each F(x) value of the stream of F(x) values with a corresponding F(y) value of the stream of F(y) values.
18. A method of holographic projection, the method comprising: streaming a stream of Fresnel lens values, F(x,y) according to the method of claim 17; displaying a light modulation pattern comprising a Fresnel lens pattern in accordance with the stream of Fresnel lens values, F(x,y) on [m×n] pixels of a pixelated display device, and illuminating the light modulation pattern with light having a wavelength, λ.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0040] Specific embodiments are described by way of example only with reference to the following figures:
[0055] The same reference numbers will be used throughout the drawings to refer to the same or like parts.
DETAILED DESCRIPTION OF EMBODIMENTS
[0056] The present invention is not restricted to the embodiments described in the following but extends to the full scope of the appended claims. That is, the present invention may be embodied in different forms and should not be construed as limited to the described embodiments, which are set out for the purpose of illustration.
[0057] Terms of a singular form may include plural forms unless specified otherwise.
A structure described as being formed at an upper portion/lower portion of another structure or on/under the other structure should be construed as including a case where the structures contact each other and, moreover, a case where a third structure is disposed therebetween.
[0059] In describing a time relationship—for example, when the temporal order of events is described as “after”, “subsequent”, “next”, “before” or suchlike—the present disclosure should be taken to include continuous and non-continuous events unless otherwise specified. For example, the description should be taken to include a case which is not continuous unless wording such as “just”, “immediate” or “direct” is used.
[0060] Although the terms “first”, “second”, etc. may be used herein to describe various elements, these elements are not to be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the appended claims.
[0061] Features of different embodiments may be partially or overall coupled to or combined with each other, and may be variously inter-operated with each other. Some embodiments may be carried out independently from each other, or may be carried out together in co-dependent relationship.
[0062] Optical Configuration
[0064] A light source 110, for example a laser or laser diode, is disposed to illuminate the SLM 140 via a collimating lens 111. The collimating lens causes a generally planar wavefront of light to be incident on the SLM. In
[0065] Notably, in this type of holography, each pixel of the hologram contributes to the whole reconstruction. There is not a one-to-one correlation between specific points (or image pixels) on the replay field and specific light-modulating elements (or hologram pixels). In other words, modulated light exiting the light-modulating layer is distributed across the replay field.
[0066] In these embodiments, the position of the holographic reconstruction in space is determined by the dioptric (focusing) power of the Fourier transform lens. In the embodiment shown in
[0067] Hologram Calculation
[0068] In some embodiments, the computer-generated hologram is a Fourier transform hologram, or simply a Fourier hologram or Fourier-based hologram, in which an image is reconstructed in the far field by utilising the Fourier transforming properties of a positive lens. The Fourier hologram is calculated by Fourier transforming the desired light field in the replay plane back to the lens plane. Computer-generated Fourier holograms may be calculated using Fourier transforms.
[0069] A Fourier transform hologram may be calculated using an algorithm such as the Gerchberg-Saxton algorithm. Furthermore, the Gerchberg-Saxton algorithm may be used to calculate a hologram in the Fourier domain (i.e. a Fourier transform hologram) from amplitude-only information in the spatial domain (such as a photograph). The phase information related to the object is effectively “retrieved” from the amplitude-only information in the spatial domain. In some embodiments, a computer-generated hologram is calculated from amplitude-only information using the Gerchberg-Saxton algorithm or a variation thereof.
[0070] The Gerchberg-Saxton algorithm considers the situation when intensity cross-sections of a light beam, I.sub.A(x,y) and I.sub.B(x,y), in the planes A and B respectively, are known and I.sub.A(x,y) and I.sub.B(x,y) are related by a single Fourier transform. With the given intensity cross-sections, an approximation to the phase distribution in the planes A and B, ψ.sub.A(x,y) and ψ.sub.B(x,y) respectively, is found. The Gerchberg-Saxton algorithm finds solutions to this problem by following an iterative process. More specifically, the Gerchberg-Saxton algorithm iteratively applies spatial and spectral constraints while repeatedly transferring a data set (amplitude and phase), representative of I.sub.A(x,y) and I.sub.B(x,y), between the spatial domain and the Fourier (spectral or frequency) domain. The corresponding computer-generated hologram in the spectral domain is obtained through at least one iteration of the algorithm. The algorithm is convergent and arranged to produce a hologram representing an input image. The hologram may be an amplitude-only hologram, a phase-only hologram or a fully complex hologram.
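By way of illustration only, the iterative transfer between domains described above may be sketched as follows (Python; `numpy` and the function name are illustrative and form no part of the disclosure). The sketch imposes the known amplitude in the spatial domain and a phase-only constraint in the Fourier domain on each pass:

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=20, rng=None):
    """Illustrative Gerchberg-Saxton sketch: retrieve a Fourier-plane
    phase distribution consistent with a known spatial-domain amplitude."""
    rng = np.random.default_rng() if rng is None else rng
    # Start with the known amplitude and a random phase guess.
    phase = rng.uniform(-np.pi, np.pi, target_amplitude.shape)
    for _ in range(iterations):
        # Spatial-domain constraint: impose the known amplitude.
        field = target_amplitude * np.exp(1j * phase)
        # Transfer the data set to the Fourier (hologram) plane.
        hologram = np.fft.fft2(field)
        # Fourier-domain constraint: keep phase only (unit amplitude).
        hologram_phase = np.angle(hologram)
        # Transfer back to the spatial domain.
        back = np.fft.ifft2(np.exp(1j * hologram_phase))
        phase = np.angle(back)
    return hologram_phase  # phase-only Fourier hologram

```
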
[0071] In some embodiments, a phase-only hologram is calculated using an algorithm based on the Gerchberg-Saxton algorithm such as described in British patent 2,498,170 or 2,501,112 which are hereby incorporated in their entirety by reference. However, embodiments disclosed herein describe calculating a phase-only hologram by way of example only. In these embodiments, the Gerchberg-Saxton algorithm retrieves the phase information ψ[u, v] of the Fourier transform of the data set which gives rise to the known amplitude information T[x,y], wherein the amplitude information T[x,y] is representative of a target image (e.g. a photograph). Since the magnitude and phase are intrinsically combined in the Fourier transform, the transformed magnitude and phase contain useful information about the accuracy of the calculated data set. Thus, the algorithm may be used iteratively with feedback on both the amplitude and the phase information. However, in these embodiments, only the phase information ψ[u, v] is used as the hologram to form a holographic representation of the target image at an image plane. The hologram is a data set (e.g. 2D array) of phase values.
[0072] In other embodiments, an algorithm based on the Gerchberg-Saxton algorithm is used to calculate a fully-complex hologram. A fully-complex hologram is a hologram having a magnitude component and a phase component. The hologram is a data set (e.g. 2D array) comprising an array of complex data values wherein each complex data value comprises a magnitude component and a phase component.
[0073] In some embodiments, the algorithm processes complex data and the Fourier transforms are complex Fourier transforms. Complex data may be considered as comprising (i) a real component and an imaginary component or (ii) a magnitude component and a phase component. In some embodiments, the two components of the complex data are processed differently at various stages of the algorithm.
[0075] First processing block 250 receives the starting complex data set and performs a complex Fourier transform to form a Fourier transformed complex data set. Second processing block 253 receives the Fourier transformed complex data set and outputs a hologram 280A. In some embodiments, the hologram 280A is a phase-only hologram. In these embodiments, second processing block 253 quantises each phase value and sets each amplitude value to unity in order to form hologram 280A. Each phase value is quantised in accordance with the phase-levels which may be represented on the pixels of the spatial light modulator which will be used to “display” the phase-only hologram. For example, if each pixel of the spatial light modulator provides 256 different phase levels, each phase value of the hologram is quantised into one phase level of the 256 possible phase levels. Hologram 280A is a phase-only Fourier hologram which is representative of an input image. In other embodiments, the hologram 280A is a fully complex hologram comprising an array of complex data values (each including an amplitude component and a phase component) derived from the received Fourier transformed complex data set. In some embodiments, second processing block 253 constrains each complex data value to one of a plurality of allowable complex modulation levels to form hologram 280A. The step of constraining may include setting each complex data value to the nearest allowable complex modulation level in the complex plane. It may be said that hologram 280A is representative of the input image in the spectral or Fourier or frequency domain. In some embodiments, the algorithm stops at this point.
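By way of illustration only, the quantisation of phase values to the phase levels representable on the spatial light modulator may be sketched as follows (assuming 256 levels and a [−π, π) phase range; the function name is illustrative):

```python
import numpy as np

def quantise_phase(phase, levels=256):
    """Illustrative sketch: quantise hologram phase values to the
    discrete phase levels representable on an SLM pixel."""
    # Map each phase from [-pi, pi) onto an integer level index 0..levels-1.
    idx = np.round((phase + np.pi) / (2 * np.pi) * levels) % levels
    # Return the representable phase value for each pixel.
    return idx * (2 * np.pi / levels) - np.pi
```

The quantisation error is at most half a phase step, i.e. π/levels.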
[0076] However, in other embodiments, the algorithm continues as represented by the dotted arrow in
[0077] Third processing block 256 receives the modified complex data set from the second processing block 253 and performs an inverse Fourier transform to form an inverse Fourier transformed complex data set. It may be said that the inverse Fourier transformed complex data set is representative of the input image in the spatial domain.
[0078] Fourth processing block 259 receives the inverse Fourier transformed complex data set and extracts the distribution of magnitude values 211A and the distribution of phase values 213A. Optionally, the fourth processing block 259 assesses the distribution of magnitude values 211A. Specifically, the fourth processing block 259 may compare the distribution of magnitude values 211A of the inverse Fourier transformed complex data set with the input image 210 which is itself, of course, a distribution of magnitude values. If the difference between the distribution of magnitude values 211A and the input image 210 is sufficiently small, the fourth processing block 259 may determine that the hologram 280A is acceptable.
[0079] That is, if the difference between the distribution of magnitude values 211A and the input image 210 is sufficiently small, the fourth processing block 259 may determine that the hologram 280A is a sufficiently-accurate representation of the input image 210. In some embodiments, the distribution of phase values 213A of the inverse Fourier transformed complex data set is ignored for the purpose of the comparison. It will be appreciated that any number of different methods for comparing the distribution of magnitude values 211A and the input image 210 may be employed and the present disclosure is not limited to any particular method. In some embodiments, a mean square difference is calculated and if the mean square difference is less than a threshold value, the hologram 280A is deemed acceptable. If the fourth processing block 259 determines that the hologram 280A is not acceptable, a further iteration of the algorithm may be performed. However, this comparison step is not essential and in other embodiments, the number of iterations of the algorithm performed is predetermined or preset or user-defined.
[0081] The complex data set formed by the data forming step 202B of
[0082] The algorithm of this embodiment may be expressed by the following equations:
R.sub.n+1[x,y]=F′{exp(iψ.sub.n[u,v])}
ψ.sub.n[u,v]=∠F{η·exp(i∠R.sub.n[x,y])}
η=T[x,y]−α(|R.sub.n[x,y]|−T[x,y])
[0083] where:
[0084] F′ is the inverse Fourier transform;
[0085] F is the forward Fourier transform;
[0086] R.sub.n[x,y] is the complex data set output by the third processing block 256;
[0087] T[x,y] is the input or target image;
[0088] ∠ is the phase component;
[0089] ψ.sub.n[u,v] is the phase-only hologram 280B;
[0090] η is the new distribution of magnitude values 211B; and
[0091] α is the gain factor.
[0092] The gain factor α may be fixed or variable. In some embodiments, the gain factor α is determined based on the size and rate of the incoming target image data. In some embodiments, the gain factor α is dependent on the iteration number. In some embodiments, the gain factor α is solely a function of the iteration number.
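By way of illustration only, the feedback equations above may be sketched as follows (Python; the function name and parameter values are illustrative). Each iteration over-corrects the magnitude error by the gain factor α before re-imposing the phase-only constraint:

```python
import numpy as np

def gs_with_feedback(T, iterations=20, alpha=0.5, rng=None):
    """Illustrative sketch of the Gerchberg-Saxton variant with
    amplitude feedback; T is the target image (real amplitudes)."""
    rng = np.random.default_rng() if rng is None else rng
    # Initial spatial-domain data set: target amplitude, random phase.
    R = T * np.exp(1j * rng.uniform(-np.pi, np.pi, T.shape))
    for _ in range(iterations):
        # eta = T - alpha * (|R_n| - T): over-correct the magnitude error.
        eta = T - alpha * (np.abs(R) - T)
        # psi_n[u,v] = angle(F{eta * exp(i * angle(R_n))})
        psi = np.angle(np.fft.fft2(eta * np.exp(1j * np.angle(R))))
        # R_{n+1}[x,y] = F'{exp(i * psi_n[u,v])}
        R = np.fft.ifft2(np.exp(1j * psi))
    return psi  # phase-only hologram
```
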
[0093] The embodiment of
[0094] In some embodiments, the Fourier transform is performed using the spatial light modulator. Specifically, the hologram data is combined with second data providing optical power. That is, the data written to the spatial light modulator comprises hologram data representing the object and lens data representative of a lens. When displayed on a spatial light modulator and illuminated with light, the lens data emulates a physical lens—that is, it brings light to a focus in the same way as the corresponding physical optic. The lens data therefore provides optical, or focusing, power. In these embodiments, the physical Fourier transform lens 120 of
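By way of illustration only, combining hologram data with lens data may be sketched as the modulo-2π addition of the two phase distributions (the function name is illustrative):

```python
import numpy as np

def add_software_lens(hologram_phase, lens_phase):
    """Illustrative sketch: combine hologram data with lens data by
    adding the phase distributions, wrapped into [-pi, pi)."""
    combined = hologram_phase + lens_phase
    # Wrap the summed phase back into the representable range.
    return (combined + np.pi) % (2 * np.pi) - np.pi
```
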
[0095] In some embodiments, the Fourier transform is performed jointly by a physical Fourier transform lens and a software lens. That is, some optical power which contributes to the Fourier transform is provided by a software lens and the rest of the optical power which contributes to the Fourier transform is provided by a physical optic or optics.
[0096] In some embodiments, there is provided a real-time engine arranged to receive image data and calculate holograms in real-time using the algorithm. In some embodiments, the image data is a video comprising a sequence of image frames. In other embodiments, the holograms are pre-calculated, stored in computer memory and recalled as needed for display on a SLM. That is, in some embodiments, there is provided a repository of predetermined holograms.
[0097] Embodiments relate to Fourier holography and Gerchberg-Saxton type algorithms by way of example only. The present disclosure is equally applicable to Fresnel holography and Fresnel holograms which may be calculated by a similar method. The present disclosure is also applicable to holograms calculated by other techniques such as those based on point cloud methods.
[0098] Light Modulation
[0099] A spatial light modulator may be used to display the diffractive pattern including the computer-generated hologram. If the hologram is a phase-only hologram, a spatial light modulator which modulates phase is required. If the hologram is a fully-complex hologram, a spatial light modulator which modulates phase and amplitude may be used or a first spatial light modulator which modulates phase and a second spatial light modulator which modulates amplitude may be used.
[0100] In some embodiments, the light-modulating elements (i.e. the pixels) of the spatial light modulator are cells containing liquid crystal. That is, in some embodiments, the spatial light modulator is a liquid crystal device in which the optically-active component is the liquid crystal. Each liquid crystal cell is configured to selectively-provide a plurality of light modulation levels. That is, each liquid crystal cell is configured at any one time to operate at one light modulation level selected from a plurality of possible light modulation levels. Each liquid crystal cell is dynamically-reconfigurable to a different light modulation level from the plurality of light modulation levels. In some embodiments, the spatial light modulator is a reflective liquid crystal on silicon (LCOS) spatial light modulator but the present disclosure is not restricted to this type of spatial light modulator.
[0101] A LCOS device provides a dense array of light modulating elements, or pixels, within a small aperture (e.g. a few centimetres in width). The pixels are typically approximately 10 microns or less which results in a diffraction angle of a few degrees meaning that the optical system can be compact. It is easier to adequately illuminate the small aperture of a LCOS SLM than it is the larger aperture of other liquid crystal devices. An LCOS device is typically reflective which means that the circuitry which drives the pixels of a LCOS SLM can be buried under the reflective surface. This results in a higher aperture ratio. In other words, the pixels are closely packed meaning there is very little dead space between the pixels. This is advantageous because it reduces the optical noise in the replay field. A LCOS SLM uses a silicon backplane which has the advantage that the pixels are optically flat. This is particularly important for a phase modulating device.
[0102] A suitable LCOS SLM is described below, by way of example only, with reference to
[0103] Each of the square electrodes 301 defines, together with the overlying region of the transparent electrode 307 and the intervening liquid crystal material, a controllable phase-modulating element 308, often referred to as a pixel. The effective pixel area, or fill factor, is the percentage of the total pixel which is optically active, taking into account the space between pixels 301a. By control of the voltage applied to each electrode 301 with respect to the transparent electrode 307, the properties of the liquid crystal material of the respective phase modulating element may be varied, thereby to provide a variable delay to light incident thereon. The effect is to provide phase-only modulation to the wavefront, i.e. no amplitude effect occurs.
[0104] The described LCOS SLM outputs spatially modulated light in reflection. Reflective LCOS SLMs have the advantage that the signal lines, gate lines and transistors are below the mirrored surface, which results in high fill factors (typically greater than 90%) and high resolutions. Another advantage of using a reflective LCOS spatial light modulator is that the liquid crystal layer can be half the thickness than would be necessary if a transmissive device were used. This greatly improves the switching speed of the liquid crystal (a key advantage for the projection of moving video images). However, the teachings of the present disclosure may equally be implemented using a transmissive LCOS SLM.
[0105] Software Fresnel Lens Calculation
[0106] As described above, lens data representative of a lens may be written to the pixels of an SLM, wherein the lens data emulates a physical lens—that is, it brings light to a focus in the same way as the corresponding physical optic. Lens data representative of a Fresnel lens that is substantially centred on or around the centre of the pixelated SLM is calculated. The lens data is combined with holographic data and written to the pixels of the SLM. Data representative of a Fresnel lens is calculated using respective streams of integer square values corresponding to the coordinates in the x and y directions. The x component Fresnel lens values F(x) are proportional to integer square values x.sup.2, where the integer x is zero for the Fresnel lens value substantially at the centre of the Fresnel lens (i.e. at the origin or coordinate (0, 0)) and increases/decreases by one for each successive coordinate value in the x direction. Similarly, the y component Fresnel lens values F(y) are proportional to integer square values y.sup.2, where the integer y is zero for the Fresnel lens value substantially at the centre of the Fresnel lens (i.e. at the origin or coordinate (0, 0)) and increases/decreases by one for each successive coordinate value in the y direction. The Fresnel lens value F(x,y) written to a pixel value P(x,y) at the coordinate (x,y) of the pixel array is calculated as the sum of the x component Fresnel lens value F(x) and the y component Fresnel lens value F(y). Accordingly, the following description refers to “x coordinate Fresnel lens values” F(x), which are calculated based on integer square values x.sup.2 and “y coordinate Fresnel lens values” F(y), which are calculated based on integer square values y.sup.2.
[0107] X coordinate Fresnel lens values F(x) are determined using the following equation (1):

F(x)=[π p.sub.x.sup.2/(λ f.sub.x)]·x.sup.2  (1)

[0108] wherein f.sub.x is the focal length of the Fresnel lens in the x-direction, λ is the wavelength of light, p.sub.x is the pixel size of the pixelated display device in the x-direction and x is the integer value of the x-coordinate.
[0109] Y coordinate Fresnel lens values F(y) are determined using the following equation (2):

F(y)=[π p.sub.y.sup.2/(λ f.sub.y)]·y.sup.2  (2)

[0110] wherein f.sub.y is the focal length of the Fresnel lens in the y-direction, λ is the wavelength of light, p.sub.y is the pixel size of the pixelated display device in the y-direction and y is the integer value of the y-coordinate.
[0111] Thus, the x coordinate Fresnel lens values F(x) are determined as the product of integer square values x.sup.2 and the parameter π p.sub.x.sup.2/(λ f.sub.x), which is a constant for a given Fresnel lens, display device and wavelength channel. Similarly, the y coordinate Fresnel lens values F(y) are determined as the product of integer square values y.sup.2 and the parameter π p.sub.y.sup.2/(λ f.sub.y), which is a constant for a given Fresnel lens, display device and wavelength channel.
[0112] The x and y coordinate Fresnel lens values F(x), F(y) are calculated using respective data streams of integer square values. The combined Fresnel lens values F(x,y) are determined as the sum of the Fresnel lens values F(x), F(y) for the corresponding x and y coordinates. The lens data are written to the array of pixels of the SLM as a data stream of Fresnel lens values F(x,y), as described further below. In practice, the lens data is calculated by a logic circuit pipeline.
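The separable calculation described above can be illustrated with a short software sketch (this is not the hardware pipeline itself; the function name and the per-axis constants c_x = π p_x²/(λ f_x) and c_y = π p_y²/(λ f_y) are assumptions consistent with the description):

```python
import math

def fresnel_lens_values(m, n, wavelength, f_x, f_y, p_x, p_y):
    # Per-axis constants: fixed for a given Fresnel lens, display device and
    # wavelength channel (the exact expressions are an assumption).
    c_x = math.pi * p_x ** 2 / (wavelength * f_x)
    c_y = math.pi * p_y ** 2 / (wavelength * f_y)
    # x runs over columns (-n/2 .. n/2-1), y over rows (-m/2 .. m/2-1),
    # with integer value 0 at the centre of the lens.
    F_x = [c_x * x * x for x in range(-n // 2, n // 2)]
    F_y = [c_y * y * y for y in range(-m // 2, m // 2)]
    # The combined value for pixel (x, y) is the sum F(x) + F(y).
    return [[fx + fy for fx in F_x] for fy in F_y]
```

The separability is what makes the hardware pipeline cheap: only m + n squares are computed for the full m×n array, rather than one per pixel.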
[0113] The generation of a data stream of integer square values by a logic circuit in a pipeline is a complex task. In particular, the integer square function requires two floating point multipliers for each of the x and y coordinates (i.e. a total of four multipliers). A parameter register 410 provides the parameter π p.sub.x.sup.2/(λ f.sub.x) to the x coordinate pipeline and the parameter π p.sub.y.sup.2/(λ f.sub.y) to the y coordinate pipeline, for multiplying the respective integer square values x.sup.2 and y.sup.2 to determine the corresponding x and y coordinate Fresnel lens values F(x), F(y) using equations (1) and (2) above.
[0114] In the x coordinate pipeline (i.e. x coordinate logic sub-circuit), x coordinate generator 402 generates a data stream of integer values x. In particular, x coordinate generator 402 outputs a sequence of numbers that is an arithmetic progression having a common difference of one. In embodiments, for a display comprising [m×n] pixels, the sequence of integers comprises x=−n/2, −n/2+1, −n/2+2, . . . n/2−1. Accordingly, the integer x is zero at or near the centre of the n pixels in the x direction (i.e. at the origin or coordinate (0, 0) corresponding to the centre of the Fresnel lens).
[0115] First floating point multiplier 404 receives the data stream of integer values x from x coordinate generator 402 and multiplies each integer value x with itself to generate a data stream of integer square values x.sup.2. Second floating point multiplier 406 receives the data stream of integer square values x.sup.2 from first floating point multiplier 404 and multiplies each integer square value x.sup.2 with the parameter π p.sub.x.sup.2/(λ f.sub.x) received from parameter register 410 to generate a stream of x coordinate Fresnel lens values F(x).
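A minimal software model of this multiplier-based x pipeline may help fix ideas (the comments map to the reference numerals in the text; the constant c stands in for the parameter held in register 410):

```python
def x_pipeline_naive(n, c):
    # x coordinate generator 402: arithmetic progression, common difference 1
    for x in range(-n // 2, n // 2):
        x_squared = x * x          # first floating point multiplier 404
        yield c * x_squared        # second floating point multiplier 406
```

Note that this form costs two multiplications per output sample; the circuit stage described below removes both.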
[0116] Similarly, in the y coordinate pipeline (i.e. y coordinate logic sub-circuit), y coordinate generator 412 generates a data stream of integer values y. In particular, y coordinate generator 412 outputs a sequence of numbers that is an arithmetic progression having a common difference of one. In embodiments, for a display comprising [m×n] pixels, the sequence of integers comprises y=−m/2, −m/2+1, −m/2+2, . . . m/2−1. Accordingly, the integer y is zero at or near the centre of the m pixels in the y direction (i.e. at the origin or coordinate (0, 0)). Third floating point multiplier 414 receives the data stream of integer values y from y coordinate generator 412 and multiplies each integer value y with itself to generate a data stream of integer square values y.sup.2. Fourth floating point multiplier 416 receives the data stream of integer square values y.sup.2 from third floating point multiplier 414 and multiplies each integer square value y.sup.2 with the parameter π p.sub.y.sup.2/(λ f.sub.y) received from parameter register 410 to generate a stream of y coordinate Fresnel lens values F(y).
[0117] In the final stage of the pipeline of logic circuit 400, adder 420 receives the data stream of x coordinate Fresnel lens values F(x) from second floating point multiplier 406 of the x coordinate pipeline and the data stream of y coordinate Fresnel lens values F(y) from fourth floating point multiplier 416 of the y coordinate pipeline, and adds the respective values to generate an output data stream of Fresnel lens values F(x,y) 450 for writing to the pixels P(x,y) at the corresponding coordinates (x,y). The output stream of Fresnel lens values F(x,y) is written to the pixels of the array in raster scan order.
[0118] As the skilled person will appreciate, the processing pipeline required for lens function calculation involves calculating integer square values for the coordinates in the x and y directions separately. Thus, two separate logic stages are required to calculate the integer square values. In implementations having multiple wavelength channels, such as red, green and blue channels to achieve a full colour display, two such logic stages are required for each wavelength channel, so that six logic stages for calculating integer square values are required in total. The calculation of a data stream of integer square values is complex and time consuming compared to other logic processes. Accordingly, it would be desirable to reduce the complexity of, and the time taken by, the integer square calculation stages of the processing pipeline used to calculate lens data.
[0119] The inventor has recognised that it is possible to calculate integer square values without the requirement for multiplication.
[0120]
[0121] The inventor has further recognised that given an ordered input stream of integers in a sequence comprising an arithmetic progression of integer values with a difference or increment between consecutive integer values of one, such as {0, 1, 2, 3, 4, 5 . . . }, it is possible to calculate the corresponding ordered stream of integer square values {0, 1, 4, 9, 16, 25 . . . } more simply. This is possible because the integer square values in the sequence follow a pattern. In particular, for consecutive integer square values in the sequence, the value of the difference between one integer square value and the previous integer square value in the sequence always increases by two (+2). This is true for positive and negative integer values as illustrated by the following table:
TABLE 1

  Integer value,   Integer square value,   Integer square difference value,
  X                X.sup.2                 X.sup.2 − (X − 1).sup.2
  −5               25                      (25 − 36) = −11
  −4               16                      (16 − 25) = −9
  −3                9                       (9 − 16) = −7
  −2                4                       (4 − 9) = −5
  −1                1                       (1 − 4) = −3
   0                0                       (0 − 1) = −1
   1                1                       (1 − 0) = 1
   2                4                       (4 − 1) = 3
   3                9                       (9 − 4) = 5
   4               16                      (16 − 9) = 7
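The pattern in Table 1 can be checked in a couple of lines of Python:

```python
# Difference between consecutive integer square values for x = -5 .. 4.
diffs = [x * x - (x - 1) * (x - 1) for x in range(-5, 5)]
assert diffs == [-11, -9, -7, -5, -3, -1, 1, 3, 5, 7]
# The difference always increases by two, for negative and positive integers.
assert all(later - earlier == 2 for earlier, later in zip(diffs, diffs[1:]))
```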
[0122]
[0123] Circuit stage 600 comprises first and second input registers 602, 612, first and second multiplexers 604, 614, first adder 620, second (+2) adder 617 and first and second data registers 606, 616. In this example, first input register 602, first multiplexer 604 and first data register 606 form a first register unit for providing a value A for the next calculation. Similarly, second input register 612, second multiplexer 614 and second data register 616 form a second register unit for providing a value B for the next calculation.
[0124] As the skilled person will appreciate from the following description, the stream of integer square values in the x direction used to calculate the x coordinate Fresnel lens value is the same for each row in the pixel array. Thus, a reset signal is used at the start of each row, to reinitiate generation of the sequence of integer square values in the x direction. The stream of integer square values in the y direction used to calculate the y coordinate Fresnel lens value is the same for each column in the pixel array. This means that the same integer square value in the y direction is used to calculate every y coordinate Fresnel lens value in the same row. This can be achieved using various techniques, as discussed below.
[0125] The following example illustrates the operation of circuit stage 600 for generating a stream of n integer square values x.sup.2 in the x direction used to calculate n x coordinate Fresnel lens values F(x) corresponding to n pixels in a row. In the example, the coordinates of the n pixels in a row are numbered from −n/2 to n/2−1.
[0126] In the presence of a reset signal, first multiplexer 604 selects and outputs a value A received at its first (“1”) input from first input register 602 and second multiplexer 614 selects and outputs a value B received at its first (“1”) input from second input register 612. Input register 602 provides an initial integer square value A for the sequence of integer values, and input register 612 provides an initial difference value B for the sequence. The initial values A and B from input registers 602 and 612 are predetermined for calculating the first integer square value of the sequence. In particular, since the first x coordinate value in a row is −n/2 (also referred to herein as the “starting value for x”), the initial integer square value A received from first input register 602 is (−n/2−1).sup.2 and the initial difference value B received from second input register 612 is (−n/2).sup.2−(−n/2−1).sup.2, since (−n/2−1) corresponds to the previous value of x for the starting value −n/2. For example, in Table 1, there are 10 pixels in the x direction numbered from −5 to 4, so the starting integer square value A is (−6).sup.2 (i.e. 36) and the starting difference value B is (−5).sup.2−(−6).sup.2 (i.e. −11). In the absence of a reset signal, first multiplexer 604 selects and outputs a value A received at its second (“0”) input from first data register 606, and second multiplexer 614 selects and outputs a value B received at its second (“0”) input from second data register 616. First and second data registers 606, 616 store the respective values of A and B based on feedback from the previous (i.e. immediately preceding) calculation, as described below.
[0127] First adder 620 receives the output values A and B from first and second multiplexers 604, 614 at its first and second inputs 608, 618, respectively. First adder 620 adds together the values A and B and outputs the current integer square value 650. The current integer square value 650 is also fed back and stored in first data register 606. Thus, the value stored in first register 606 corresponds to the previous integer square value A for the next calculation, which is selected and output by first multiplexer 604 in the absence of a reset signal. In addition, the output value B of second multiplexer 614 is fed back to second (+2) adder 617. Second adder 617 adds two (+2) to the received value B to generate a new difference value B which is stored in second data register 616. Thus, the value B stored in second register 616 corresponds to the difference value for the next calculation, which is selected and output by second multiplexer 614 in the absence of a reset signal.
[0128] Table 2 shows an example that illustrates the operation of the circuit stage 600 of
TABLE 2

  X      −5   −4   −3   −2   −1    0    1    2    3    4
  A      36   25   16    9    4    1    0    1    4    9
  B     −11   −9   −7   −5   −3   −1    1    3    5    7
  A + B  25   16    9    4    1    0    1    4    9   16
[0129] As can be seen from Table 2, the calculation A+B determines the integer square value X.sup.2 for both positive and negative integer values. As the skilled person will appreciate, Table 2 merely illustrates the calculation of a sequence of integer square values for a particular example coordinate system, with n=10 and x=−5 to 4, suitable for calculating x coordinate Fresnel lens values as described herein. Many other examples are possible and contemplated by the present disclosure.
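The register feedback of circuit stage 600 can be modelled in a few lines of Python (a sketch only; the hardware registers become plain variables, and the reset corresponds to re-entering the function):

```python
def circuit_stage_600(n):
    # Reset: load initial values from input registers 602 and 612 for the
    # starting coordinate x = -n/2.
    a = (-n // 2 - 1) ** 2        # previous integer square value, A
    b = (-n // 2) ** 2 - a        # difference value, B
    for _ in range(n):
        square = a + b            # first adder 620: A + B = x²
        yield square
        a = square                # feedback into first data register 606
        b = b + 2                 # second (+2) adder 617 into data register 616
```

For n=10 this reproduces the A + B row of Table 2 exactly, using additions only.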
[0130] In operation, circuit stage 600 iteratively generates the sequence of x coordinate integer square values x.sup.2 for calculating the x coordinate Fresnel lens values F(x) for each row of an array of [m×n] pixels. Since pixel values comprising the Fresnel lens values are typically streamed to an array of pixels of a spatial light modulator in raster scan order (i.e. row by row), the reset signal is applied at the start of each row. The stream of x coordinate Fresnel lens values F(x) is illustrated in
[0131] Circuit stage 600 may also be used to generate a sequence of y coordinate integer square values y.sup.2 for calculating the y coordinate Fresnel lens values F(y) for an array of [m×n] pixels. As mentioned above, since the Fresnel lens values are streamed to the pixel array row by row, the same y coordinate integer square value y.sup.2 is used for calculating the y coordinate Fresnel lens value F(y) of each pixel in a row. Thus, in embodiments, circuit stage 600 may output each y coordinate integer square value y.sup.2 calculated by first adder 620 n times before beginning the next iteration to calculate the next integer square value (y+1).sup.2. In other embodiments, each integer square value y.sup.2 output by the first adder 620 may be provided to a buffer (not shown) that stores the integer square value y.sup.2. The integer square value y.sup.2 stored in the buffer may then be read out n times for calculating the stream of y coordinate Fresnel lens values F(y). In still further embodiments, the buffer may be provided after the calculation of the y coordinate Fresnel lens values F(y). The stream of y coordinate Fresnel lens values F(y) is illustrated in
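The y-direction read-out can be sketched with the same add-only recurrence, each square being emitted n times so that every pixel in a row shares the same y component (the function name and buffer-as-inner-loop model are illustrative assumptions):

```python
def y_square_stream(m, n):
    # Same incremental recurrence as the x direction, for y = -m/2 .. m/2-1.
    a = (-m // 2 - 1) ** 2
    b = (-m // 2) ** 2 - a
    for _ in range(m):
        y_squared = a + b
        for _ in range(n):        # buffered value read out n times per row
            yield y_squared
        a, b = y_squared, b + 2
```

Because each y² is reused n times, this stage has n clock cycles in which to compute the next value, which is why the timing pressure falls on the x pipeline rather than the y pipeline.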
[0132] Accordingly, circuit stage 600 generates an output sequence of integer square values, in response to an input sequence comprising an arithmetic progression of integers having a common difference of one (which may comprise negative, zero and positive integers), in a simpler and more efficient logic process that avoids the need for a multiplication function using floating point multipliers.
[0133] The circuit stage 600 of
[0134]
[0135] In particular, logic circuit 700 comprises an x coordinate pipeline comprising first integer square circuit stage 702 for generating a stream of integer square values x.sup.2 corresponding to the x coordinates of pixels in the x direction and first floating point multiplier 706. Logic circuit 700 further comprises a y coordinate pipeline comprising second integer square circuit stage 712 for generating a stream of integer square values y.sup.2 corresponding to the y coordinates of pixels in the y direction and second floating point multiplier 716.
[0136] A parameter register 710 provides the parameter π p.sub.x.sup.2/(λ f.sub.x) to first floating point multiplier 706 and the parameter π p.sub.y.sup.2/(λ f.sub.y) to second floating point multiplier 716. First floating point multiplier 706 multiplies the integer square values x.sup.2 received from first integer square circuit stage 702 with the parameter received from parameter register 710 and outputs the corresponding stream of x coordinate Fresnel lens values F(x) in accordance with equation (1) above. Similarly, second floating point multiplier 716 multiplies the integer square values y.sup.2 received from second integer square circuit stage 712 with the parameter received from parameter register 710 and outputs the corresponding stream of y coordinate Fresnel lens values F(y) in accordance with equation (2) above.
[0137] In the final stage of the pipeline of logic circuit 700, adder 720 receives the data stream of x coordinate Fresnel lens values F(x) from first floating point multiplier 706 of the x coordinate pipeline and the data stream of y coordinate Fresnel lens values F(y) from second floating point multiplier 716 of the y coordinate pipeline. The x coordinate pipeline and the y coordinate pipeline are synchronized so that the corresponding x and y coordinate Fresnel lens values are output to adder 720 at substantially the same time. Adder 720 adds together the respective values to generate an output data stream of combined Fresnel lens values F(x,y) 750 for combining with hologram pixel values and writing to the pixels P(x,y) at the corresponding coordinates (x,y) as described above.
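Putting the stages together, the whole of logic circuit 700 can be sketched in software (constants c_x and c_y stand in for the parameters in register 710; this is a behavioural model, not the hardware design):

```python
def logic_circuit_700(m, n, c_x, c_y):
    # Add-only integer square stages (702 for x, 712 for y).
    def squares(count):
        a = (-count // 2 - 1) ** 2
        b = (-count // 2) ** 2 - a
        for _ in range(count):
            s = a + b
            yield s
            a, b = s, b + 2
    F_x = [c_x * x2 for x2 in squares(n)]   # first floating point multiplier 706
    F_y = [c_y * y2 for y2 in squares(m)]   # second floating point multiplier 716
    for fy in F_y:                          # raster scan order, row by row
        for fx in F_x:
            yield fx + fy                   # adder 720: F(x, y) = F(x) + F(y)
```

Only one floating point multiplication per axis per sample remains; the squaring itself is done entirely with additions.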
[0138] Accordingly, the logic circuit 700 of
[0139] The logic circuit 700 of
[0140] Writing Fresnel Lens Values of a Pixel Array of a Display Device
[0141] As described herein, a data stream of Fresnel lens values F(x,y) is written as lens data for display on [m×n] pixels of a pixelated display device (e.g. SLM). A coordinate system of the pixels P(x,y) of the [m×n] pixel array is defined, which centres the Fresnel lens at or around the centre of the pixel array. In particular, in an embodiment, the columns of pixels in the [m×n] pixel array are numbered with coordinates from −n/2 to n/2−1, corresponding to the pixel coordinates in the x direction, and the rows of pixels are numbered with coordinates from −m/2 to m/2−1, corresponding to the pixel coordinates in the y direction. The Fresnel lens value F(0, 0) corresponding to the centre of the Fresnel lens (i.e. based on integer/integer square value 0 in the x and y directions) is written to the pixel P(0, 0) at the origin or coordinate (0, 0).
[0142]
[0143] In embodiments, the x coordinate values of the pixels of the pixel array are used as the stream of integer values x that are input to the logic circuit 700 of the embodiment described above, as a sequence comprising an arithmetic progression of integers with a common difference of unity (one). This corresponds to the stream of integer square values x.sup.2 (for x=−5 to 4) generated and output by the circuit stage 600 of the embodiment described above.
[0144] Similarly, in embodiments, the y coordinate values of the pixels of the pixel array are used as the stream of integer values y that are input to the logic circuit 700, as a sequence comprising an arithmetic progression of integers with a common difference of unity (one). This corresponds to the stream of integer square values y.sup.2 (for y=−5 to 4) generated and output by the circuit stage 600 of the embodiment described above.
[0145] In practice, a stream of Fresnel lens values F(x,y) is provided for writing to the pixels to provide the software lens function, wherein each Fresnel lens value is derived from the sum of each x coordinate Fresnel lens value F(x) and the corresponding y coordinate Fresnel lens value F(y).
[0146] As the skilled person will appreciate, in practice the lens data comprising the Fresnel lens values F(x,y) are combined with hologram data as described above. Thus, the display value written to each pixel P(x,y) of the SLM pixel array at coordinate (x,y) comprises the sum of the Fresnel lens value F(x,y) as described herein and the corresponding hologram pixel value.
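The combination of lens data and hologram data described above can be sketched as follows. Wrapping the summed phase into [0, 2π) is an assumption appropriate for a phase-only SLM; the text simply states that the values are summed:

```python
import math

def display_value(lens_phase, hologram_phase):
    # Display value for pixel P(x, y): Fresnel lens value F(x, y) plus the
    # corresponding hologram pixel value, wrapped modulo 2π (assumption).
    return (lens_phase + hologram_phase) % (2 * math.pi)
```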
[0147] Multi-Threaded Integer Squares Calculation
[0148] In the above embodiments, the Fresnel lens values F(x,y) for the pixels P(x,y) of an SLM pixel array are calculated from a single x coordinate pipeline that generates a single stream of x coordinate Fresnel lens values F(x) and a single y coordinate pipeline that generates a single stream of y coordinate Fresnel lens values F(y). However, in practice, the pixel array has a very large number of pixels P(x,y). In consequence, the calculation of a single stream of x coordinate integer square values x.sup.2, and of a corresponding single stream of x coordinate Fresnel lens values F(x), for the SLM pixel array may not be sufficiently fast for video rate streaming. As the skilled person will appreciate, this problem is less significant when calculating the y coordinate Fresnel lens values F(y), since each calculated value is repeated n times in the output stream, allowing more time to calculate the next value. Accordingly, in alternative embodiments, the calculation of the x coordinate integer square values x.sup.2 is performed by multiple logic sub-circuits operating in parallel, and thus in multiple threads k, where k>1, so as to generate the x coordinate Fresnel lens values F(x) more quickly.
[0149] In particular, the data stream of x coordinate Fresnel lens values F(x) is generated by k logic sub-circuits operating concurrently (e.g. arranged in parallel). Each of the k logic sub-circuits generates a corresponding data stream, Sk, comprising a subset of x coordinate Fresnel lens values F(x) of the complete data stream. The k data streams, Sk, of x coordinate Fresnel lens values F(x) may be combined to form a single stream, S, of x coordinate Fresnel lens values F(x).
[0150]
[0151] Each logic sub-circuit or thread of the k threads receives an input sequence of x coordinate integer values comprising a different subset of the x coordinate integer values of the [m×n] pixel array. The first thread receives an input sequence comprising an arithmetic progression of x coordinate integers with a starting value a corresponding to the x coordinate of the first pixel x.sub.1 (i.e. −24) and an increment between integers of k. Each consecutive thread of the k threads shown in
[0152] The output data streams of integer square values x.sup.2 of the k threads shown in
[0153] The calculations performed by the first thread in
TABLE 3

  X       −24    −16     −8      0      8     16
  A      1024    576    256     64      0     64
  B      −448   −320   −192    −64     64    192
  A + B   576    256     64      0     64    256
[0154] Thus, each logic sub-circuit and corresponding thread performs just n/k (i.e. 48/8=6) iterations of the integer square function (calculations of A+B). Since the k threads process their respective subsets of x coordinate integer values at substantially the same time (e.g. in parallel), the time required to generate the complete data stream of x coordinate integer square values is reduced by a factor of k (i.e. 8). This significantly reduces the processing time to generate the complete stream of x coordinate integer square values x.sup.2 needed to generate the x coordinate Fresnel lens values F(x) for all of the [m×n] pixels of the pixel array.
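The multi-threaded scheme above can be sketched in Python, with each logic sub-circuit modelled as a generator (the initial values A=(a−k)² and B=a²−(a−k)², and the per-step increment of 2k², follow the description; the round-robin interleaving and function names are illustrative):

```python
def thread_squares(a, k, iterations):
    # One logic sub-circuit: squares of a, a+k, a+2k, ... using additions only.
    A = (a - k) ** 2              # initial value: square for the preceding x
    B = a * a - A                 # initial difference value
    for _ in range(iterations):
        s = A + B
        yield s
        A, B = s, B + 2 * k * k   # the difference grows by 2k² per step

def multithreaded_squares(n, k):
    # k sub-circuits run concurrently; interleaving their output streams
    # round-robin recovers the complete stream of x coordinate squares.
    threads = [thread_squares(-n // 2 + j, k, n // k) for j in range(k)]
    stream = []
    for _ in range(n // k):
        for t in threads:
            stream.append(next(t))
    return stream
```

For n=48 and k=8 the first thread reproduces the A + B row of Table 3, and the interleaved stream equals the squares of −24 to 23 in order.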
[0155] In embodiments, the calculation of the x coordinate Fresnel lens values F(x) is performed by the k logic sub-circuits as part of the k processing threads. Thus, the calculation of the x coordinate Fresnel lens values F(x) from each subset of x coordinate integer square values x.sup.2 is performed at substantially the same time (e.g. in parallel), and each of the k logic sub-circuits concurrently outputs a corresponding stream, Sk, of Fresnel lens values F(x).
[0156] As the skilled person will appreciate, for embodiments that generate the integer square values in a single thread (i.e. k=1) only a single circuit stage 600 is required to calculate the integer square values for the x coordinate pipeline. However, the processing time for calculating the integer square values x.sup.2, and thus the Fresnel lens values F(x), is proportional to the number of pixel values in the x direction. In the case of large SLM pixel arrays, this processing time may be too long to allow for calculation of Fresnel lens values F(x) at high frame speeds (e.g. video rate). In contrast, embodiments that perform multi-threading (i.e. k>1) can significantly reduce the processing time so as to achieve video rate frame speeds. However, such embodiments require multiple logic sub-circuits 600, one for each thread in the x coordinate processing pipeline. This requirement for additional logic sub-circuits 600 is generally not unduly burdensome in terms of cost, design effort and consumption of die area, particularly when implemented in a PLD (e.g. FPGA) or structured ASIC, in which the duplication of logic stages is a straightforward design programming task.
[0157] Additional Features
[0158] Embodiments refer to an electrically-activated LCOS spatial light modulator by way of example only. The teachings of the present disclosure may equally be implemented on any spatial light modulator capable of displaying a computer-generated hologram in accordance with the present disclosure such as any electrically-activated SLMs, optically-activated SLM, digital micromirror device or microelectromechanical device, for example.
[0159] In some embodiments, the light source is a laser such as a laser diode. In some embodiments, the detector is a photodetector such as a photodiode. In some embodiments, the light receiving surface is a diffuser surface or screen such as a diffuser. The holographic projection system of the present disclosure may be used to provide an improved head-up display (HUD) or head-mounted display. In some embodiments, there is provided a vehicle comprising the holographic projection system installed in the vehicle to provide a HUD. The vehicle may be an automotive vehicle such as a car, truck, van, lorry, motorcycle, train, airplane, boat, or ship.
[0160] The quality of the holographic reconstruction may be affected by the so-called zero order problem, which is a consequence of the diffractive nature of using a pixelated spatial light modulator. Such zero-order light can be regarded as “noise” and includes, for example, specularly reflected light and other unwanted light from the SLM.
[0161] In the example of Fourier holography, this “noise” is focused at the focal point of the Fourier lens, leading to a bright spot at the centre of the holographic reconstruction. The zero order light may simply be blocked out; however, this would mean replacing the bright spot with a dark spot. Some embodiments include an angularly selective filter to remove only the collimated rays of the zero order. Embodiments also include the method of managing the zero-order described in European patent 2,030,072, which is hereby incorporated in its entirety by reference.
[0162] In some embodiments, the size (number of pixels in each direction) of the hologram is equal to the size of the spatial light modulator so that the hologram fills the spatial light modulator. That is, the hologram uses all the pixels of the spatial light modulator. In other embodiments, the hologram is smaller than the spatial light modulator. More specifically, the number of hologram pixels is less than the number of light-modulating pixels available on the spatial light modulator. In this case, the Fresnel lens values F(x,y) are calculated using the integer values/coordinates of [m×n] pixels corresponding to the hologram pixels. In some of these other embodiments, part of the hologram (that is, a continuous subset of the pixels of the hologram) is repeated in the unused pixels. Likewise, since the Fresnel lens values F(x,y) are combined with the hologram pixel values, a corresponding part of the lens data is repeated in the unused pixels. This technique may be referred to as “tiling” wherein the surface area of the spatial light modulator is divided up into a number of “tiles”, each of which represents at least a subset of the hologram and corresponding lens data. Each tile is therefore of a smaller size than the spatial light modulator. In some embodiments, the technique of “tiling” is implemented to increase image quality. Specifically, some embodiments implement the technique of tiling to minimise the size of the image pixels whilst maximising the amount of signal content going into the holographic reconstruction. In some embodiments, the holographic pattern written to the spatial light modulator comprises at least one whole tile (that is, the complete hologram) and at least one fraction of a tile (that is, a continuous subset of pixels of the hologram).
[0163] In embodiments, only the primary replay field is utilised, and the system comprises physical blocks, such as baffles, arranged to restrict the propagation of the higher order replay fields through the system.
[0164] In embodiments, the holographic reconstruction is colour. In some embodiments, an approach known as spatially-separated colours, “SSC”, is used to provide colour holographic reconstruction. In other embodiments, an approach known as frame sequential colour, “FSC”, is used.
[0165] The method of SSC uses three spatially-separated arrays of light-modulating pixels for the three single-colour holograms. An advantage of the SSC method is that the image can be very bright because all three holographic reconstructions may be formed at the same time. However, if, due to space limitations, the three spatially-separated arrays of light-modulating pixels are provided on a common SLM, the quality of each single-colour image is sub-optimal because only a subset of the available light-modulating pixels is used for each colour. Accordingly, a relatively low-resolution colour image is provided.
[0166] The method of FSC can use all pixels of a common spatial light modulator to display the three single-colour holograms in sequence. The single-colour reconstructions are cycled (e.g. red, green, blue, red, green, blue, etc.) fast enough such that a human viewer perceives a polychromatic image from integration of the three single-colour images. An advantage of FSC is that the whole SLM is used for each colour. This means that the quality of the three colour images produced is optimal because all pixels of the SLM are used for each of the colour images. However, a disadvantage of the FSC method is that the brightness of the composite colour image is lower than with the SSC method—by a factor of about 3—because each single-colour illumination event can only occur for one third of the frame time. This drawback could potentially be addressed by overdriving the lasers, or by using more powerful lasers, but this requires more power resulting in higher costs and an increase in the size of the system.
[0167] Examples describe illuminating the SLM with visible light but the skilled person will understand that the light sources and SLM may equally be used to direct infrared or ultraviolet light, for example, as disclosed herein. For example, the skilled person will be aware of techniques for converting infrared and ultraviolet light into visible light for the purpose of providing the information to a user. For example, the present disclosure extends to using phosphors and/or quantum dot technology for this purpose.
[0168] Some embodiments describe 2D holographic reconstructions by way of example only. In other embodiments, the holographic reconstruction is a 3D holographic reconstruction. That is, in some embodiments, each computer-generated hologram forms a 3D holographic reconstruction.
[0169] The methods and processes described herein may be embodied on a computer-readable medium. The term “computer-readable medium” includes a medium arranged to store data temporarily or permanently such as random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory. The term “computer-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions for execution by a machine such that the instructions, when executed by one or more processors, cause the machine to perform any one or more of the methodologies described herein, in whole or in part.
[0170] The term “computer-readable medium” also encompasses cloud-based storage systems. The term “computer-readable medium” includes, but is not limited to, one or more tangible and non-transitory data repositories (e.g., data volumes) in the example form of a solid-state memory chip, an optical disc, a magnetic disc, or any suitable combination thereof. In some example embodiments, the instructions for execution may be communicated by a carrier medium. Examples of such a carrier medium include a transient medium (e.g., a propagating signal that communicates instructions).
[0171] It will be apparent to those skilled in the art that various modifications and variations can be made without departing from the scope of the appended claims. The present disclosure covers all modifications and variations within the scope of the appended claims and their equivalents.