HOLOGRAPHIC DISPLAY SYSTEM AND METHOD
20230143728 · 2023-05-11
Inventors
- Alfred James NEWMAN (London, GB)
- Thomas James DURRANT (London, GB)
- Andrzej KACZOROWSKI (London, GB)
- Darran Francis MILNE (London, GB)
CPC classification
G03H1/2294
PHYSICS
G02B3/0043
PHYSICS
G02B3/0068
PHYSICS
G03H1/02
PHYSICS
Abstract
A holographic display comprises: an illumination source which is at least partially coherent; a plurality of display elements positioned to receive light from the illumination source and spaced apart from each other, each display element comprising a group of at least two sub-elements; and a modulation system associated with each display element and configured to modulate at least a phase of each of the plurality of sub-elements.
Claims
1. A holographic display comprising: an illumination source which is at least partially coherent; a plurality of display elements positioned to receive light from the illumination source and spaced apart from each other, each display element comprising a group of at least two sub-elements; and a modulation system associated with each display element and configured to modulate at least a phase of each of the plurality of sub-elements.
2. A holographic display according to claim 1, further comprising an optical system configured to generate the plurality of display elements by reducing the size of the group of sub-elements within each display element such that the group of sub-elements are spaced closer to each other than they are to sub-elements of an immediately adjacent display element.
3. A holographic display according to claim 2, wherein the optical system comprises an array of optical elements.
4. A holographic display according to claim 2, wherein the optical system has different magnifications in first and second dimensions, and a first magnification in the first dimension is less than a second magnification in the second dimension.
5. A holographic display according to claim 4, wherein the first dimension is substantially horizontal in use, and wherein the second dimension is perpendicular to the first dimension.
6. A holographic display according to claim 2, wherein the optical system comprises an array of optical elements, each optical element comprising first and second lens surfaces, at least one of the first and second lens surfaces having a different radius of curvature in a first plane, defined by the first dimension and a third dimension, than in the second plane, defined by the second dimension and the third dimension.
7. A holographic display according to claim 6, wherein: the first and second lens surfaces are associated with first and second focal lengths respectively in the first plane, and the first magnification is defined by the ratio of first and second focal lengths; and the first and second lens surfaces are associated with third and fourth focal lengths respectively in the second plane, and the second magnification is defined by the ratio of third and fourth focal lengths.
8. A holographic display according to claim 2, wherein the optical system comprises an array of optical elements each comprising: a first lens surface configured to receive light having a first wavelength and light having a second wavelength, different from the first wavelength; and a second lens surface in an optical path with the first lens surface; wherein the first lens surface comprises a first surface portion optically adapted for the first wavelength and a second surface portion optically adapted for the second wavelength.
9. A holographic display according to claim 8, wherein the first surface portion is optically adapted for the first wavelength by having a first radius of curvature and the second surface portion is optically adapted for the second wavelength by having a second radius of curvature.
10. A holographic display according to claim 8, wherein the first lens surface has a first focal point for light having the first wavelength and the second lens surface has a second focal point for light having the first wavelength and the first and second focal points are coincident.
11. A holographic display according to claim 2, wherein: the optical system is configured to converge light passing through the optical system towards a viewing position; the optical system comprises an array of optical elements, each optical element comprising a first lens surface with a first optical axis and a second lens surface with a second optical axis; and the first optical axis is offset from the second optical axis.
12. A holographic display according to claim 11, wherein an optical element positioned closer to an edge of the display has an offset that is greater than an offset for an optical element positioned closer to a center of the display.
13. A holographic display according to claim 12, wherein each optical element comprises a first lens surface and a second lens surface spaced apart from the first lens surface along an optical path through the optical element, and wherein the first lens surfaces are spaced apart along the array at a first pitch and the second lens surfaces are spaced along the array at a second pitch, the second pitch being smaller than the first pitch.
14. A holographic display according to claim 1, wherein each display element consists of a two-dimensional group of sub-elements having dimensions n by m, where n and m are integers, and wherein one of: n is equal to 2, m is equal to 1 and the modulation system is configured to modulate a phase and an amplitude of each sub-element; and n is equal to 2, m is equal to 2 and the modulation system is configured to modulate a phase of each sub-element.
15. A holographic display according to claim 1, comprising a convergence system arranged to direct an output of the holographic display towards a viewing position.
16. A holographic display according to claim 1, comprising a mask configured to limit a size of the sub-elements.
17. An apparatus comprising: a holographic display according to any preceding claim; and a controller for controlling the modulation system such that each display element has a first amplitude and phase when viewed from a first position and a second amplitude and phase when viewed from a second position.
18. An apparatus according to claim 17, further comprising an eye-locating system configured to determine the first position and the second position.
19. A method of displaying a computer-generated hologram, the method comprising: controlling a phase of a plurality of groups of sub-elements such that the output of sub-elements within each group combines to produce a respective first amplitude and a first phase at a first viewing position and a respective second amplitude and a second phase at a second viewing position.
20. A method according to claim 19, further comprising: determining the first viewing position and the second viewing position based on input received from an eye-locating system.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0068] In SLM-based displays, a complex electric field is normally calculated somewhere in the region of a viewer's pupil. However, the complex electric field can be calculated for any plane, such as a screen plane. Away from the pupil plane, most of the image information is in amplitude rather than phase, but phase must still be controlled to manage defocus. This is shown diagrammatically in
[0069] Assuming that the field at each plane is sampled on a grid of points, each of those points can be considered as a point source with a given phase and amplitude. Taking the pupil plane 102 as the limiting aperture, the total number of points needed to describe the field is independent of the location of the plane. For a square pupil plane of width w, a field of view of horizontal angle θ.sub.x and vertical angle θ.sub.y can be displayed by sampling with a grid of points having approximate dimensions of wθ.sub.x/λ by wθ.sub.y/λ.
[0070] If the viewer's eye position is known, for example by tracking the position of a user's eye or positioning the screen at a known position relative to the eye, a CGH can be calculated which displays correctly at the pupil plane providing that sufficient point sources are available to generate the image. Eye-tracking could be managed in any suitable way, for example by using a camera system, such as might be used for biometric face recognition, to track a position of a user's eye. The camera system could, for example, use structured light, multiple cameras, or time of flight measurement to return depth information and locate a viewer's eye in 3D space and hence determine the location of the pupil plane.
[0071] In this way, a binocular display could be made by ensuring that the pupil plane is sufficiently large to include both a viewer's pupils. Rather than the two displays of a binocular headset, a single display can be used for binocular viewing, with each eye perceiving a different image. Manufacturing such a binocular display is challenging because, for a typical field of view, the number of point emitters required to give a pupil plane large enough to include both of a viewer's eyes is extremely large (of the order of billions of point sources).
[0072] CGH displays can display colour information by time-division multiplexing Red, Green and Blue components and using persistence of vision so that these are perceived as a combined colour image by a viewer. From the discussion above, the number of points required for a given size of the pupil plane in such a system will vary for each of the red, green and blue images because of the different wavelengths (the presence of λ in the expressions wθ.sub.x/λ and wθ.sub.y/λ). It is useful to have the same number of points for each colour. In that case, matching the green wavelength to the desired pupil plane size sets the mid-point, with the red and blue image planes then being slightly larger and slightly smaller than the green image plane, respectively.
[0073] For a single eye display, a pupil plane might be 10 mm by 10 mm, so that there is some room for movement of the eye within that plane. This could allow for some inaccuracy in the positioning of the eye. A typical green wavelength used in displays is 520 nm and a field of view might be 0.48 by 0.3 radians, which is similar to viewing a 16:10, 33 cm (13 inch) display at a distance of 60 cm. The resulting grid would then be (10 mm×0.48)/520 nm=9,230 points wide by (10 mm×0.3)/520 nm=5,769 points high. The total number of point emitters required is therefore around 53 million. Scaling to larger displays having a pupil plane sufficient to cover both eyes requires a significantly larger number of point emitters: a pupil plane of 50 mm×100 mm would require around 2.7 billion point emitters. While the number of point emitters can be reduced by limiting the field of view, the resulting hologram then becomes very small.
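The grid-size arithmetic in paragraph [0073] can be sketched as follows; this is an illustrative check of the numbers quoted above, not part of the patent itself.

```python
# Point-emitter grid needed to sample a pupil plane: w * theta / lambda per axis.

def grid_points(pupil_width_m, pupil_height_m, fov_x_rad, fov_y_rad, wavelength_m):
    """Return (nx, ny, total) grid dimensions for the given pupil plane and field of view."""
    nx = pupil_width_m * fov_x_rad / wavelength_m
    ny = pupil_height_m * fov_y_rad / wavelength_m
    return nx, ny, nx * ny

# Single-eye display: 10 mm square pupil plane, 0.48 x 0.3 rad field of view, 520 nm light.
nx, ny, total = grid_points(10e-3, 10e-3, 0.48, 0.3, 520e-9)
print(int(nx), int(ny), total / 1e6)   # ~9,230 x 5,769 points, ~53 million emitters

# Binocular pupil plane of 100 mm x 50 mm (width x height assumed):
_, _, total_bino = grid_points(100e-3, 50e-3, 0.48, 0.3, 520e-9)
print(total_bino / 1e9)                # ~2.7 billion emitters
```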
[0074] It would be useful to be able to display a binocular hologram with a smaller number of point emitters.
[0075] As will be described in more detail below, embodiments control display elements that comprise groups of sub-elements within a display so that each display element is perceived as a point source with different amplitude and phase from different viewing positions. The groups of sub-elements are small within the image plane of the display element, with a larger spacing between display elements. The result is a sparsely populated image plane with point sources spaced apart from each other by the overall spacing between the display elements. Provided that each display element has at least four degrees of freedom (the number of phase and/or amplitude variables that can be controlled), a single display can, in effect, be driven to create two smaller pupil planes directed towards the eyes of a viewer. As the size of the group of sub-elements and/or the number of degrees of freedom increases, it also becomes possible to support multiple viewers of the same display. For example, an eight-degree-of-freedom display could produce four directed image planes and thus support two viewers (four eyes).
[0076] One way to produce display elements used in examples is to reimage an array of substantially equally spaced sub-elements to form the display elements. The reimaging of groups of sub-elements to a smaller size is shown diagrammatically in
[0077] Array 202 is reimaged as array 206 of display elements comprising groups 208 of sub-elements of reduced size but at the same spacing between the centres of the groups as in the original array 202. Put another way, the re-imaged array 206 comprises sparse clusters of pixels where the pitch between clusters is wider than the original pitch, but the pitch between re-imaged pixels in a cluster is smaller than the original pitch. Through this reimaging, it is possible to obtain the benefits of a wider effective field of view without increasing the overall pixel count because individual sub-elements within the display element can be controlled to appear as a point emitter with different amplitude and phase when viewed from different positions.
[0078] Example constructions of a display in which groups of pixels are reimaged as sparsely populated point sources within a wider image field will now be described.
[0079] The coherent illumination source 310 can have any suitable form. In this example it is a pupil-replicating holographic optical element (HOE) used in holographic waveguides. The coherent illumination source 310 is controlled to emit Red, Green or Blue light using time division multiplexing. Other examples may use other backlights to provide at least partially coherent light.
[0080] The example of
[0081] Amplitude-modulating element 312 and phase-modulating element 314 are both Liquid Crystal Display (LCD) layers which are stacked and aligned so that their constituent elements lie along the same optical path. Each consists of a backplane with transparent electrodes matching the underlying pixel pattern, a ground plane, and one or more waveplate/polarising films. Amplitude-modulating LCDs are well known, and a phase-modulating LCD can be manufactured by altering the polarisation elements. One example of how to manufacture a phase-modulating LCD is discussed in the paper “Phase-only modulation with a twisted nematic liquid crystal display by means of equi-azimuth polarization states”, V. Duran, J. Lancis, E. Tajahuerce and M. Fernandez-Alonso, Optics Express, Vol. 14, No. 12, pp. 5607-5616, 12 Jun. 2006.
[0082] Optical system 316 is a microlens layer in this embodiment. Microlens arrays can be manufactured by a lithographic process to create a stamp and are known for other purposes, such as providing a greater effective fill-factor on digital image sensors. Here the microlens array comprises a pair of positive lenses for each group of sub-elements to be re-imaged. The focal lengths of these lenses are f.sub.1 and f.sub.2, respectively, producing a reduction in size by a factor of f.sub.1/f.sub.2. The reduction in size is 10× in this example; other reduction factors can be used in other examples. To provide the required spacing between display elements, each microlens has an optical axis passing through a geometrical centre of the group of sub-elements. One such optical axis 318 is depicted as a dashed line in
[0083] Other examples may use optical systems other than a microlens array. This could include diffraction gratings to achieve the desired focusing, or a blocking mask, such as a blocking mask with a small-diameter aperture positioned at each corner of a display element. A blocking mask may be easier to manufacture than a microlens array, but a blocking mask will have lower efficiency because much of the light from the coherent illumination source is blocked.
[0084] Also visible in
[0085] The schematic depiction in
[0086]
[0087] In examples where the screen is large compared to the expected viewing area, each group of imaging elements may have a fixed additional phase gradient to direct the emission cone of the group towards the nominal viewing area. The phase gradient can be provided by including an additional wedge profile on each microlens in the optical system 316, similar to a Fresnel lens, or by including a spherical term, also referred to as a spherical phase profile, on the coherent illumination source 310 that converges light towards the nominal viewing position. A spherical term imparts a phase delay which is proportional to the square of the radius from the centre of the screen, the same type of phase profile provided by a spherical lens. For displays where the expected viewing area is large compared to the screen size, the emission cone of each group of imaging elements may be sufficiently large that an element imparting an additional phase gradient is not required.
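The spherical phase term described above can be sketched with the standard paraxial thin-lens profile; this is a minimal illustration under that assumption, and the function name and the use of the viewing distance as the convergence distance are illustrative, not from the patent.

```python
import math

def spherical_phase(r_m, converge_distance_m, wavelength_m):
    """Paraxial spherical phase term (radians) at radius r from the screen centre.

    The delay is proportional to r**2, as stated in the text: the profile a thin
    spherical lens would impart to converge a plane wave towards a point at
    converge_distance_m (here, the nominal viewing position)."""
    k = 2 * math.pi / wavelength_m
    return -k * r_m ** 2 / (2 * converge_distance_m)
```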
[0088] Some examples may include an additional non-coherent illumination source, such as a Light Emitting Diode (LED) which can be operated as a conventional screen in conjunction with the amplitude modulating element. In such examples, the display may function as both a conventional, non-holographic display and a holographic display.
[0089] Another example display construction is depicted in
[0090] In use, the display of
[0091] The display of
[0092] In use, the processing system 522 receives input image data via the input 524 and eye position data from the eye tracking system 526. Using the input image data and the eye position data, the processing system calculates the required modulation of the phase modulation element (and the amplitude modulation element, if present) to create an image field representing the image at the determined pupil planes positioned at the viewer's eyes.
[0093] Operation of the display to provide different phase and amplitudes to two different viewing positions will now be described. For clarity, the case of a 2×1 group of sub-elements, where each sub-element can be modulated in amplitude and phase will be described. This provides four degrees of freedom (two phase and two amplitude variables) to enable the group of sub-elements to be viewed with a first phase and amplitude from a first position and a second phase and amplitude from a second position.
[0094] As explained above with reference to
[0095] Each sub-element, or emission area, 601, 602 has an associated complex amplitude U.sub.1 and U.sub.2. The amplitude and phase of each is controlled so that the display element appears as a point source with a first phase and amplitude when viewed from a first position of a pupil plane, and simultaneously as a point source with a second phase and amplitude when viewed from a second position of a pupil plane, the first and second positions of the pupil plane corresponding to the determined positions of a viewer's eyes. The pitch between the reduced-size sub-elements output from the optical system is 2a, measured from the centre line 612 of the overall image to the centre of each of the imaging elements 601, 602. The dimension a is illustrated by arrows 604 in
[0096] Together, these dimensions a, b, c and d control the properties of the display as follows. The pitch of the emission areas, 2a (depicted by arrows 604), controls how rapidly the apparent value of the group can change with viewing position. For this example, the subtended angle between maximum and minimum possible apparent intensity is λ/4a, and so the display operates most effectively when the inter-pupillary distance (IPD) of the viewer subtends an angle of λ/4a, i.e. at a distance z=4a*IPD/λ. The efficiency with which content can be displayed reduces away from this position. At 0.5z it is no longer possible to display different scenes to each eye. Thus, values of a might be different for a relatively close display, such as might be used in a headset, than for a display intended to be viewed from further away, such as might be useful for a portable computing device.
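The optimal-viewing-distance relation above, z=4a*IPD/λ, can be evaluated numerically; the values of a, IPD and wavelength below are illustrative assumptions, not figures from the patent.

```python
def optimal_viewing_distance(a_m, ipd_m, wavelength_m):
    """Distance z at which the viewer's IPD subtends the angle lambda/(4a)."""
    return 4 * a_m * ipd_m / wavelength_m

# Illustrative: a = 1.3 um reimaged half-pitch, 60 mm IPD, 520 nm green light.
print(optimal_viewing_distance(1.3e-6, 60e-3, 520e-9))  # ~0.6 m
```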
[0097] The pitch of the group, b (depicted by arrows 606), determines the angular size of the pupil, the angular size of the pupil being given by λ/b. Thus a lower value of b increases pupil size, but requires a greater number of display elements to achieve the same field of view.
[0098] The dimensions of the emission areas, c and d (depicted by arrows 608 and 610, respectively), determine the emission cone of the group of pixels, with nulls at angles θ.sub.x=λ/c and θ.sub.y=λ/d. Image quality reduces as these nulls are approached, so maintaining acceptable image quality requires operating within a reduced area that keeps sufficient angular distance from the nulls. Reducing c and d, so that the group of pixels is further reduced in size, increases the emission cone angle of the group, but at the cost of reduced optical efficiency.
[0099] The interaction of these constraints on the viewable image is depicted in
[0100] From this discussion, the benefit of the mask 320, included in some examples, can also be understood. The distance between sub-element centres is determined by the IPD and viewing distance, z, from the equations IPD/z=θ_IPD=λ/4a. Without a mask 320, c=2a, so θ.sub.x=2×θ_IPD, giving an addressable viewing width which is 2×IPD. To make the addressable viewing width wider, it is necessary to have c<2a, which can be provided by using a mask 320 to further reduce the size of the sub-elements.
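The mask trade-off in paragraph [0100] follows from θ.sub.x=λ/c and θ_IPD=λ/4a, so the addressable viewing width relative to the IPD is 4a/c. A small sketch, with illustrative (assumed) values of a and IPD:

```python
def addressable_viewing_width(c_m, a_m, ipd_m):
    """Addressable width at the viewer: (theta_x / theta_IPD) * IPD = (4a/c) * IPD."""
    return (4 * a_m / c_m) * ipd_m

ipd = 60e-3   # assumed 60 mm inter-pupillary distance
a = 1.3e-6    # illustrative sub-element half-pitch
print(addressable_viewing_width(2 * a, a, ipd))  # no mask, c = 2a: width = 2 x IPD
print(addressable_viewing_width(a, a, ipd))      # mask halving c:  width = 4 x IPD
```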
[0101] In use, the group of sub-elements is controlled according to the principles depicted in
[0102] Solutions to these equations may be calculated analytically, by exploiting the fact that Maxwell's equations are linear (electric fields are superposable) together with known models of how light propagates from the aperture of each imaging element, such as the Fraunhofer or Fresnel diffraction equations. In other examples, the equations may be solved numerically, for example using iterative methods.
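Because the fields superpose linearly, the 2×1 case reduces to a 2×2 linear system in U.sub.1 and U.sub.2. The sketch below is a hypothetical solver under a simple Fraunhofer-style phase model (emitters at ±a about the group centre); the function name, phase model and all numeric values are illustrative assumptions, not the patent's solver.

```python
import cmath

def solve_group(a_m, wavelength_m, theta1, theta2, target1, target2):
    """Choose complex amplitudes U1, U2 for emitters at +/-a so their far-field
    sum equals target1 at viewing angle theta1 and target2 at theta2."""
    k = 2 * cmath.pi / wavelength_m

    def row(theta):
        # Relative phase picked up by each emitter towards angle theta.
        return (cmath.exp(1j * k * a_m * cmath.sin(theta)),
                cmath.exp(-1j * k * a_m * cmath.sin(theta)))

    m11, m12 = row(theta1)
    m21, m22 = row(theta2)
    det = m11 * m22 - m12 * m21   # non-zero when the two angles are resolvable
    u1 = (target1 * m22 - target2 * m12) / det
    u2 = (m11 * target2 - m21 * target1) / det
    return u1, u2

# Bright (amplitude 1) towards one eye, dark towards the other:
u1, u2 = solve_group(1.3e-6, 520e-9, 0.01, -0.01, 1 + 0j, 0 + 0j)
```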
[0103] While this example has discussed the control of amplitude and phase of a 2×1 group of sub-elements, the required four degrees of freedom can also be provided by a 2×2 group of sub-elements which are modulated by phase only.
[0104] While this example has discussed control in which amplitude and phase are independent (in other words, there are two degrees of freedom for each sub-element), other examples may control phase and amplitude with one degree of freedom, without necessarily holding either phase or amplitude constant. For example, the possible values of U.sub.1 and U.sub.2 may trace a line in the Argand diagram, with the one degree of freedom defining the position on that line. In that case, the required four degrees of freedom may be provided by a 2×2 group of sub-elements.
[0105] An overall method of controlling the display is depicted in
[0106] In some examples, blocks 1102 and 1104 may be carried out by a processor of the display. In other examples, blocks 1102 and 1104 may be carried out elsewhere, for example by a processing system of an attached computing system.
[0107]
[0108]
[0109] With reference to the overall geometry of
[0110] As shown, the first lens surface 1228 has a first curvature (defined by a first radius of curvature) in this first plane and the second lens surface 1230 has a second curvature (defined by a second radius of curvature) in the first plane. In this example, the first and second curvatures are different, which results in different focal lengths for each lens surface. The first lens surface 1228 has a first focal length f.sub.x1 in the first plane and the second lens surface 1230 has a second focal length f.sub.x2 in the first plane.
[0111] The magnification, M.sub.1, along the first axis/dimension 1220 (referred to as a “first magnification”) is given by the ratio of the first focal length to the second focal length, so M.sub.1=f.sub.x1/f.sub.x2. Controlling the first radius of curvature, the second radius of curvature and therefore the first and second focal lengths in the first plane therefore controls the magnification in the first dimension.
[0112]
[0113] The magnification, M.sub.2, along the second axis/dimension 1222 (referred to as a “second magnification”) is given by the ratio of the third focal length to the fourth focal length, so M.sub.2=f.sub.y1/f.sub.y2. Controlling the third radius of curvature, the fourth radius of curvature and therefore the third and fourth focal lengths in the second plane therefore controls the magnification in the second dimension.
[0114] Generally, the magnification in the first dimension is constrained based on the angle subtended between the pupils of an observer, and therefore the inter-pupillary distance (IPD), as shown in
[0115] In contrast, the magnification along the second axis/dimension 1222 is not constrained by the inter-pupillary distance (IPD), so may be different to the magnification along the first axis 1220. Accordingly, the magnification along the second axis 1222 can be increased to provide an increased range of viewing positions along the second axis 1222. The second magnification therefore controls the vertical viewing angle depicted by angle 710 in
[0116] The following discussion sets example limits on the first and second magnifications. As discussed above, the following derivation assumes that the eyes of an observer are horizontal along the first axis 1220 (x-axis).
[0117] It is desirable for the separation of the centres (measured along the first axis) of the reimaged sub-pixels to be such that it is possible for light from the two subpixels to interfere predominantly constructively at one eye and destructively at the other eye.
[0118] Accordingly, x.sub.reimaged=x.sub.subpixel/M.sub.1, where x.sub.subpixel is the distance between subpixel centres along the first axis 1220 (and corresponds to 2*a from
[0119] This sets the condition that:
x.sub.reimaged˜viewing distance*wavelength/(2*IPD). [1]
[0120] Where the viewing distance is the distance to the observer measured along the third axis 1224, and wavelength is the wavelength of the light.
[0121] It will be appreciated that this condition does not need to be exactly met, so x.sub.reimaged may be approximately 75%-150% of this ideal value, and still generate an image of acceptable quality. This means the system can be designed based on nominal/typical values of IPD and viewing distance.
[0122] In addition, there is a further condition that the separation between groups of subpixels, x.sub.pixel, from adjacent display elements, is set by the required “eyebox” size along the first axis 1220 (i.e. its width). The “eyebox” is the region in the pupil plane (normal to the pupillary axis) in which the pupil should be contained for the user to view an acceptable image. This condition requires that:
x.sub.pixel=viewing distance*wavelength/eyebox_width. [2]
[0123] Combining equations [1] and [2] gives:
x.sub.reimaged˜x.sub.pixel*eyebox_width/(2*IPD).
[0124] Which means that:
M.sub.1˜2*IPD*x.sub.subpixel/(x.sub.pixel*eyebox_width).
[0125] Typically, x.sub.subpixel=x.sub.pixel/2, so M.sub.1˜IPD/eyebox_width. IPD is typically 60 mm, and a required eyebox size may be in the range 4-20 mm, so M.sub.1 is likely to be in the range 3-15.
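The M.sub.1 derivation above can be checked numerically; the pitch values below (x.sub.subpixel=x.sub.pixel/2) and the 60 mm IPD are the nominal figures quoted in the text.

```python
def first_magnification(ipd_m, eyebox_width_m, x_subpixel_m, x_pixel_m):
    """M1 = 2 * IPD * x_subpixel / (x_pixel * eyebox_width).

    Reduces to IPD / eyebox_width when x_subpixel = x_pixel / 2."""
    return 2 * ipd_m * x_subpixel_m / (x_pixel_m * eyebox_width_m)

ipd = 60e-3
for eyebox in (4e-3, 10e-3, 20e-3):
    # With x_subpixel = x_pixel / 2 the pitches cancel, leaving IPD / eyebox_width.
    print(eyebox, first_magnification(ipd, eyebox, 15e-6, 30e-6))
```

For the quoted eyebox range of 4-20 mm this gives M.sub.1 between 15 and 3, matching the range of 3-15 stated above.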
[0126] In the second dimension 1222 (y-axis), it is typical that y.sub.pixel=x.sub.pixel (i.e. it is desirable to have an eyebox that has a 1:1 aspect ratio). Also, the height of the sub-pixel is typically a large fraction of y.sub.pixel. The two central nulls of the emission cone from a group of subpixels in the second dimension 1222 are separated at the viewer by a distance of:
y.sub.distance=M.sub.2*viewing_distance*wavelength/subpixel_height˜M.sub.2*viewing_distance*wavelength/x.sub.pixel˜M.sub.2*eyebox_width˜M.sub.2*IPD/M.sub.1.
[0127] The ‘addressable viewing area’ may be taken to be approximately half this height, i.e. M.sub.2*IPD/(2*M.sub.1). If M.sub.1=M.sub.2 then the height of the addressable viewing area is ˜30 mm, which is too small to be easily usable. As discussed above, it is preferable to have M.sub.2>M.sub.1, because there are not the same constraints on M.sub.2 as on M.sub.1.
[0128] The practical upper limit for how large M.sub.2 can be set is determined by the size of the pixels. It was assumed that y.sub.reimaged=y.sub.subpixel/M.sub.2, but in practice the system is diffraction limited, and y.sub.reimaged cannot be smaller than the wavelength of the light divided by the numerical aperture (NA) of the system. A typical NA is <0.5 and wavelength ~0.5 μm, so y.sub.reimaged>1 μm. For a typical system (M.sub.1=6, implying a 10 mm eyebox, 600 mm viewing distance), y.sub.subpixel=30 μm, so in this case M.sub.2≤30 and M.sub.2/M.sub.1≤5.
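The diffraction-limit bound above can be sketched as follows, using the typical values quoted in the text (the y.sub.reimaged ≥ wavelength/NA approximation is the one stated above):

```python
def second_magnification_limit(y_subpixel_m, wavelength_m, na):
    """Upper bound on M2: y_reimaged cannot shrink below ~wavelength/NA,
    so M2 <= y_subpixel / (wavelength / NA)."""
    y_reimaged_min = wavelength_m / na
    return y_subpixel_m / y_reimaged_min

m1 = 6                                      # implies a 10 mm eyebox at 600 mm
m2_max = second_magnification_limit(30e-6, 0.5e-6, 0.5)
print(m2_max, m2_max / m1)                  # ~30, so M2/M1 up to ~5
```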
[0129]
[0130] The optical system 1816 of
[0131] This offset means that a first pitch 1800 (p.sub.1) between adjacent first lens surfaces 1828 (of adjacent optical elements 1818) is larger than a second pitch 1802 (p.sub.2) between adjacent second lens surfaces 1830 (of adjacent optical elements 1818). Thus adjacent second lens surfaces 1830 are closer together than corresponding adjacent first lens surfaces. In an example, the ratio of the first pitch to the second pitch is between about 1.000001 and about 1.001; put another way, the first pitch differs from the second pitch by between 1 part in 1,000 and 1 part in 1,000,000. In another example, the ratio of the first pitch to the second pitch is between about 1.00001 and about 1.0001; put another way, the first pitch differs from the second pitch by between 1 part in 10,000 and 1 part in 100,000. In some examples, the second pitch 1802 depends on the focal length of the second lens surface 1830.
[0132] For optical elements 1818 towards the outer edges of the optical system/display, the offset may be greater than for optical elements 1818 towards the center of the optical system/display, to ensure that the convergence is greater towards the edge than at the center. Accordingly, the offset may be based on the distance of the optical element from the center of the display and may be based on the size (width and/or height) of the optical system 1816.
[0133] In an example, the offset 1806 (x.sub.offset) measured along the first axis 1220 is given by x.sub.offset=x*f.sub.2x/viewing distance, where the viewing distance is the distance to the viewer measured along the third axis 1224 and f.sub.2x is the focal length of the second lens surface in the first plane.
[0134] If the distance from the center of the central optical element of the array to the center of the nth optical element is x=n*p.sub.1, then p.sub.2=(x−x.sub.offset)/n=p.sub.1*(1−(f.sub.2x/viewing_distance)).
[0135] Typically, f.sub.2x may be of order 100 μm, and the viewing distance is of order 600 mm, so the difference in pitch may be smaller than 1 part in 1000. As the total number of lenses may be >1000 however, x.sub.offset at the edge of the screen may be a significant fraction of the optical element's width.
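The offset and pitch relations above can be evaluated with the order-of-magnitude values quoted in the text; the first-surface pitch p.sub.1 below is an illustrative assumption.

```python
def lens_offset(x_m, f2x_m, viewing_distance_m):
    """x_offset = x * f_2x / viewing_distance for an element at distance x from centre."""
    return x_m * f2x_m / viewing_distance_m

def second_pitch(p1_m, f2x_m, viewing_distance_m):
    """p2 = p1 * (1 - f_2x / viewing_distance)."""
    return p1_m * (1 - f2x_m / viewing_distance_m)

p1, f2x, viewing = 100e-6, 100e-6, 600e-3   # p1 assumed; f_2x and distance as quoted
fractional = (p1 - second_pitch(p1, f2x, viewing)) / p1
print(fractional)                            # 1/6000: smaller than 1 part in 1000
# For the 1000th lens from centre, x = 1000 * p1 = 100 mm:
print(lens_offset(1000 * p1, f2x, viewing))  # ~17 um, a significant fraction of p1
```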
[0136] Although this analysis is shown for the first dimension 1220, the same principles can be applied for the second dimension 1222. As outlined above, M.sub.2 may be bigger than M.sub.1, meaning that the fractional difference in pitch may be smaller in the first dimension than in the second dimension.
[0137]
[0138] Each optical element 2018 has a first lens surface and a second lens surface 2030 spaced apart from the first lens surface in a direction along an optical axis of the optical element. The first lens surface of this example comprises two or more surface portions, each optically adapted for a different specific wavelength. In this example, the first lens surface comprises a first surface portion 2000 optically adapted for light having a first wavelength λ.sub.1, a second surface portion 2002 optically adapted for light having a second wavelength λ.sub.2 and a third surface portion 2004 optically adapted for light having a third wavelength λ.sub.3. In this particular example, the light having the first wavelength is emitted by a first emitter 2006, the light having the second wavelength is emitted by a second emitter 2008, and the light having the third wavelength is emitted by a third emitter 2010. Accordingly, because of the spatial relationship between the emitters and the optical element 2018, the light of each wavelength is incident upon a particular portion of the first lens surface. Thus, the light incident upon each surface portion is predominantly light of a particular wavelength. To compensate for the wavelength-dependent effects of the optical element 2018 (such as a wavelength-dependent refractive index), the surface portions can be adapted for each wavelength so that the light can be converged towards a particular point 2012 in space close to the observer's eyes. As explained in more detail below, these wavelength-dependent effects may be more prevalent for highly dispersive materials, such as a material having a high refractive index. High refractive index materials may be needed when the optical system 1816 is bonded to a screen with an optically clear adhesive.
[0139] In this example, the surface portions can be optically adapted by having a surface curvature suitable for the dominant wavelength of light incident upon the surface portion. For example, the first surface portion 2000 is optically adapted for the first wavelength by having a first surface curvature, the second surface portion 2002 is optically adapted for the second wavelength by having a second surface curvature, and the third surface portion is optically adapted for the third wavelength by having a third surface curvature, where the first, second and third surface curvatures are different. Each surface curvature can be defined by a radius of curvature, for example.
[0140] As described above, a focal length in a particular plane is based on the surface curvature in that plane. Accordingly, the first lens surface (or the first surface portion 2000) has a first focal point for light having the first wavelength, and the second lens surface 2030 has a second focal point for light having the first wavelength. In some examples, the first and second focal points for the light having the first wavelength are coincident. This may improve the overall image quality, by improving focus, for example. Similarly, the first lens surface (or the second surface portion 2002) has a first focal point for light having the second wavelength, the second lens surface 2030 has a second focal point for light having the second wavelength, and the first and second focal points for the light having the second wavelength are coincident. Likewise, the first lens surface (or the third surface portion 2004) has a first focal point for light having the third wavelength, the second lens surface 2030 has a second focal point for light having the third wavelength, and the first and second focal points for the light having the third wavelength are coincident.
[0141] In an example, each surface portion may have a spherical or toroidal profile, with a first radius of curvature r.sub.x in a first plane and a second radius of curvature r.sub.y in a second plane. If the surface portion has a spherical profile, then r.sub.x=r.sub.y. A surface with such a profile causes rays to come to a focus at a distance r/(n.sub.lens−n.sub.incident), where n.sub.lens is the refractive index of the lens material and n.sub.incident is the refractive index of the surrounding material (such as air or an optically clear adhesive). For air, n.sub.incident=1. As mentioned, because n varies as a function of wavelength, there is a focal length shift for light of different wavelengths. This can be compensated by choosing a different radius of curvature in different regions of the lens to offset the change in refractive index, i.e. r.sub.x(wavelength)=f.sub.1x*(n.sub.lens(wavelength)−n.sub.incident(wavelength)), where f.sub.1x is the focal length of the surface portion in the first plane and r.sub.x and n are both functions of wavelength. A similar equation applies in the second plane: r.sub.y(wavelength)=f.sub.1y*(n.sub.lens(wavelength)−n.sub.incident(wavelength)).
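The relation above can be sketched numerically. The following Python snippet is a minimal illustration, not part of the patent: the function names and sample values are assumptions for demonstration. It shows how a per-wavelength radius of curvature makes each surface portion focus its own wavelength at the same target distance.

```python
def radius_for_focal_length(f, n_lens, n_incident=1.0):
    """Radius of curvature giving focal length f for a single
    refracting surface: r = f * (n_lens - n_incident)."""
    return f * (n_lens - n_incident)

def focal_length(r, n_lens, n_incident=1.0):
    """Inverse relation: f = r / (n_lens - n_incident)."""
    return r / (n_lens - n_incident)

# Illustrative dispersion: a lens material whose refractive index
# differs slightly between red and blue light (values arbitrary).
f_target = 10.0            # desired common focal length (arbitrary units)
n_red, n_blue = 1.694, 1.725

r_red = radius_for_focal_length(f_target, n_red)
r_blue = radius_for_focal_length(f_target, n_blue)

# With its own radius, each surface portion focuses its own
# wavelength at the same point f_target.
assert abs(focal_length(r_red, n_red) - f_target) < 1e-9
assert abs(focal_length(r_blue, n_blue) - f_target) < 1e-9
```

The same calculation can be applied independently in the two planes (r.sub.x and r.sub.y) for a toroidal profile.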
[0142] As mentioned, this is particularly important if the array is mounted using optically clear adhesive (n.sub.incident˜1.5) because n.sub.lens must then be higher (typically ˜1.7), and higher index materials are typically more dispersive (i.e. the refractive index changes more rapidly with wavelength). For example, the material N-SF15 has n(635 nm)=1.694 and n(450 nm)=1.725, meaning the difference in the radii of curvature for the red and blue surface portions (i.e. the first and third surface portions) is over 4%.
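The quoted figure follows directly from the r=f*(n.sub.lens−n.sub.incident) relation of paragraph [0141]. A short sketch of the N-SF15 arithmetic, assuming mounting in air (n.sub.incident=1):

```python
# N-SF15 indices quoted above (red at 635 nm, blue at 450 nm).
n_red, n_blue = 1.694, 1.725
n_incident = 1.0  # assumption: surrounding medium is air

# For a common focal length f, r scales with (n_lens - n_incident),
# so the fractional difference in radii is independent of f.
frac_diff = (n_blue - n_incident) / (n_red - n_incident) - 1.0
print(f"{frac_diff:.1%}")  # prints "4.5%", i.e. "over 4%"
```

Repeating the calculation with an adhesive as the surrounding medium (n.sub.incident≈1.5) gives a fractional difference of roughly 16%, illustrating the paragraph's point that dispersion compensation matters even more with adhesive mounting.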
[0143] As mentioned, an optically clear adhesive may be used to mount the optical systems described above onto a display panel. This can make the holographic display easier to manufacture while also improving its physical robustness. To compensate for the adhesive, the optical system must be made of a material with a greater refractive index than the adhesive. For example, the refractive index of the material in the optical system (such as the material of the optical elements) is typically about 1.7, whereas the refractive index of the adhesive is about 1.5, to achieve the required refraction at the boundary. Because the high index material of the optical system is likely to have a higher dispersion, the optically clear adhesive may be used in conjunction with an optical system of the kind described above, whose surface portions compensate for that dispersion.
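The need for a higher index material follows from paragraph [0141]: the power of a single surface scales with (n.sub.lens−n.sub.incident). An illustrative comparison, using the approximate indices quoted in this document (the fixed radius is an arbitrary assumption):

```python
def focal_length(r, n_lens, n_incident):
    # f = r / (n_lens - n_incident) for a single refracting surface
    return r / (n_lens - n_incident)

r = 1.0  # fixed radius of curvature (arbitrary units)

# A moderate-index lens (n ~ 1.5) works in air...
f_air = focal_length(r, n_lens=1.5, n_incident=1.0)       # 2.0

# ...but immersed in adhesive (n ~ 1.41) its power collapses:
f_adh_low = focal_length(r, n_lens=1.5, n_incident=1.41)  # ~11.1

# A high-index lens (n ~ 1.7) restores much of the power:
f_adh_high = focal_length(r, n_lens=1.7, n_incident=1.41) # ~3.45
```

The focal length of the unchanged lens grows by more than a factor of five once immersed, whereas the higher index material keeps the boundary refraction strong enough to be useful.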
[0144] Example acrylic based optically clear adhesive tapes are manufactured by Tesa™, such as Tesa™ 69401 and Tesa™ 69402. Example liquid optically clear adhesives are manufactured by Henkel™; a particularly useful adhesive is Loctite™ 5192, which has a relatively low refractive index of about 1.41 (below the typical 1.5), making it particularly well suited for this purpose.
[0145] The above embodiments are to be understood as illustrative examples of the invention. Further embodiments of the invention are envisaged. For example, while the description above has considered a single colour of light, the examples can be applied to systems with multiple colours, such as those in which red, green and blue light is time division multiplexed. In addition, although two viewing positions have been discussed (allowing binocular viewing), other examples may provide more than two viewing positions by increasing the number of degrees of freedom in each display element, such as by increasing the number of sub-elements in each display element. A system with n degrees of freedom, where n is a multiple of 4, can support n/2 viewing positions and hence binocular viewing by n/4 viewers. It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.