DISPLAY DEVICE AND METHOD FOR OPTIMIZING THE IMAGE QUALITY
20210223738 · 2021-07-22
CPC classification
G03H1/2294 (PHYSICS)
G03H2001/303 (PHYSICS)
Abstract
The invention relates to a display device for holographic reconstruction of two-dimensional and/or three-dimensional objects. The objects include a plurality of object points. The display device comprises an illumination unit, a spatial light modulator device and a separator. The illumination device emits sufficiently coherent light. Sub-holograms of object points to be displayed are encoded in pixels of the spatial light modulator device. The separator is provided for separating adjacent point spread functions in an eye of an observer generated by the sub-holograms of adjacent object points such that the adjacent point spread functions are mutually incoherent.
Claims
1. A display device for holographic reconstruction of two-dimensional and/or three-dimensional objects including a plurality of object points, comprising: an illumination unit emitting sufficiently coherent light; a spatial light modulator device, in which sub-holograms of object points to be displayed are encoded in pixels; and a separator for separating adjacent point spread functions in an eye of an observer generated by the sub-holograms of adjacent object points such that the adjacent point spread functions are mutually incoherent to each other.
2. The display device according to claim 1, wherein the object is divided into at least two object planes, where each object plane is divided into at least two vertical subsets and at least two horizontal subsets, which are angularly displaced or shifted relative to each other.
3. The display device according to claim 1, wherein for one-dimensional encoded holograms or for two-dimensional encoded holograms in the spatial light modulator device, the separator is designed as a color filter stripes arrangement, preferably a primary color filter stripes arrangement.
4. The display device according to claim 1, wherein each initial pixel of the spatial light modulator device is subdivided into at least two defined parts representing at least two subsets and generating at least two wave fields.
5. The display device according to claim 4, wherein a triplet of color filter stripes is assigned to each subset.
6. The display device according to claim 3, wherein the color filter stripes arrangement is an absorptive-type dye based filter arrangement or a dielectric filter arrangement, which is structured and assigned to the subsets.
7. The display device according to claim 4, wherein for a two-dimensional hologram to be encoded, the at least two defined parts of the initial pixel form two halves, where the pixel is separated horizontally or vertically.
8. The display device according to claim 1, wherein the separator is designed as an arrangement of patterned retarders.
9. The display device according to claim 8, wherein the arrangement of patterned retarders is provided for transforming light having a defined polarization state into two patterned light subsets.
10. The display device according to claim 8, wherein the arrangement of patterned retarders is provided in a plane of the pixels and assigned to the pixels of the spatial light modulator device, where each defined part of the initial pixel is provided with a defined patterned retarder of the arrangement of patterned retarders.
11. The display device according to claim 10, wherein the at least two defined parts of the initial pixel have different patterned retarders providing orthogonal polarization.
12. The display device according to claim 8, wherein the polarization orientations of adjacent patterned retarders, seen only in the horizontal direction or only in the vertical direction, are orthogonal to each other.
13. The display device according to claim 8, wherein the arrangement of patterned retarders is designed as an arrangement of patterned polarization filters assigned to the at least two defined parts of the initial pixels.
14. The display device according to claim 13, wherein the arrangement of patterned polarization filters provides a striped pattern, which has an alternating orientation of the polarization state transmitted.
15. The display device according to claim 13, wherein the arrangement of patterned polarization filters provides a pattern of orthogonal polarization states, which is a fixed pattern along the vertical direction (y direction) and the horizontal direction (x direction), where along the depth direction (z direction) the pattern is inverted and is used in an alternating way.
16. The display device according to claim 1, further comprising a non-patterned retarder arranged behind the spatial light modulator device, seen in the propagation direction of light, for providing light having a single exit polarization state containing two mutually incoherent wave fields.
17. The display device according to claim 1, wherein in the calculation of the sub-hologram representing the object point a wedge function is used for laterally shifting the object points within a defined angular range.
18. The display device according to claim 17, wherein the wedge function is an arbitrary shaped two-dimensional phase wedge function.
19. The display device according to claim 1, wherein the relative phase of complex values of wavefronts for the individual object points is defined in such a way that the difference between the total intensity distribution in the eye of the observer generated by the point spread functions representing adjacent object points of the object and the target intensity distribution is minimized.
20. The display device according to claim 1, wherein the amplitude of complex values of wavefronts for the individual object points is defined in such a way that the difference between the total intensity distribution in the eye of the observer generated by the point spread functions representing adjacent object points of the object and the target intensity distribution is minimized.
21. The device according to claim 1, wherein an apodization profile is provided in the plane of the pixels of the spatial light modulator device to achieve apodized sub-holograms of the individual object points of an object.
22. The display device according to claim 1, wherein the sub-holograms are modifiable in their shapes.
23. The display device according to claim 1, wherein a fixed predefined grid of object point spread functions provided in the eye of the observer is used.
24. The display device according to claim 1, wherein the illumination unit is adapted in such a way as to emit two orthogonally polarized wave fields, preferably by using a wire grid polarizer structure.
25. The display device according to claim 1, wherein the illumination unit comprises at least one light source, preferably a laser or a laser diode, provided to generate a wave field.
26. The display device according to claim 1, wherein the illumination unit comprises at least one light source per primary color.
27. The display device according to claim 1, wherein the illumination unit comprises a stripe-like light source arrangement.
28. The display device according to claim 1, wherein per primary color at least two mutually incoherent light sources are provided.
29. The display device according to claim 1, wherein the spatial light modulator device is illuminated with an angular spectrum of plane waves of <1/60° along the coherent direction and 0.5° to 1° along the incoherent direction.
30. The display device according to claim 1, wherein the mutual coherence field is limited to a maximum extension, where the maximum extension is the size of the largest sub-hologram in the spatial light modulator device.
31. The display device according to claim 1, wherein the spatial light modulator device is designed as a complex-valued spatial light modulator device, which is able to reconstruct different incoherent object point subsets relating to different primary colors.
32. A method for optimization of the image quality of reconstructed two-dimensional and/or three-dimensional objects, where each object includes a plurality of object points, where for each object point a sub-hologram is calculated which is encoded in pixels of a spatial light modulator device, where reconstructed adjacent object points generate adjacent point spread functions in an eye of an observer, the point spread functions being separated by a separator such that the adjacent point spread functions superpose only incoherently in the eye of the observer.
33. The method according to claim 32, wherein incoherent subsets of wave fields representing the object point to be displayed to the observer are generated and superposed incoherently.
Description
[0076] In the drawing:
[0095] Like reference designations denote like components in the individual figures and the accompanying description, if provided. In the following, the designations “in front of” and “behind”, e.g. in front of the spatial light modulator device, refer to the propagation direction of the light.
[0096] A display device for the holographic reconstruction of two-dimensional and/or three-dimensional scenes or objects comprises a spatial light modulator device 4 and an illumination unit 5. The scene or the object includes a plurality of object points as shown in
[0097] With reference to
[0098] The illumination unit 5 can contain several specific modifications to be used preferably within a holographic display device. The illumination unit can be used for coherent light and for light which only shows reduced spatial and/or temporal coherence. Amplitude apodization and phase apodization can be used to optimize the intensity profile which propagates behind the entrance plane of the illumination unit 5. Color filters give the opportunity to optimize this for different colors separately. The specifications are dependent on the discrete embodiment.
[0099] In the following, the suppression of retinal inter object point crosstalk, which reduces the image quality of the reconstructed scene or object, will be described and explained. This retinal inter object point crosstalk is caused during the holographic reconstruction of the three-dimensional scene or object.
[0100] There is a plurality of parameters to be optimized in the display device in order to obtain a required image quality. One parameter to be considered is the diameter of the entrance pupil of the human eye. For this, a priori knowledge of the point spread function is used, which is close to the real situation that applies to an observer watching a holographic three-dimensional scene. Data obtained by using an eye tracking and eye detecting system, which detects the position of an eye of an observer at a defined position relative to the display device, can be used. The diameter of an entrance pupil of the eye of the observer depends on the luminance of the scene or object the observer is watching. Thus, values might be used that refer to the present luminance of the scene or the object. Furthermore, the pictures provided by the eye tracking and eye detecting system comprising at least one camera for recording the position of the observer and especially for recording the entrance pupil of the eye of the observer can also be used to extract a more exact value of the diameter of the entrance pupil of the eye of the observer.
[0101] In principle, the eye of an observer might have an Airy shaped point spread function which is used to “pick up” the three-dimensional field emanating from an object. If the eye of the observer is focussed on an object point that is placed e.g. at 1 m, the point spread function of the object point placed at said 1 m and imaged on the retina of the eye is smaller than the point spread function of an object point placed e.g. at 0.8 m and smaller than the point spread function of an object point placed at 1.5 m. In other words, the object points the observer is focussing on are transferred to the retina of his eye with the smallest point spread function. However, object points out-of-focus or even only slightly out-of-focus have larger point spread functions than the point spread functions of object points in-focus. Defocusing means widening the point spread function of the corresponding defocussed object plane.
[0102] These “pick up and wave transfer” functions, i.e. the point spread functions of the plane that is focussed on, of the wave fields of all object points of an object have to pass the same entrance pupil of the eye of the observer. Due to the fact that the adjacent object points of the object which the observer is watching are very close to each other, the transfer wave fields emanating from these object points hit the entrance pupil of the eye of the observer at the same location or place and at approximately the same angle. Thus, the phase function of the entrance pupil of the eye which has to be considered is the same. In other words, there is a common path arrangement here. The complex-valued point spread functions of adjacent object points, which are picked up and transferred to the retina, are the same. Otherwise, for object points that are very far apart, slightly different point spread functions have to be considered. For example, for the transfer of object points close to the optical axis of the display device, a narrower point spread function can be used than for object points at the edge of the image, which are transferred with slightly broader point spread functions.
[0103] For minimizing the retinal inter object point crosstalk between adjacent object points of an object the following parameters should be modified: [0104] I) the relative phase emanating from the object point, [0105] II) the relative amplitude emanating from the object point, and [0106] III) the lateral position or distance of the adjacent object points to each other, which can be shifted slightly within the angular range of two adjacent diffraction orders. That is to say, a small phase wedge is used with which object points can be shifted in a range of e.g. ± 1/60 degrees or ± 1/40 degrees. Thus, the arrangement differs slightly from an equidistant dot matrix.
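The lateral shift described under III) can be sketched as a linear phase wedge multiplied onto the sub-hologram. The following Python/NumPy sketch uses illustrative values (wavelength, SLM pixel pitch, sub-hologram size) that are assumptions for the example, not values taken from the specification:

```python
import numpy as np

# Illustrative assumptions, not from the specification:
wavelength = 532e-9          # green laser wavelength (m)
pixel_pitch = 50e-6          # SLM pixel pitch (m)
n = 128                      # sub-hologram width in pixels
shift = np.radians(1 / 120)  # lateral shift, within the +/- 1/60 degree range

x = (np.arange(n) - n / 2) * pixel_pitch
# A linear phase wedge exp(i*2*pi*x*sin(shift)/lambda) applied in the
# hologram plane displaces the reconstructed object point by 'shift'
# in the far field, without changing its amplitude.
wedge = np.exp(1j * 2 * np.pi * x * np.sin(shift) / wavelength)

sub_hologram = np.ones(n, dtype=complex)   # placeholder flat sub-hologram
shifted_sub_hologram = sub_hologram * wedge
```

Because the wedge is phase-only, the encoded amplitude is untouched; only the reconstruction angle of the object point changes.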
[0107] For optimizing the image quality of the reconstructed object or scene, the object or the scene is divided into individual depth planes before carrying out the holographic reconstruction. These values for the relative phase, the relative amplitude and for the lateral position have to be optimized for each single discrete depth plane, e.g. 128 depth planes, for a set of entrance pupil diameters as e.g. 2 mm, 2.2 mm, 2.4 mm, . . . 3.6 mm which are correlated with the luminance presented to the eye and for each primary color RGB (red, green, blue). Thus, a generated data set including optimized values for the relative phase, for the relative amplitude and for the lateral position can be saved in a look-up table (LUT). These generated data sets can be included in the calculation of the sub-holograms to be encoded in the spatial light modulator device.
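The look-up table described above can be sketched as a dictionary keyed by depth plane, entrance pupil diameter and primary color. `optimize_parameters` is a hypothetical placeholder for the per-plane optimization; the sketch only illustrates the data organization:

```python
import itertools

depth_planes = range(128)                                        # 128 discrete depth planes
pupil_diameters_mm = [round(2.0 + 0.2 * i, 1) for i in range(9)]  # 2.0 ... 3.6 mm
primary_colors = ("red", "green", "blue")                        # primary colors RGB

def optimize_parameters(plane, pupil_mm, color):
    # Hypothetical placeholder: a real implementation would minimize the
    # deviation of the retinal intensity distribution as described above.
    return {"rel_phase": 0.0, "rel_amplitude": 1.0, "lateral_shift": 0.0}

# One optimized entry per (depth plane, pupil diameter, color) combination.
lut = {
    key: optimize_parameters(*key)
    for key in itertools.product(depth_planes, pupil_diameters_mm, primary_colors)
}
```

At hologram calculation time the entry matching the current depth plane, measured pupil diameter and color would simply be looked up instead of recomputed.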
[0108] A first approach for determining an assumable entrance pupil diameter of an eye of an observer might use the average luminance to choose a diameter that is at least within the right range, e.g. for television 50-250 cd/m.sup.2, for a desktop monitor 100-300 cd/m.sup.2. The luminance can be calculated from the image content. A second approach might use the data of an eye tracking system to measure the entrance pupil diameter and to choose the right data subset of the look-up table.
[0109] During the calculation of a sub-hologram corresponding to an object point, as one possibility for optimizing the parameters above, the average luminance can be used to choose the entrance pupil diameter of the eye, which might be substantially within a required range, e.g. between 25 cd/m.sup.2 and 1000 cd/m.sup.2. Another possibility can be to use the obtained data of an eye tracking and detecting system. With these data the entrance pupil diameter can be measured and the required data subset of the look-up table can be chosen. In other words, an image recorded by a camera of the eye tracking and detecting system in connection with the distance measurement can be used to determine the diameter of the pupil.
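For the luminance-based approach, an empirical pupil model can serve as a stand-in; the Moon and Spencer (1944) formula is used below purely as an illustration — the specification does not prescribe any particular model:

```python
import math

def pupil_diameter_mm(luminance_cd_m2):
    """Empirical Moon & Spencer (1944) pupil diameter model.

    Illustrative stand-in for mapping scene luminance to an assumed
    entrance pupil diameter; not part of the specification.
    """
    return 4.9 - 3.0 * math.tanh(0.4 * math.log10(luminance_cd_m2))
```

For a typical monitor luminance of 100 cd/m.sup.2 this yields roughly 2.9 mm, i.e. within the 2.0 mm to 3.6 mm set of diameters mentioned in paragraph [0107].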
[0110] A further possibility might be to use the distance of the entrance pupils of the eyes of an observer to define the rotation angle of the two optical axes of the eyes. In this way the point of intersection of the two optical axes which is in the focal distance of the eyes can be determined. For this an individual calibration for each observer might be required. This can be done by implementing a calibration routine which is processed by each observer once.
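The intersection of the two optical axes can be sketched with a symmetric convergence geometry, which is a simplifying assumption; the interpupillary distance value is illustrative:

```python
import math

def focus_distance(ipd_m, half_convergence_deg):
    """Distance to the intersection of the two optical axes, assuming
    both eyes rotate symmetrically toward the fixation point."""
    return (ipd_m / 2.0) / math.tan(math.radians(half_convergence_deg))

def half_convergence_deg(ipd_m, distance_m):
    """Inverse relation: half convergence angle for a given focus distance."""
    return math.degrees(math.atan((ipd_m / 2.0) / distance_m))
```

With an assumed interpupillary distance of 65 mm, fixating at 1 m corresponds to a half convergence angle of roughly 1.86 degrees; the per-observer calibration mentioned above would refine such nominal values.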
[0111] However, only a limited set of parameters can be modified or adapted or altered.
[0112] An example is the plurality of object points which might be real and thus in front of a display device. The eye of an observer might be focussed on this plane (or these planes) of object points. The point spread function of the eye of the observer picks up these object points and transfers them to the retina of the eye of the observer.
[0113] There are several options to proceed, where the options can be combined if necessary or required or suitable:
[0114] 1)
[0115] A single object point can be shifted virtually in its depth plane in such a way that the difference of the “should be/target intensity distribution on the retina of the eye of the observer I(X,Y)_retina” and the “is/total intensity distribution on the retina of the eye of the observer I(X,Y)_retina” is minimized, where I is the intensity distribution in the plane of the retina of an eye and x and y are the coordinates within the retina of the eye, referring to values of an x-axis and a y-axis. This can be done by introducing small offset phase functions in the calculation of the sub-holograms to be encoded into the spatial light modulator device, in the following also referred to as SLM. Shifts of object points within an angular range of a one-dimensional or two-dimensional viewing window provided in the observer plane are irrelevant for the present invention.
[0116] 2)
[0117] The relative phase or more precisely the mutual phase difference of the individual object points can be chosen in such a way that the difference of the “should be/target intensity distribution on the retina of the eye of the observer I(X,Y)_retina” and the “is/total intensity distribution on the retina of the eye I(X,Y)_retina” is minimized. For this, the eye of an observer is included in the calculation process. The generation of the image is calculated on the retina. Thus, the retina is the reference plane. The starting point is a scene to be encoded. An iterative optimization of the image on the retina can be carried out. In a first step all sub-holograms can be added and propagated to the retina. Then, the deviation of the total intensity distribution on the retina to the target intensity distribution on the retina can be determined. The phase, the amplitude and the position can be changed. The deviation can be redetermined. This can be carried out by using an iterative loop. A threshold of deviation can be chosen as termination condition, e.g. if the deviation is smaller than 5%. It is also possible to limit the number of iterations.
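The iterative loop described above can be sketched in one dimension with toy sinc-shaped point spread functions; the coordinate-wise phase search below is one possible optimization strategy for the relative phases, not the one prescribed by the specification:

```python
import numpy as np

x = np.linspace(-5, 5, 512)            # retinal coordinate (arbitrary units)
centers = [-1.0, 0.0, 1.0]             # three adjacent object points

def psf(c):
    return np.sinc(x - c)              # toy stand-in for the eye's PSF

# Target: incoherent sum of the individual object point intensities.
target = sum(np.abs(psf(c)) ** 2 for c in centers)

def deviation(phases):
    # "is" intensity: coherent superposition of the phased PSFs.
    field = sum(np.exp(1j * p) * psf(c) for p, c in zip(phases, centers))
    total = np.abs(field) ** 2
    return np.sum(np.abs(total - target)) / np.sum(target)

phases = np.zeros(len(centers))
initial_dev = deviation(phases)
for it in range(60):                   # iterative loop
    if deviation(phases) < 0.05:       # 5 % deviation as termination condition
        break
    k = it % len(centers)              # vary one relative phase at a time
    candidates = phases[k] + np.linspace(-np.pi, np.pi, 33)
    errors = [deviation(np.where(np.arange(len(centers)) == k, c, phases))
              for c in candidates]
    phases[k] = candidates[int(np.argmin(errors))]

final_dev = deviation(phases)
```

The candidate set includes the current phase, so each step can only keep or reduce the deviation; a full implementation would also vary the amplitudes and lateral positions, as described above.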
[0118] 3)
[0119] The intensity or the amplitude of the individual object points can be chosen in such a way that the difference of the “should be/target intensity distribution on the retina of the eye of the observer I(X,Y)_retina” and the “is/total intensity distribution on the retina of the eye I(X,Y)_retina” is minimized. For this, the eye of an observer is included in the calculation process. The generation of the image is calculated on the retina. Thus, the retina is the reference plane. The starting point is a scene to be encoded. An iterative optimization of the image on the retina can be carried out. In a first step all sub-holograms can be added and propagated to the retina. Then, the deviation of the total intensity distribution on the retina to the target intensity distribution on the retina can be determined. The phase, the amplitude and the position can be changed. The deviation can be redetermined. This can be carried out by using an iterative loop. A threshold of deviation can be chosen as termination condition, e.g. if the deviation is smaller than 5%. It is also possible to limit the number of iterations.
[0120] 4)
[0121] For reasonably large object points, which may be e.g. as large as 50% of the point spread functions which pick up the object points and transfer them to the retina of the eye of the observer, the object point can be modified in such a way that the difference of the “should be/target intensity distribution on the retina of the eye of the observer I(X,Y)_retina” and the “is/total intensity distribution on the retina of the eye I(X,Y)_retina” is minimized. This can be done e.g. by using apodized sub-holograms representing the object points which are provided within the plane that is picked up by the point spread function of the eye. All object points the observer is watching are generated by the SLM. Thus, the complex-valued distribution present in the sub-holograms of the SLM can be used in order to generate point spread functions with reduced side lobes. This can be carried out by using apodized sub-holograms, which are capable of generating point spread functions at the retina of the eye of the observer. The point spread functions should not be Airy distributions but e.g. Gaussian distributions that do not have any side lobes.
[0122] Side lobes in the intensity distributions generated by the object points can be suppressed or even formed in a way to minimize the difference of the “should be/target intensity distribution on the retina of the eye of the observer I(X,Y)_retina” and the “is/total intensity distribution on the retina of the eye I(X,Y)_retina”. Side lobes can also be increased to do so. Side lobe shape variation is used as a further parameter variation, which can reduce the difference of the total intensity distribution to the target intensity distribution on the retina of the eye of the observer I(X,Y)_retina.
[0123] Such a procedure may work more efficiently for reasonably large object points of the object or scene. The changes in the difference of the “should be/target intensity distribution on the retina of the eye of the observer I(X,Y)_retina” and the “is/total intensity distribution on the retina of the eye I(X,Y)_retina” may not be very efficient if very small object points and thus large sub-holograms are used.
[0124] The sub-hologram apodization can also comprise an amplitude part a(x,y)_SLM (amplitude SLM) and a phase part phase(x,y)_SLM (phase SLM), which together result in a c(x,y)_SLM (complex-valued SLM). Thus, the apodization used within the SLM plane can be complex-valued.
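The side lobe suppression achieved by amplitude apodization can be illustrated numerically: the far field of a Gaussian-apodized aperture has much lower side lobes than that of an un-apodized (rect) aperture. The Gaussian width below is an arbitrary illustrative choice:

```python
import numpy as np

n = 256
x = np.linspace(-1, 1, n)

rect = np.ones(n)                     # un-apodized sub-hologram aperture
gauss = np.exp(-(x / 0.45) ** 2)      # assumed Gaussian amplitude apodization

def normalized_farfield(aperture):
    # Zero-padded FFT as a stand-in for far-field propagation to the retina.
    f = np.abs(np.fft.fftshift(np.fft.fft(aperture, 8192)))
    return f / f.max()

def first_sidelobe(f):
    i = int(np.argmax(f))                          # main lobe peak
    while i + 1 < len(f) and f[i + 1] < f[i]:      # walk down to the first minimum
        i += 1
    return float(f[i:].max())                      # highest lobe beyond the main lobe

rect_sidelobe = first_sidelobe(normalized_farfield(rect))
apod_sidelobe = first_sidelobe(normalized_farfield(gauss))
```

The rect aperture shows the familiar sinc side lobe at roughly 22% of the peak amplitude, while the Gaussian apodization pushes the residual (truncation-induced) side lobes well below that.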
[0125] 5)
[0126] For a two-dimensional (2D) encoding it is possible to shape the object points by using a modified shape of the sub-holograms. The adapted shape of the sub-holograms relates to the complex-valued SLM c(x,y)_SLM, which may otherwise use only a fixed round or quadratic/square shape. For example, hexagonal sub-holograms or sub-holograms that are slightly changed in the aspect ratio can also be used. In general, the complex-valued distribution can be varied. The parameters used may depend on the content of the three-dimensional scene. This means that the complex-valued distribution of the apodization of the sub-holograms may be changed in accordance with the change of the content. In other words, the distribution of phase and amplitude of the individual sub-holograms can be varied.
[0127] 6)
[0128] If it is not possible to realize an overall optimization of the reconstructed object or scene, which includes e.g. all z-planes, where z is the longitudinal distance parallel to the optical axis of the display device, then vergence (gaze) tracking can be used to define the depth plane of interest. For this, it is determined what the observer looks at or gazes at. The eye tracking and detecting system can determine this gaze so that the viewing direction of the observer can be defined. Thus, the results for the encoding of the sub-holograms into the SLM can be optimized in regard to the z-plane or the range of z-planes the observer is watching.
[0129] The options explained under 1) to 6) can be combined with each other to achieve a good or required high quality.
[0130] Although these options mentioned before can be combined, the most direct or most practical way is to use a fixed grid of point spread functions PSF.sub.ij and to optimize the side lobes, the relative phase difference and the intensity of the point spread functions PSF.sub.ij in order to get a reconstructed retinal image that is reasonably close to the designed retinal image of the three-dimensional object or scene. The suffixes ij regarding the point spread function PSF.sub.ij are indices indicating points of a two-dimensional grid, preferably a virtual grid, placed at the two-dimensional, spherically curved detector plane or surface of the retina.
[0131] In the following the present invention is described for one-dimensional (1D) encoded holograms in an SLM:
[0132] In general, the options 1) to 6) described above can be used in addition to the following options for one-dimensional encoded holograms. Thus, the side lobe suppression, the retinal inter object point crosstalk reduction and the optimization in regard to the image quality can be further enhanced. The following explanations refer to one dimension only. The optimization of the retinal image in only one dimension, which means analysing and optimizing the nearest neighbours of the point spread function PSF.sub.ij in only one dimension, can be realized faster than optimizing neighbouring point spread functions PSF.sub.ij in two dimensions. For this reason, an e.g. iterative optimization or analytic optimization can be carried out in real time. This is fast and efficient enough for active user interaction, as in gaming, too.
[0133] Using the limited angular resolution of the human eye, i.e. of an eye of an observer, is one option that can be used for one-dimensional encoded holograms in an SLM. For that, several one-dimensional encoded lines of object points, which are incoherent to each other and which are seen as one encoded line, are provided. Thus, the pixel density in the incoherent direction on the SLM is increased. Each one-dimensional encoded line generates e.g. one third of the object points which are presented to the observer at 1/60 degrees. A pixel density of e.g. up to 180 pixels per degree or less is used within the incoherent direction to reduce the crosstalk between adjacent object points which may be seen by the observer.
[0134] By way of example, the angular resolution of the human eye, which is 1/60 degrees in best case conditions, is equivalent to a lateral separation of object points that can just be resolved. At an average viewing distance of 3.5 m to the display device, which may be assumed generally for a television (TV), 1/60 degrees is equivalent to a lateral separation of 1.02 mm between two object points. Although the real resolution is significantly less, a periodic interval of for instance 1.2 mm may be used as resolution limit for television applications. Real resolution means in this context that the luminance is not provided for the best case situation or that individual aberrations of the observer's eye may reduce the effective resolution obtained. This value of 1.2 mm was chosen here just to make the example as simple as possible. If a vertical holographic encoding is used, which means vertical parallax only (VPO), the sub-holograms are arranged as vertical stripes on the SLM.
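The 1.02 mm figure follows directly from the geometry:

```python
import math

viewing_distance = 3.5                 # m, assumed average TV viewing distance
eye_resolution = math.radians(1 / 60)  # best-case angular resolution, 1 arcmin

# Lateral separation of two object points that the eye can just resolve:
lateral_separation = viewing_distance * math.tan(eye_resolution)
# lateral_separation is about 1.02 mm, matching the value used in the text;
# the text then rounds the practical resolution limit up to 1.2 mm.
```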
[0135] Color filters can be used to reduce the frame rate mandatory for the SLM providing the complex-modulated wave field. As generally known, absorptive-type dye based filter arrays can be used for that, which are structured and aligned to the SLM pixels. Modern coating technology makes it possible to apply notch filters e.g. in a striped arrangement too. This means that a color stripe can reflect two of the primary colors RGB while transmitting the remaining primary color. This can be done with a transmission coefficient greater than 0.9, while reflecting the two non-required wavelengths of this specific stripe with a coefficient close to 1.
[0136] For example, it can be assumed to provide three color filter stripes within a horizontal width of 1.2 mm, which is reasonable close to the best case resolution limit of the human eye ( 1/60 degrees) at 3.5 m viewing distance as explained above.
[0137] In the prior art it is known to use three color filter stripes within this width of 1.2 mm; the red, the green and the blue color filter stripe hence have a width of 400 μm each.
[0138] According to
[0139] A condition for holographic display devices, which use diffractive components with e.g. a 40 degrees overall accumulated diffraction angle, is a line width of <0.1 nm of a light source of an illumination unit. Furthermore, anti-reflection coatings used, which, for example, can be applied to transparent surfaces of a backlight of the illumination unit, at grazing incidence of light and spectral selectivity of Bragg diffraction-based volume gratings used in the display device provide a stability of the center wave length of 0.1 nm of the light source. This can be achieved e.g. with diode pumped solid state (DPSS) lasers as light sources, which are e.g. available at 447 nm, 457 nm, 532 nm, 638 nm and 650 nm at an optical power of >500 mW each. Furthermore, light sources as distributed feedback (DFB) laser diodes, which have a Bragg resonator grating within the active medium or reasonable close to that medium, or wavelength stabilized laser diodes, which make use of external Bragg resonators, can also fulfill these requirements.
[0140] If the switching time of the light source, e.g. laser diodes, has to be reduced, e.g. to 1 ms, for any reasons, additional mechanical shutter or temporal synchronized color filter wheels, which are known from projectors, may be used in the illumination unit. Distributed feedback laser diodes show reasonable fast switching and can be made with different design wavelengths. Furthermore, so called Q-switched laser arrangements can be used in combination with wavelength stabilizing Bragg resonator approaches. This shows that practically available laser light sources can be used for the display device according to the invention.
[0141] At a 3.5 m distance from a viewing window in an observer plane to the display device, a vertical viewing window of 8 mm would require a pixel size of 195.6 μm on the SLM. This means an approximate pixel size of 200 μm. Thus, the vertical pixel pitch is larger than the horizontal pixel pitch.
[0142] If an average viewing distance of only 1.5 m instead of 3.5 m from a viewing window in an observer plane to the display device were used, the given pixel dimensions would have to be divided by a factor of 2.3. This could be required in some cases. For holographic 1D encoded 3D television applications a 3.5 m distance is, however, more reasonable.
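The 195.6 μm pixel pitch and the factor of 2.3 can be reproduced with the standard viewing window relation w = λ·D/p, where one diffraction order of the SLM spans the viewing window. The sketch assumes the pitch is set by the blue 447 nm DPSS line quoted in paragraph [0139] — an assumption, since the specification does not state which wavelength the figure refers to:

```python
wavelength = 447e-9    # m, blue DPSS line from [0139] (assumed design wavelength)
distance = 3.5         # m, viewing distance D
window = 8e-3          # m, vertical viewing window w

# One diffraction order spans the viewing window: w = lambda * D / p
pitch = wavelength * distance / window   # about 195.6e-6 m

# Halving-ish the viewing distance scales the pitch down by the same ratio:
scaling = 3.5 / 1.5                      # about 2.3, as quoted for 1.5 m
```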
[0144] Of course, it is also possible to arrange the color filter arrangement in the horizontal direction if the encoding direction lies in the horizontal direction.
[0145] A schematic representation of object points reconstructed by the part of the SLM shown in
[0151] The procedure according to the color red is applied to the other primary colors green and blue accordingly.
[0152] As a consequence, seven white object points are reconstructed by using the three primary colors RGB with three laterally displaced color filter stripes allocated to each primary color RGB. Within the horizontal angular range of 1/60 degrees, the vertical color filter stripes denoted by r1, g1, b1, r2, g2, b2, r3, g3 and b3 are provided, as can be seen in
[0153] It is important to prepare the mutual coherence of the columns of the SLM used for the 1D encoding in such a way that adjacent columns are mutually incoherent. This can be done by using a stripe-like light source arrangement in the illumination unit.
[0154] As can be seen from
[0155] As can be seen further from
[0156] In general, no superposition of the individual circles means that sufficient separation of the adjacent point spread functions on the retina of the eye of an observer is provided. However, there might be a small portion of light which is still superposed between two adjacent coherently reconstructed object points. But this has no significant effect on the quality of the reconstructed scene or objects. In addition, these small residual errors of the target intensity distribution to be obtained on the retina of the eye of the observer can be considered and used in an optimization algorithm of the optimization process, which approximates the detected retinal image to the target retinal image, i.e. an image without recognizable retinal inter-object point crosstalk. The algorithm relies on a target/actual comparison and an iterative variation of parameters. Further optimization of the retinal image for avoiding retinal inter-object point crosstalk can be provided by applying, e.g., individual or all of the options described and explained above under items 1) to 6).
[0157] The described SLM comprising a separator designed as a color filter stripes arrangement is illuminated by the illumination unit having at least one light source emitting an angular spectrum of plane waves of e.g. 0.5 degrees to 1 degree in the horizontal direction. Such an angular spectrum of plane waves is sufficient to span a horizontal sweet spot in an observer plane if the coherent direction is the vertical direction, and vice versa. The angular spectrum of plane waves is preferably significantly smaller than 1/60 degrees, e.g. only 1/120 degrees, along the vertical direction, which is the direction of the encoding of the sub-hologram of the one-dimensional (1D) encoded holographic display device for the reconstruction of three-dimensional scenes or objects.
[0158] An encoding unit or computation unit provided in the display device splits the content, preferably the high definition (HD) content, of the object point into the subsets according to
[0159] The embodiment schematically shown in
[0164] By providing a color filter stripes arrangement as a separator on the SLM, the mutual coherence between adjacent color filter stripes of the color filter stripes arrangement can be eliminated. For this, a spatially extended light source can be used in the illumination unit. The aspect ratio of the light source to be collimated can be e.g. 1:60. In this manner, there is no coherence in the horizontal direction (the non-encoding direction). Thus, coherent superposition of adjacent color filter stripes, and the degradation of the image quality caused in this way, can be prevented.
[0165] According to the invention, the additional vertical separation introduced by using additional color filter stripes in addition to one set of color filter stripes (comprising only one red stripe, one green stripe and one blue stripe), and thus the higher pixel count, eliminates the mutual coherence between object points which are neighbors along the vertical direction. This effects an additional reduction of the mutual coherence and thus a further reduction of the retinal inter-object point crosstalk.
[0166] However, the coherence of inner axial object points still exists. The expression “coherence of inner axial object points” refers to the coherence of object points sharing a common overlap region of their sub-holograms, which are encoded as one-dimensional (1D) lens line segments. This means that all other object point crosstalk no longer has to be dealt with, except for the crosstalk generated by object points referring to a single color filter which are positioned behind each other, i.e. along the z-direction parallel to the optical axis of the display device, or adjacent to each other, i.e. in a plane perpendicular to the z-axis, in an out-of-focus situation. This means the situation in which the observer is looking at a different plane and the plane considered here is not in focus.
[0167] The optimization described above has to be applied to a reduced number of defined object points only. This means that, for the color filter stripes arrangement and for a one-dimensional encoding of holograms, the optimization is only carried out in one dimension and, for example, only for 3 to 4 neighboring object points.
[0169] In case of
[0170] The following explanations refer to the illumination unit comprising at least one light source which can be used for a one-dimensional encoding of holograms. The coherence of the light emitted by the at least one light source has to be as low as possible but as high as required for a holographic encoding. A tracking angle to be introduced for tracking a viewing window in an observer plane according to a movement of an observer, and additional diffractive optical elements provided in the display device, introduce an optical path difference within a region based on the extension of a sub-hologram. Therefore, the line width of the light source, designed e.g. as a laser light source, has to be ≤0.1 nm. In addition to the optical path difference introduced, an increased line width would also introduce a smearing in the reconstruction. The smearing may be due to the diffractive dispersion introduced by the diffractive optical elements used in the display device. All of these effects sum up.
[0171] The line width of the light source of the illumination unit, which has to be ≤0.1 nm, is only one aspect of the coherence. Another aspect is the extension of the spatial coherence or, more explicitly, the absolute value of the mutual coherence. The mutual coherence between adjacent color filter stripes can be eliminated as disclosed above, while sufficient coherence of the light, e.g. >0.8, can be provided along the direction of the color filter stripes, i.e. along the encoding direction. Additionally, the mutual coherence region, which is tailored to be a one-dimensional line-like segment oriented in parallel to the color filter stripe(s), is limited to a maximum extension according to the size of the largest sub-hologram.
[0172] For specifying the maximum optical path difference, and thus the line width of the light source used or the maximum extent of the mutual coherence, it is not the entire size of the viewing window and its projection onto the SLM, which can be used to define the size of the sub-hologram, that has to be considered. It is better to consider only the entrance pupil of the eye of the observer. The entrance pupil of the eye can be used to specify the maximum optical path difference, and thus the line width of the light source used or the maximum extent of the mutual coherence, in order to obtain the required coherence parameters with the lowest coherence properties.
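The ≤0.1 nm line width can be related to a usable optical path difference via the standard relation L.sub.c ≈ λ²/Δλ for the coherence length. This relation is not stated in the text above and is used here only as an illustrative assumption; the optical path differences introduced by the tracking angle and the diffractive optical elements have to stay below this length:

```python
def coherence_length(wavelength_m: float, linewidth_m: float) -> float:
    """Approximate coherence length L_c = lambda**2 / delta_lambda."""
    return wavelength_m ** 2 / linewidth_m

# Green DPSS line at 532 nm with a 0.1 nm line width:
L_c = coherence_length(532e-9, 0.1e-9)  # ~2.8 mm tolerable optical path difference
```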
[0173] The reduction of the coherence of the light used is a basic requirement to provide high image contrast and the intended retinal image without disturbing effects. In other words, it is important to reduce the coherence of the light in such a way that only the reasonably high coherence that is actually required is provided, in order to prevent unintentional coherent crosstalk. Further, the complex-valued point spread functions of the entire system, which includes the illumination unit, the SLM and the retina of the eye of an observer, i.e. the complete display device in connection with the eye of the observer, have to be optimized too.
[0174] In the following, the present invention is described for two-dimensional (2D) encoded holograms in an SLM, detailing procedures for the reduction of the retinal inter-object point crosstalk:
[0175] The relation to a two-dimensional (2D) encoding of holograms has several aspects. The general requirements of optimizing the point spread functions in relation to the final design intensity distribution, or to the target intensity distribution of the perfect image detected by the retina of the eye of an observer, have already been described and explained above for the one-dimensional encoded holograms.
[0176] The generation of independent and mutually incoherent subsets of the wave fields representing the three-dimensional (3D) object, which has already been described for one-dimensional (1D) encoded holograms, can also be applied to two-dimensional (2D) encoded holograms. In other words, a separator designed as a color filter arrangement can also be applied to two-dimensional encoded holograms. The color filter arrangement has to be adapted to the SLM used, in which the holograms are encoded in two coherent directions. For example, a Bayer color filter array or Bayer pattern can be used as the color filter arrangement.
[0177] For reducing crosstalk between adjacent point spread functions on the retina of the eye of an observer, a standard pixel aperture of a pixel of the SLM can be used, which is e.g. 33 μm×33 μm for a two-dimensional encoded three-dimensional holographic display device used at a viewing distance of 600 mm. For the sake of simplicity, a rectangular shaped pixel aperture of a pixel can be assumed. Furthermore, apodization profiles can be applied, e.g. a Gauss-type amplitude apodization or a so-called Kaiser-Bessel window.
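The named apodization profiles can be sketched numerically as follows; the sample count and the Kaiser window parameter beta are illustrative assumptions, not values given in the text:

```python
import numpy as np

N = 33                                  # samples across the 33 um square aperture (assumed 1 um sampling)
x = np.linspace(-1.0, 1.0, N)           # normalized aperture coordinate
gauss = np.exp(-(x / 0.5) ** 2)         # Gauss-type amplitude apodization (1/e at half aperture)
kaiser = np.kaiser(N, 6.0)              # Kaiser-Bessel window; beta=6 trades main-lobe width vs. sidelobes
aperture_2d = np.outer(kaiser, kaiser)  # separable 2D apodization of the square pixel aperture
```

Both windows taper the pixel aperture amplitude towards its rim, which suppresses the sidelobes of the retinal point spread function.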
[0178] By way of example, an SLM having rectangular shaped pixel apertures is assumed. This is illustrated in
[0180] As disclosed above, an SLM having rectangular shaped pixel apertures is assumed. However, the pixels are now, for example, non-quadratic and have a width-to-height ratio of 1 to 2. Such an SLM is shown in
[0182] By way of example, assuming a wavelength of λ=450 nm for the blue primary color, a focal length of the volume grating based field lens used within a desktop-type holographic three-dimensional (3D) display device of f=600 mm and a pixel pitch of 33 μm, a viewing window in an observer plane formed by the blue light then has an extension of approximately 8 mm times 8 mm. The 3rd diffraction order is provided at approximately 24 mm from the zero diffraction order spot. For a wavelength of λ=650 nm, assumed for the red primary color, the 3rd diffraction order is provided at approximately 35 mm from the zero diffraction order spot. This means that, for an average distance of the two eyes of an observer of 65 mm, a distance of 35 mm is sufficient.
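The distances given above follow from the small-angle position of the m-th diffraction order in the focal plane of the field lens, x.sub.m = m·λ·f/p. The short sketch below reproduces the stated numbers; it is an illustration, not a definitive design:

```python
def order_offset(wavelength_m: float, focal_m: float, pitch_m: float, order: int) -> float:
    """Lateral offset of the m-th diffraction order in the observer plane (small-angle approximation)."""
    return order * wavelength_m * focal_m / pitch_m

vw_blue = order_offset(450e-9, 0.6, 33e-6, 1)  # ~8.2 mm viewing-window extent (blue)
d3_blue = order_offset(450e-9, 0.6, 33e-6, 3)  # ~24.5 mm, i.e. approximately 24 mm
d3_red  = order_offset(650e-9, 0.6, 33e-6, 3)  # ~35.5 mm, i.e. approximately 35 mm
```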
[0184] The right subset of the initial SLM shown in
[0185] Different types of subsets of the SLM can be used in order to generate incoherent subsets of wave fields representing the three-dimensional (3D) holographic object to be displayed to the observer. For generating incoherent subsets of wave fields a separator can be used. As separator, a color filter stripes arrangement providing spatially separated colors, an arrangement of patterned retarders providing spatially separated orthogonal polarization states, or a light source arrangement in the illumination unit providing a spatially separated allocation of the wave field illuminating the SLM can be used.
[0186] The physical 50% addressing of the SLM is used. For the sake of simplicity and for a simple explanation of the present invention, only simple embodiments of the present invention are considered. Simple embodiments means using only the simple subsets of an SLM, i.e. e.g. the two simple subsets of the SLM shown in
[0187] If the fill factor FF is much smaller than shown in
[0188] The following describes an embodiment of an SLM provided with a separator which is designed as an arrangement of patterned retarders. An arrangement of patterned retarders is used for transforming light incident on the SLM having an initial polarization state, which might be e.g. a linear polarization state, into two patterned subsets of the light. The two patterned subsets of the light have orthogonal polarization states. For example, the primary, e.g. quadratic/square shaped, pixel aperture, as can be seen e.g.
[0189] The following section describes the cases in which one, two or several light sources per color are provided. If classic optics or, in general, non-polarization-selective optics were used to form the plane of the viewing window, then the described embodiments for generating two spatially interlaced subsets of the wave field representing the three-dimensional object to be presented to the observer can be used. Adjacent object points imaged on the retina of an eye of an observer show orthogonal polarization states and thus interfere in the same way as mutually incoherent points or, in more detail, as mutually incoherent retinal point spread functions. In other words, along one direction there is no coherence. Thus, there is no coherent retinal inter-object point crosstalk along this direction between adjacent object points, i.e. adjacent point spread functions on the retina of the eye of the observer.
[0190] However, if the optical elements following the SLM within the beam path are polarization selective or require only a single polarization state, a different way has to be used in order to implement two mutually incoherent wave fields. In this case a common exit polarization state has to be used. This means that no mutual incoherence would exist if a single primary light source is used.
[0191] Per primary color at least two mutually incoherent light sources should be used, which illuminate the SLM. The SLM comprises e.g. a separator designed as an arrangement of patterned retarders. The arrangement of patterned retarders is assigned to the pixels of the SLM. Preferably, the arrangement of patterned retarders is designed as an arrangement of patterned polarization filters assigned to the at least two defined parts of the pixels, especially to the two subsets of the pixel apertures of the SLM.
[0192] For example, a wedge-type illumination unit can be used, which is optimized to accept two orthogonally polarized wave fields. One wave field comes from a first light source of the illumination unit. This light can be e.g. TE (transverse electric) polarized. Another wave field comes from a second light source of the illumination unit. This light can be e.g. TM (transverse magnetic) polarized. Finally, the SLM is illuminated with both wave fields.
[0194] For other applications, e.g. three or more mutually incoherent exit beams out of an SLM can be generated. These exit beams are linearly polarized.
[0195] In
[0197] A nested arrangement of two subsets of a pixel of an SLM is shown in
[0198] The adding of a further single polarizing filter behind the SLM, seen in the propagation direction of the light, provides a single light exit polarization state which contains two mutually incoherent wave fields, both carrying a part of the three-dimensional object scene.
[0199] This wave field can now propagate through all optical elements of the display device, which follow within the beam path regardless of the polarization selectivity of these elements. For example, a polarization-type LC grating following the SLM in the beam path has to be illuminated with circular polarized light; a retarder has then to be used for providing the required polarization state of the wave field illuminating it.
[0200] Also, for two-dimensional (2D) encoding of holograms an arrangement of color filter stripes can be used in the SLM plane. This might be more complex, since an initial pixel aperture of the pixel of the SLM, which can be e.g. 33 μm times 33 μm for a holographic three-dimensional desktop display device, has to be divided into at least three sub-pixels, three subsets or, generally, three defined parts of the pixel.
[0202] A sub-pixel or a subset of the pixel comprising a color filter segment of the arrangement of color filter stripes relating to one of the primary colors RGB has an extension of e.g. 16 μm times 16 μm only. It is probably expensive to realize pixels as small as this. However, it could be possible in a few years without high technical effort. In addition, a small critical dimension is required within the manufacturing of the pixels in order to keep the fill factor as high as possible. Thus, e.g. a critical dimension of 3 μm might be required in order to realize color filters within a two-dimensional encoded complex-valued SLM.
[0203] Furthermore, an arrangement of two-dimensional color filter stripes might be combined advantageously with an arrangement of patterned retarders designed e.g. as orthogonal polarization filters. However, this could reduce the practical critical dimension in the manufacturing of the SLM e.g. down to only 2 μm. The initial pixel size of e.g. 33 μm×33 μm has to be divided e.g. into 6 defined parts or subsets of the pixel or sub-pixels, i.e. three colors in relation to the color filter stripes and two additional patterned polarization filters. Each polarization filter is assigned to a triplet of color filter stripes. Thus, each primary color RGB is represented by two small subsets of the pixel. The two subsets of the pixel emit orthogonally polarized light.
[0204] For example, each pixel aperture shown in
[0205] In addition to rectangular arrangements of the apertures of the pixels of an SLM also e.g. hexagonal arrangements of the apertures of the pixels may be used. These arrangements can also be provided with an arrangement of patterned retarders, preferably patterned polarization filters, and/or an arrangement of patterned color filter stripes.
[0206] A probably more practical realization of two orthogonal polarizations of the light emitted by the SLM could in general be to encode a wedge function into the sub-hologram of the SLM. In this manner, object points within the angular range spanned by the viewing window can be shifted laterally. For a two-dimensional encoding of a hologram this can be done along the vertical direction as well as along the horizontal direction. In other words, a left separation and a right separation of a quadratic/square area of a pixel, as can be seen e.g. in
[0207] Recapitulating, according to the present invention there are, for example, two or even more mutually incoherent subsets of wave fields generated by an SLM of the display device. In the case of a one-dimensional encoding, an arrangement of color filter stripes, an arrangement of patterned retarders, particularly an arrangement of polarization filters having orthogonal polarizations, or combinations thereof can be used in order to provide mutually incoherent subsets of the wave field partially representing a three-dimensional object or scene. As in the case of two-dimensional encoding, in one-dimensional encoding it is also possible to illuminate an SLM with light that has two orthogonal states of polarization and that is emitted from different light sources in the illumination unit. This light can illuminate a striped pattern of a polarization filter which has an alternating orientation of the transmitted polarization state. Also, as in the case of two-dimensional encoding, the polarization filter is followed by an additional non-patterned retarder, particularly a polarization filter, which transmits a single polarization state only. Light may get lost here, but two mutually incoherent encoded wave fields are now obtained, which illuminate the optical elements following the SLM in the beam path of the display device. An additional phase wedge can also be encoded in a sub-hologram along one direction. In contrast to the case of one-dimensional encoding, the two-dimensional encoding offers the realization of arbitrarily shaped two-dimensional phase wedge functions encoded in the sub-holograms of the SLM. Only one subset of the potential two-dimensional wedge distributions is needed for that.
[0208] An advantageous polarization encoding pattern of adjacent object points is given by a checkerboard-like distribution, which is applied to the reconstructed object points. Furthermore, a honeycomb-like distribution, which also provides two orthogonal polarizations, may be used. This is provided in the plane of the object points, or in the plane of the retina of an eye of an observer in case the observer focuses on the object point. Furthermore, it is also possible to use other, e.g. random, distributions of the mutually incoherent pattern.
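A checkerboard-like assignment of the two orthogonal polarization states to reconstructed object points can be sketched as follows; the states 0 and 1 stand, e.g., for TE and TM, and the grid size is an arbitrary illustration:

```python
import numpy as np

def checkerboard(rows: int, cols: int) -> np.ndarray:
    """Checkerboard pattern of two polarization states so that all lateral neighbours differ."""
    r, c = np.indices((rows, cols))
    return (r + c) % 2  # 0 -> first polarization state, 1 -> orthogonal state

grid = checkerboard(4, 4)  # every horizontal and vertical neighbour is mutually incoherent
```

Every horizontally and vertically adjacent pair of object points receives orthogonal states and therefore cannot produce coherent retinal inter-object point crosstalk.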
[0210] This simple grid of
[0211] The following explanations refer to the calculation of point spread functions PSF_ij for retinal inter-object point crosstalk reduction relating to head-mounted displays. However, as shown before, the retinal point spread function optimization can be used for all types of sub-hologram based holographic display devices, for one-dimensional encoding as well as for two-dimensional encoding. Consequently, the invention can also be used for direct view display devices, e.g. for desktop display devices using two-dimensional encoding or television display devices using one-dimensional vertical parallax only encoded holograms.
[0212] The simplest case is one-dimensional encoded vertical parallax only (VPO) holography, which uses vertically oriented sub-holograms. If a light source of an illumination unit is adapted in order to provide an optimized absolute value of the complex degree of coherence, i.e. it is not a simple point light source, it can be ensured that only the pixels of one vertical line that have a mutual distance of equal to or less than the size of the largest sub-hologram are mutually coherent.
[0213] Assuming one-dimensional vertical parallax-only encoding and tailored illumination of the illumination unit, the optimization of adjacent point spread functions can be carried out along one direction and for each column of the SLM separately. Furthermore, only close neighbors of the discrete point spread function to be optimized have to be considered.
[0214] For example, taking a sub-hologram of the upper left corner of the SLM, each color can be provided separately, and the retinal point spread functions PSF_ij can then be calculated. The index i may be used to mark the column of the SLM and the index j to mark the row of the SLM used in the calculation process. These are indices of a retinal grid of object points generated in space. These indices may also be used to indicate the discrete sub-holograms relating to the retinal object points. A defined diameter of the entrance pupil of the human eye can be assumed in relation to the brightness of the scene, which is e.g. 2.9 mm for 100 cd/m². All non-optimized sub-holograms may already be generated, or they may be generated one after the other. For example, it is assumed that all non-optimized sub-holograms have already been generated. Then a first point spread function PSF_11 is calculated.
[0215] The computational load of the optimization process can be concentrated on the high definition (HD) cone. This means that a high definition 1/60 degrees resolution can only be seen in a central cone, e.g. at an angle of approximately 10 degrees. During the optimization, it is possible to concentrate primarily on that central cone. Thus, more computing power can be used for the high definition (HD) cone than for other areas, e.g. at the edge of the retina. For a single observer in the observer plane, one high definition cone per eye and color is provided. The number of cones depends on the number of observers. Gaze tracking is required to provide the high definition cone accurately. This means that it is preferred to integrate gaze tracking into the display device.
[0216] Moreover, thinned objects can be used in the non-high definition cone area. For example, at the rim of the field of view, a 4×4 thinning can be used for a two-dimensional encoding, as long as the object points are reconstructed with 16 times larger brightness. This is not a problem because the optical energy per area is kept constant. For a two-dimensional encoding, only every fourth object point may be used along the vertical direction and along the horizontal direction. For vertical parallax-only encoded holograms, a four times thinning can only be carried out along the columns of the SLM.
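The 4×4 thinning with a 16-fold brightness compensation can be sketched as follows: keeping every fourth object point along both directions at 16 times the intensity leaves the optical energy per area unchanged. The function and array names are illustrative:

```python
import numpy as np

def thin_object_points(intensity: np.ndarray, factor: int = 4) -> np.ndarray:
    """Keep every `factor`-th object point along both axes, scaled by factor**2 to conserve energy."""
    thinned = np.zeros_like(intensity)
    thinned[::factor, ::factor] = intensity[::factor, ::factor] * factor ** 2
    return thinned

field = np.ones((8, 8))              # uniform object point intensities
thinned = thin_object_points(field)  # 4 kept points at 16x brightness; total energy unchanged
```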
[0217] It is also possible to project one high definition cone per eye and color into a low resolution frustum. This might be a combination of a direct view display device and a projection display device, or a combination of a large low resolution frustum generating display device and a high definition cone generating display device, which is steered by the gaze tracking data. However, this could add possibly significant technological effort.
[0218] Coming back to a second sub-hologram of the vertical parallax-only encoded hologram example, which generates a point spread function PSF_12 on the retina of an eye of an observer: now, e.g. only the second sub-hologram is changed, i.e. the phase offset to the point spread function PSF_11 and the intensity value of the point spread function PSF_12 are changed in order to obtain the target intensity of the point spread function PSF_11 plus the point spread function PSF_12, which is the design intensity. That is to say, e.g. a phase offset and an intensity change are used. Then, a point spread function PSF_13 is placed adjacent to the two coherently added point spread functions PSF_11 and PSF_12. Once again, e.g. a phase offset and an intensity change are used to change the initial point spread function PSF_13 in order to obtain the design intensity distribution of the coherent sum of the point spread functions PSF_11+PSF_12+PSF_13. This procedure proceeds from j to j+1 to j+2 . . . j+N, i.e. to the last point spread function PSF_ij formed by the discrete column of the SLM, here column 1. Then, the next column of the SLM is processed. For vertical parallax-only encoded holograms, the optimization process along the columns of the SLM can be carried out in parallel. This is due to the fact that the columns of the SLM are mutually incoherent if the tailored illumination is used. To make and keep the calculation and optimization algorithm fast and simple, the local peak intensity value of the point spread function on the retina can be used as the criterion for the optimization process. It could still make sense to use, e.g., the integral intensity value of an angular range of 1/60 degrees instead of the single peak intensity value. The difference is, however, small. Using e.g. three or more sampling points of a single point spread function for the optimization may add more effort, i.e. more computational load.
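The column-wise procedure can be illustrated by a strongly simplified one-dimensional toy model: Gaussian-shaped complex point spread functions are added one after the other, and for each newly added PSF only a phase offset is brute-forced so that the peak intensity of the coherent sum approaches the design intensity. The PSF shape, the sampling and the phase-only search are assumptions made for this sketch; the text above additionally allows an intensity change:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)  # retinal coordinate (arbitrary units)

def psf(center: float, amp: float = 1.0, phase: float = 0.0) -> np.ndarray:
    """Toy complex retinal PSF: Gaussian amplitude with a constant phase offset."""
    return amp * np.exp(-(x - center) ** 2) * np.exp(1j * phase)

def add_optimized(field: np.ndarray, center: float, target_peak: float) -> np.ndarray:
    """Add one PSF, brute-forcing its phase offset to hit the target peak intensity."""
    idx = np.argmin(np.abs(x - center))
    phases = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
    best = min(phases,
               key=lambda ph: abs(abs(field[idx] + psf(center, 1.0, ph)[idx]) ** 2 - target_peak))
    return field + psf(center, 1.0, best)

field = psf(-1.5)                       # PSF_11, placed without optimization
field = add_optimized(field, 0.0, 1.0)  # PSF_12, optimized against the coherent sum
field = add_optimized(field, 1.5, 1.0)  # PSF_13, optimized in the same way
```

Along a real SLM column the procedure runs from j to j+N; mutually incoherent columns can then be processed in parallel, as stated above.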
[0219] For a two-dimensional encoding of holograms, the optimization can be carried out analogously to the one-dimensional encoding of holograms. It may be started e.g. in the upper left corner of the sub-holograms or of the retinal point spread functions PSF_ij of the object points. A first point spread function PSF_11 is formed and a second point spread function PSF_12 is added. This summed-up point spread function is optimized by using a phase offset and, if required, a change of the intensity. Then, e.g. a point spread function PSF_21 is added and also optimized using phase and intensity. Now, a point spread function PSF_22 is added and the phase offset and, on demand, the intensity value are changed. Then, e.g. a point spread function PSF_13 is added and the phase offset and the intensity value are optimized. The next indices of the point spread functions PSF_ij may be e.g. 23 and 31, followed e.g. by 14 and so on. This means, for example, that one can start from the upper left corner of the sub-hologram and fill and optimize the scene step by step until the lower right corner is reached.
[0220] Different paths for this optimization process may be used. For example, one can start with a point spread function PSF_11 and then proceed to the point spread functions PSF_12, PSF_13, PSF_14, . . . to PSF_1N, where N is the number of vertical object points to be generated, e.g. 1000 or even 2000 object points. The number M of object points generated horizontally might be e.g. 2000 to 4000. In detail, this could mean that at first the first column of the sub-hologram is filled and completed and then, step by step, the elements of the second column are added, i.e. the point spread functions PSF_21, PSF_22, PSF_23, PSF_24, . . . to PSF_2N. Here, the step-by-step filling and optimization is carried out from the left-hand side to the right-hand side of the sub-hologram. In this manner, a two-dimensional M×N matrix can be created.
[0221] This optimization, continuing along a predefinable direction on the SLM, can also be carried out in parallel, e.g. if a multi-core integrated circuit is used. Thus, the starting points in the sub-hologram can be chosen arbitrarily, or at least several starting points can be chosen. If locally optimized zones (zones that are filled during the optimization) of the sub-hologram meet each other, the transition zones can be optimized. This can already be done if the mutual gap is only e.g. five point spread functions PSF_ij. This means that a point spread function may be added to the rim of one zone while the small part of the rim of the neighboring zone is already considered during the filling of the gap existing between two adjacent zones.
[0222] Randomized local optimization using multiple randomized starting points may be used to avoid the appearance of artificial and disturbing low spatial frequency modulations. The optimization process can be kept simple by only using a phase offset and an intensity offset of single point spread functions PSF_ij.
[0223] For increasing the calculation speed, which might be required for real time applications, a look-up table (LUT) can be used for image segments that can already be optimized in advance, e.g. lines, surfaces, triangles and small separated objects.
[0224] If gaze tracking data are already used, e.g. in order to use the 10 degrees high definition cone approach in e.g. direct view displays, and if the eye tracking data are used to obtain the diameter of an entrance pupil of an eye of an observer, the point spread functions of the eye picking up the object points in space can be monitored. This means that point spread function data can be used that are closer to the real situation. Thus, better optimization results can be obtained. A look-up table can also be used to represent different point spread functions of the human eye, i.e. different diameters of the entrance pupil of the eye and different focal lengths f_eye.
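A look-up table of eye point spread function parameters keyed by pupil diameter and eye focal length can be sketched as below. The Airy-disc radius serves only as a placeholder for a precomputed PSF, and the focal lengths are assumed values; the 2.9 mm pupil diameter at 100 cd/m² is taken from above:

```python
def airy_radius(wavelength_m: float, pupil_d_m: float, f_eye_m: float) -> float:
    """Airy-disc radius on the retina, 1.22 * lambda * f_eye / D (diffraction-limited eye)."""
    return 1.22 * wavelength_m * f_eye_m / pupil_d_m

# Placeholder LUT: (pupil diameter in mm, eye focal length in mm) -> PSF radius in m
PSF_LUT = {
    (d_mm, f_mm): airy_radius(532e-9, d_mm * 1e-3, f_mm * 1e-3)
    for d_mm in (2.0, 2.9, 4.0)  # pupil diameters; 2.9 mm corresponds to ~100 cd/m^2 as stated above
    for f_mm in (17.0, 22.0)     # assumed eye focal lengths
}
```

At run time the measured pupil diameter and an estimated f_eye select the nearest precomputed eye PSF instead of recomputing it.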
[0225] The optimization process described for a head-mounted display can, of course, also be used for other display devices, e.g. direct view display devices or projection display devices.
[0226] Finally, it must be stated explicitly that the embodiments to the display device described according to the invention shall solely be understood to illustrate the claimed teaching, but that the claimed teaching is not limited to these embodiments. Combinations of embodiments are also possible.