DISPLAY DEVICE AND METHOD FOR OPTIMIZING THE IMAGE QUALITY

20210223738 · 2021-07-22


    Abstract

    The invention relates to a display device for holographic reconstruction of two-dimensional and/or three-dimensional objects. The objects include a plurality of object points. The display device comprises an illumination unit, a spatial light modulator device and a separator. The illumination unit emits sufficiently coherent light. Sub-holograms of object points to be displayed are encoded in pixels of the spatial light modulator device. The separator is provided for separating adjacent point spread functions in an eye of an observer generated by the sub-holograms of adjacent object points such that the adjacent point spread functions are mutually incoherent.

    Claims

    1. A display device for holographic reconstruction of two-dimensional and/or three-dimensional objects including a plurality of object points, comprising: an illumination unit emitting sufficiently coherent light; a spatial light modulator device, in which sub-holograms of object points to be displayed are encoded in pixels; and a separator for separating adjacent point spread functions in an eye of an observer generated by the sub-holograms of adjacent object points such that the adjacent point spread functions are mutually incoherent.

    2. The display device according to claim 1, wherein the object is divided into at least two object planes, where each object plane is divided into at least two vertical subsets and at least two horizontal subsets, which are angularly displaced or shifted relative to each other.

    3. The display device according to claim 1, wherein for one-dimensional encoded holograms or for two-dimensional encoded holograms in the spatial light modulator device, the separator is designed as a color filter stripes arrangement, preferably a primary color filter stripes arrangement.

    4. The display device according to claim 1, wherein each initial pixel of the spatial light modulator device is subdivided into at least two defined parts representing at least two subsets and generating at least two wave fields.

    5. The display device according to claim 4, wherein a triplet of color filter stripes is assigned to each subset.

    6. The display device according to claim 3, wherein the color filter stripes arrangement is an absorptive-type dye-based filter arrangement or a dielectric filter arrangement, which is structured and assigned to the subsets.

    7. The display device according to claim 4, wherein for a two-dimensional hologram to be encoded, the at least two defined parts of the initial pixel form two halves, where the pixel is separated horizontally or vertically.

    8. The display device according to claim 1, wherein the separator is designed as an arrangement of patterned retarders.

    9. The display device according to claim 8, wherein the arrangement of patterned retarders is provided for transforming light having a defined polarization state into two patterned light subsets.

    10. The display device according to claim 8, wherein the arrangement of patterned retarders is provided in a plane of the pixels and assigned to the pixels of the spatial light modulator device, where each defined part of the initial pixel is provided with a defined patterned retarder of the arrangement of patterned retarders.

    11. The display device according to claim 10, wherein the at least two defined parts of the initial pixel have different patterned retarders providing orthogonal polarization.

    12. The display device according to claim 8, wherein the polarization orientations of adjacent patterned retarders, seen only in the horizontal direction or only in the vertical direction, are orthogonal to each other.

    13. The display device according to claim 8, wherein the arrangement of patterned retarders is designed as an arrangement of patterned polarization filters assigned to the at least two defined parts of the initial pixels.

    14. The display device according to claim 13, wherein the arrangement of patterned polarization filters provides a striped pattern, which has an alternating orientation of the polarization state transmitted.

    15. The display device according to claim 13, wherein the arrangement of patterned polarization filters provides a pattern of orthogonal polarization states, which is a fixed pattern along the vertical direction (y direction) and the horizontal direction (x direction), where along the depth direction (z direction) the pattern is inverted and is used in an alternating way.

    16. The display device according to claim 1, further comprising a non-patterned retarder arranged behind the spatial light modulator device, seen in the propagation direction of light, for providing light having a single exit polarization state containing two mutually incoherent wave fields.

    17. The display device according to claim 1, wherein in the calculation of the sub-hologram representing the object point a wedge function is used for laterally shifting the object points within a defined angular range.

    18. The display device according to claim 17, wherein the wedge function is an arbitrary shaped two-dimensional phase wedge function.

    19. The display device according to claim 1, wherein the relative phase of complex values of wavefronts for the individual object points is defined in such a way that the difference between the total intensity distribution in the eye of the observer generated by the point spread functions representing adjacent object points of the object and the target intensity distribution is minimized.

    20. The display device according to claim 1, wherein the amplitude of complex values of wavefronts for the individual object points is defined in such a way that the difference between the total intensity distribution in the eye of the observer generated by the point spread functions representing adjacent object points of the object and the target intensity distribution is minimized.

    21. The device according to claim 1, wherein an apodization profile is provided in the plane of the pixels of the spatial light modulator device to achieve apodized sub-holograms of the individual object points of an object.

    22. The display device according to claim 1, wherein the sub-holograms are modifiable in their shapes.

    23. The display device according to claim 1, wherein a fixed predefined grid of object point spread functions provided in the eye of the observer is used.

    24. The display device according to claim 1, wherein the illumination unit is adapted in such a way to emit two orthogonally polarized wave fields, preferably by using a wire grid polarizer structure.

    25. The display device according to claim 1, wherein the illumination unit comprises at least one light source, preferably a laser or a laser diode, provided to generate a wave field.

    26. The display device according to claim 1, wherein the illumination unit comprises at least one light source per primary color.

    27. The display device according to claim 1, wherein the illumination unit comprises a stripe-like light source arrangement.

    28. The display device according to claim 1, wherein per primary color at least two mutually incoherent light sources are provided.

    29. The display device according to claim 1, wherein the spatial light modulator device is illuminated with an angular spectrum of plane waves of less than 1/60 degree along the coherent direction and of 0.5 to 1 degree along the incoherent direction.

    30. The display device according to claim 1, wherein the mutual coherence field is limited to a maximum extension, the maximum extension being the size of the largest sub-hologram in the spatial light modulator device.

    31. The display device according to claim 1, wherein the spatial light modulator device is designed as a complex-valued spatial light modulator device, which is able to reconstruct different incoherent object point subsets relating to different primary colors.

    32. A method for optimization of the image quality of reconstructed two-dimensional and/or three-dimensional objects, where each object includes a plurality of object points, where for each object point a sub-hologram is calculated which is encoded in pixels of a spatial light modulator device, where reconstructed adjacent object points generate adjacent point spread functions in an eye of an observer, the point spread functions are separated by a separator such that the adjacent point spread functions superpose merely incoherently in the eye of the observer.

    33. The method according to claim 32, wherein incoherent subsets of wave fields representing the object point to be displayed to the observer are generated and superposed incoherently.

    Description

    [0076] In the drawing:

    [0077] FIG. 1 shows a schematic representation of a display device in connection with a method for the reconstruction of a three-dimensional object with a computer-generated hologram;

    [0078] FIG. 2 shows intensity distributions of point spread functions, where adjacent point spread functions are superposed, according to the prior art,

    [0079] FIG. 3 shows a separator designed as a color filter stripes arrangement according to the present invention,

    [0080] FIG. 4 shows single lines of seven white object points reconstructed by the part of a spatial light modulator device shown in FIG. 1,

    [0081] FIG. 5 shows an illustration of a retinal placement of focussed and non-focussed object points by an observer looking at a scene including object points,

    [0082] FIG. 6 shows a part of a spatial light modulator device, i.e. ten by ten pixels, having pixel apertures and a fill factor of 0.9, where a binary amplitude transmission is provided,

    [0083] FIG. 7 shows an intensity distribution of a Fourier transformation of the intensity distribution shown within FIG. 6 representing the amplitude distribution of a plane of the spatial light modulator device,

    [0084] FIG. 8 shows a part of a spatial light modulator device using only the right half of the pixel apertures and a fill factor of approximately 0.5, where a binary amplitude transmission is provided,

    [0085] FIG. 9 shows an intensity distribution of a Fourier transformation of the intensity distribution shown within FIG. 8 representing the amplitude distribution of a plane of the spatial light modulator device,

    [0086] FIG. 10 shows a part of a spatial light modulator device using only the left half of the pixel apertures and a fill factor of approximately 0.5, where a binary amplitude transmission is provided,

    [0087] FIG. 11 shows an illustration of a two-dimensional wire grid polarizer structure used in an illumination unit of the display device according to the present invention,

    [0088] FIG. 12 shows a part of a spatial light modulator device having pixel apertures and a fill factor of 0.5, where a binary amplitude transmission is provided and a patterned polarization filter for the transmission of a horizontally oriented electrical field is used,

    [0089] FIG. 13 shows a part of a spatial light modulator device having pixel apertures and a fill factor of 0.5, where a binary amplitude transmission is provided and a patterned polarization filter for the transmission of a vertically oriented electrical field is used,

    [0090] FIG. 14 shows a part of a spatial light modulator device provided with an arrangement of patterned retarders, where two subsets of a pixel of the spatial light modulator device are nested, where the two subsets have orthogonal exit polarization states,

    [0091] FIG. 15 shows a part of a spatial light modulator device having pixel apertures and a fill factor of approximately 0.25, where a binary amplitude transmission is provided,

    [0092] FIG. 16 shows an intensity distribution of a Fourier transformation of the intensity distribution shown within FIG. 15,

    [0093] FIG. 17 shows a part of a spatial light modulator device provided with an arrangement of patterned retarders, where two subsets of a pixel of the spatial light modulator device are nested orthogonal to the one of FIG. 14, where the two subsets have orthogonal exit polarization states,

    [0094] FIG. 18 shows an illustration of a checkerboard-like allocation pattern of orthogonal polarization states, which refers to three-dimensional object points reconstructed in space or at a retina of an eye of an observer.

    [0095] Like reference designations denote like components in the individual figures and accompanying description, if provided. In the following, the designations “in front of” and “behind”, e.g. in front of the spatial light modulator device, are to be understood with respect to the propagation direction of the light.

    [0096] A display device for the holographic reconstruction of two-dimensional and/or three-dimensional scenes or objects comprises a spatial light modulator device 4 and an illumination unit 5. The scene or the object includes a plurality of object points as shown in FIG. 1. FIG. 1 schematically represents the encoding of a scene or object into the spatial light modulator device 4. A three-dimensional object 1 is constructed from a plurality of object points, of which only four object points 1a, 1b, 1c and 1d are represented here in order to explain the encoding. A virtual observer window 2 is furthermore shown, through which an observer (indicated here by the eye represented) can observe a reconstructed scene. With the virtual observer window 2 as a defined viewing region or visibility region and the four selected object points 1a, 1b, 1c and 1d, a pyramidal body is respectively projected through these object points 1a, 1b, 1c and 1d and in continuation onto a modulation surface 3 of the spatial light modulator device 4 (only represented partially here). In the modulation surface 3, this results in encoding regions in the spatial light modulator device 4, where the shape of the encoding region does not have to correspond to the shape of the viewing window 2. That is to say, the encoding region on the spatial light modulator device 4 can also be larger or smaller than specified by the projection of the viewing window 2 through the object point onto the modulation surface 3. The encoding regions are assigned to the respective object points 1a, 1b, 1c and 1d of the object, in which the object points 1a, 1b, 1c and 1d are each holographically encoded as a sub-hologram 3a, 3b, 3c and 3d. Each sub-hologram 3a, 3b, 3c and 3d is therefore written, or encoded, in only one region of the modulation surface 3 of the spatial light modulator device. As can be seen from FIG. 1, depending on the position of the object points 1a, 1b, 1c and 1d, the individual sub-holograms 3a, 3b, 3c and 3d may overlap fully or only partially (i.e. only in certain regions) on the modulation surface 3. In order to encode, or write, a hologram for the object 1 to be reconstructed into the modulation surface 3 in this way, the procedure described above must be carried out with all object points of the object 1. The hologram is therefore constructed from a multiplicity of individual sub-holograms 3a, 3b, 3c, 3d, . . . 3_n. The hologram computer-generated in this way in the spatial light modulator device is illuminated for reconstruction by the illumination unit 5 (only schematically illustrated) in conjunction with an optical system.

    [0097] With reference to FIG. 1, the individual sub-holograms 3a, 3b, 3c and 3d within the section of the hologram defined by the encoding regions have an essentially constant amplitude, the value of which is determined as a function of brightness and distance of the object points, and a phase which corresponds to a lens function, the focal length of the lens as well as the size of the encoding regions varying with the depth coordinate of the object point. Outside the section defined by the encoding regions, the amplitude of the individual sub-hologram has the value 0. The hologram is obtained as the complex-valued sum of all sub-holograms 3a, 3b, 3c, 3d . . . 3_n.
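
    The constant-amplitude, lens-phase structure of a sub-hologram described above can be sketched as follows. This is only an illustrative sketch using the paraxial thin-lens phase; the function name, pixel pitch, wavelength and focal lengths are assumptions, not values from the patent.

```python
import numpy as np

def sub_hologram(n_px, pitch, wavelength, focal_length, amplitude=1.0):
    """Complex sub-hologram: essentially constant amplitude and a lens
    phase whose focal length follows the object point's depth
    (paraxial approximation; all parameter values are illustrative)."""
    half = n_px * pitch / 2
    x = np.linspace(-half, half, n_px)
    X, Y = np.meshgrid(x, x)
    # Paraxial thin-lens phase: phi(x, y) = -pi * (x^2 + y^2) / (lambda * f)
    phase = -np.pi * (X**2 + Y**2) / (wavelength * focal_length)
    return amplitude * np.exp(1j * phase)

# The hologram is the complex-valued sum of all sub-holograms, each placed
# in its encoding region (here simply added at the same position and size).
h = sub_hologram(64, 8e-6, 532e-9, 0.5) + sub_hologram(64, 8e-6, 532e-9, 0.7)
```

    Outside its encoding region a sub-hologram contributes amplitude 0, so in a full implementation each array would be embedded at its own offset in the modulation surface before summation.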

    [0098] The illumination unit 5 can contain several specific modifications to be used preferably within a holographic display device. The illumination unit can be used for coherent light and for light which only shows reduced spatial and/or temporal coherence. Amplitude apodization and phase apodization can be used to optimize the intensity profile which propagates behind the entrance plane of the illumination unit 5. Color filters give the opportunity to optimize this for different colors separately. The specifications depend on the specific embodiment.

    [0099] In the following, the suppression of retinal inter object point crosstalk, which reduces the image quality of the reconstructed scene or object, will be described and explained. This retinal inter object point crosstalk is caused during the holographic reconstruction of the three-dimensional scene or object.

    [0100] There is a plurality of parameters to be optimized in the display device in order to obtain a required image quality. One parameter to be considered is the diameter of the entrance pupil of the human eye. For this, a priori knowledge of the point spread function is used, which is close to the real situation that applies to an observer watching a holographic three-dimensional scene. Data obtained by using an eye tracking and eye detecting system, which detects the position of an eye of an observer at a defined position relative to the display device, can be used. The diameter of an entrance pupil of the eye of the observer depends on the luminance of the scene or object the observer is watching. Thus, values might be used that refer to the present luminance of the scene or the object. Furthermore, the images provided by the eye tracking and eye detecting system, which comprises at least one camera for recording the position of the observer and especially for recording the entrance pupil of the eye of the observer, can also be used to extract a more exact value of the diameter of the entrance pupil of the eye of the observer.

    [0101] In principle, the eye of an observer might have an Airy-shaped point spread function which is used to “pick up” the three-dimensional field emanating from an object. If the eye of the observer is focussed on an object point that is placed e.g. at 1 m, the point spread function of the object point placed at said 1 m and imaged on the retina of the eye is smaller than the point spread function of an object point placed e.g. at 0.8 m and smaller than the point spread function of an object point placed at 1.5 m. In other words, the object points the observer is focussing on are transferred to the retina of his eye with the smallest point spread function. However, object points out-of-focus or even only slightly out-of-focus have larger point spread functions than the point spread functions of in-focus object points. Defocusing means widening the point spread function of the corresponding defocussed object plane.
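
    The widening of the point spread function with defocus can be estimated with a simple small-angle geometric sketch. The pupil diameter and distances below are illustrative assumptions; the real retinal point spread function would additionally include the diffraction-limited (Airy) core.

```python
import math  # not strictly needed here; kept for extensions such as Airy radii

def defocus_blur_angle(pupil_diameter_m, focus_dist_m, object_dist_m):
    """Approximate angular diameter (radians) of the geometric blur circle
    for an eye focused at focus_dist_m viewing a point at object_dist_m.
    Small-angle geometric estimate for illustration only."""
    return pupil_diameter_m * abs(1.0 / object_dist_m - 1.0 / focus_dist_m)

# Eye focused at 1 m with an assumed 3 mm pupil:
in_focus = defocus_blur_angle(3e-3, 1.0, 1.0)   # zero geometric blur
near = defocus_blur_angle(3e-3, 1.0, 0.8)       # object point at 0.8 m
far = defocus_blur_angle(3e-3, 1.0, 1.5)        # object point at 1.5 m
```

    Both out-of-focus distances yield a nonzero blur angle, matching the statement above that the in-focus plane is transferred with the smallest point spread function.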

    [0102] These “pick up and wave transfer” functions, i.e. the point spread functions of the plane that is focussed on, of the wave fields of all object points of an object have to pass the same entrance pupil of the eye of the observer. Due to the fact that the adjacent object points of the object the observer is watching are very close to each other, the transfer wave fields emanating from these object points hit the entrance pupil of the eye of the observer at the same location or place and at approximately the same angle. Thus, the phase function of the entrance pupil of the eye which has to be considered is the same. In other words, there is a common path arrangement here. The complex-valued point spread functions of adjacent object points, which are picked up and transferred to the retina, are the same. By contrast, for object points that are very far apart, slightly different point spread functions have to be considered. For example, for the transfer of object points close to the optical axis of the display device, a narrower point spread function can be used than for object points at the edge of the image, which are transferred with slightly broader point spread functions.

    [0103] For minimizing the retinal inter object point crosstalk between adjacent object points of an object, the following parameters should be modified:

    [0104] I) the relative phase emanating from the object point,

    [0105] II) the relative amplitude emanating from the object point, and

    [0106] III) the lateral position or distance of the adjacent object points to each other, which can be shifted slightly within the angular range of two adjacent diffraction orders. That is to say, a small phase wedge is used with which object points can be shifted in a range of e.g. ± 1/60 degrees or ± 1/40 degrees. Thus, the arrangement differs slightly from an equidistant dot matrix.
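
    The small phase wedge mentioned under III) amounts to a linear phase ramp multiplied onto a sub-hologram, which shifts the reconstructed object point by a small angle. A minimal sketch, assuming an illustrative pixel pitch and wavelength (the names are not from the patent):

```python
import numpy as np

def phase_wedge(n_px, pitch, wavelength, shift_deg_x, shift_deg_y=0.0):
    """Linear phase ramp ("wedge function") shifting an object point by a
    small angle, e.g. within +/- 1/60 degree. Illustrative sketch only."""
    x = (np.arange(n_px) - n_px / 2) * pitch
    X, Y = np.meshgrid(x, x)
    # Linear phase k_x * x + k_y * y corresponds to an angular tilt.
    kx = 2 * np.pi * np.sin(np.radians(shift_deg_x)) / wavelength
    ky = 2 * np.pi * np.sin(np.radians(shift_deg_y)) / wavelength
    return np.exp(1j * (kx * X + ky * Y))

# Multiplying a sub-hologram element-wise by this wedge shifts its object
# point laterally by 1/60 degree without changing its amplitude profile:
wedge = phase_wedge(64, 8e-6, 532e-9, 1.0 / 60.0)
```
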

    [0107] For optimizing the image quality of the reconstructed object or scene, the object or the scene is divided into individual depth planes before carrying out the holographic reconstruction. These values for the relative phase, the relative amplitude and for the lateral position have to be optimized for each single discrete depth plane, e.g. 128 depth planes, for a set of entrance pupil diameters as e.g. 2 mm, 2.2 mm, 2.4 mm, . . . 3.6 mm which are correlated with the luminance presented to the eye and for each primary color RGB (red, green, blue). Thus, a generated data set including optimized values for the relative phase, for the relative amplitude and for the lateral position can be saved in a look-up table (LUT). These generated data sets can be included in the calculation of the sub-holograms to be encoded in the spatial light modulator device.
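
    The look-up table described above can be organized, for example, as a mapping keyed by depth plane, entrance pupil diameter and primary color. The structure below is only a sketch; the stored values are placeholders standing in for actual optimization results.

```python
# LUT of optimized values per (depth plane, pupil diameter in mm, color),
# following the example of 128 depth planes, pupils 2.0 ... 3.6 mm in
# 0.2 mm steps, and the three primary colors RGB.
lut = {}
pupils_mm = [2.0 + 0.2 * i for i in range(9)]  # 2.0, 2.2, ... 3.6 mm
for plane in range(128):
    for pupil in pupils_mm:
        for color in ("R", "G", "B"):
            lut[(plane, round(pupil, 1), color)] = {
                "rel_phase": 0.0,      # placeholder optimization result
                "rel_amplitude": 1.0,  # placeholder optimization result
                "lateral_shift": 0.0,  # placeholder optimization result
            }

# During sub-hologram calculation the matching data set is looked up:
entry = lut[(42, 2.4, "G")]
```
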

    [0108] A first approach for determination of an assumable aperture of a pupil of an eye of an observer might use the average luminance to be able to choose an entrance pupil diameter which is at least within the right range, e.g. 50-250 cd/m² for television or 100-300 cd/m² for a desktop monitor. The luminance can be calculated from the image content. A second approach might use the data of an eye tracking system to measure the entrance pupil diameter and to choose the right data subset of the look-up table.
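
    The patent does not prescribe a formula for the luminance-based first approach; one published empirical fit (de Groot and Gebhard) is sketched below purely for illustration, and both the formula choice and the unit conversion are assumptions on my part.

```python
import math

def pupil_diameter_mm(luminance_cd_m2):
    """Rough entrance-pupil estimate from scene luminance, using the
    de Groot-Gebhard empirical fit (an illustrative assumption; the
    patent itself only requires a luminance-to-diameter mapping)."""
    # Fit expects luminance in millilamberts: 1 mL = 10 / pi cd/m^2.
    l_ml = luminance_cd_m2 * math.pi / 10.0
    # log10(D) = 0.8558 - 4.01e-4 * (log10(L) + 8.6)^3
    return 10 ** (0.8558 - 4.01e-4 * (math.log10(l_ml) + 8.6) ** 3)
```

    The estimate decreases with luminance, so brighter content selects a smaller-pupil data subset of the look-up table.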

    [0109] During the calculation of a sub-hologram corresponding to an object point, as one possibility for optimizing the parameters above, the average luminance can be used to choose the entrance pupil diameter of the eye, which might be substantially within a required range, e.g. between 25 cd/m² and 1000 cd/m². Another possibility can be to use the data obtained by an eye tracking and detecting system. With these data the entrance pupil diameter can be measured and the required data subset of the look-up table can be chosen. In other words, an image recorded by a camera of the eye tracking and detecting system in connection with the distance measurement can be used to determine the diameter of the pupil.

    [0110] A further possibility might be to use the distance of the entrance pupils of the eyes of an observer to define the rotation angle of the two optical axes of the eyes. In this way the point of intersection of the two optical axes which is in the focal distance of the eyes can be determined. For this an individual calibration for each observer might be required. This can be done by implementing a calibration routine which is processed by each observer once.

    [0111] However, only a limited set of parameters can be modified or adapted or altered.

    [0112] An example is the plurality of object points which might be real and thus in front of a display device. The eye of an observer might be focussed on this plane (or these planes) of object points. The point spread function of the eye of the observer picks up these object points and transfers them to the retina of the eye of the observer.

    [0113] There are several options to proceed, where the options can be combined if necessary or required or suitable:

    [0114] 1)

    [0115] A single object point can be shifted virtually in its depth plane in such a way that the difference between the “should be/target intensity distribution on the retina of the eye of the observer I(x,y)_retina” and the “is/total intensity distribution on the retina of the eye of the observer I(x,y)_retina” is minimized, where I is the intensity distribution in the plane of the retina of an eye and x and y are the coordinates within the retina of the eye, referring to the values of an x-axis and a y-axis. This can be done by introducing small offset phase functions in the calculation of the sub-holograms to be encoded into the spatial light modulator device, in the following also referred to as SLM. Shifts of object points within an angular range of a one-dimensional or two-dimensional viewing window provided in the observer plane are irrelevant for the present invention.

    [0116] 2)

    [0117] The relative phase, or more precisely the mutual phase difference, of the individual object points can be chosen in such a way that the difference between the “should be/target intensity distribution on the retina of the eye of the observer I(x,y)_retina” and the “is/total intensity distribution on the retina of the eye I(x,y)_retina” is minimized. For this, the eye of an observer is included in the calculation process. The generation of the image is calculated on the retina. Thus, the retina is the reference plane. The starting point is a scene to be encoded. An iterative optimization of the image on the retina can be carried out. In a first step all sub-holograms can be added and propagated to the retina. Then, the deviation of the total intensity distribution on the retina from the target intensity distribution on the retina can be determined. The phase, the amplitude and the position can be changed. The deviation can be redetermined. This can be carried out by using an iterative loop. A threshold of deviation can be chosen as termination condition, e.g. if the deviation is smaller than 5%. It is also possible to limit the number of iterations.
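
    The iterative loop described above might look like the following toy sketch, which adjusts only the relative phases of the per-object-point fields by random perturbation. The patent does not specify a particular solver; the random-search strategy, the perturbation width and all names here are illustrative assumptions. The 5% threshold and the bounded iteration count follow the text.

```python
import numpy as np

def optimize_phases(fields, target, max_iter=50, threshold=0.05, seed=0):
    """Adjust relative phases of complex per-object-point fields so the
    total retinal intensity approaches the target intensity.
    Toy random-search sketch, not the patent's optimization method."""
    rng = np.random.default_rng(seed)
    n = len(fields)
    phases = np.zeros(n)

    def deviation(ph):
        # Coherent superposition of the phased fields, then intensity.
        total = np.abs(sum(np.exp(1j * p) * f for p, f in zip(ph, fields))) ** 2
        return np.sum(np.abs(total - target)) / np.sum(target)

    best = deviation(phases)
    for _ in range(max_iter):          # limited number of iterations
        if best < threshold:           # e.g. 5% termination condition
            break
        trial = phases + rng.normal(0.0, 0.3, n)  # perturb relative phases
        d = deviation(trial)
        if d < best:                   # keep only improving steps
            phases, best = trial, d
    return phases, best
```

    The same loop structure applies when amplitude and position are varied as well; only the perturbed parameters change.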

    [0118] 3)

    [0119] The intensity or the amplitude of the individual object points can be chosen in such a way that the difference between the “should be/target intensity distribution on the retina of the eye of the observer I(x,y)_retina” and the “is/total intensity distribution on the retina of the eye I(x,y)_retina” is minimized. For this, the eye of an observer is included in the calculation process. The generation of the image is calculated on the retina. Thus, the retina is the reference plane. The starting point is a scene to be encoded. An iterative optimization of the image on the retina can be carried out. In a first step all sub-holograms can be added and propagated to the retina. Then, the deviation of the total intensity distribution on the retina from the target intensity distribution on the retina can be determined. The phase, the amplitude and the position can be changed. The deviation can be redetermined. This can be carried out by using an iterative loop. A threshold of deviation can be chosen as termination condition, e.g. if the deviation is smaller than 5%. It is also possible to limit the number of iterations.

    [0120] 4)

    [0121] For reasonably large object points, which may be e.g. as large as 50% of the point spread functions which pick up the object points and transfer them to the retina of the eye of the observer, the object point can be modified in such a way that the difference between the “should be/target intensity distribution on the retina of the eye of the observer I(x,y)_retina” and the “is/total intensity distribution on the retina of the eye I(x,y)_retina” is minimized. This can be done e.g. by using apodized sub-holograms representing the object points which are provided within the plane that is picked up by the point spread function of the eye. All object points the observer is watching are generated by the SLM. Thus, the complex-valued distribution present in the sub-holograms of the SLM can be used in order to generate point spread functions with reduced side lobes. This can be carried out by using apodized sub-holograms, which are capable of generating point spread functions at the retina of the eye of the observer. The point spread functions should not be Airy distributions but e.g. Gaussian distributions that do not have any side lobes.

    [0122] Side lobes in the intensity distributions generated by the object points can be suppressed or even shaped in a way that minimizes the difference between the “should be/target intensity distribution on the retina of the eye of the observer I(x,y)_retina” and the “is/total intensity distribution on the retina of the eye I(x,y)_retina”. Side lobes can also be increased to do so. Side lobe shape variation is used as a further parameter variation, which can reduce the difference between the total intensity distribution and the target intensity distribution on the retina of the eye of the observer I(x,y)_retina.

    [0123] Such a procedure may work more efficiently for reasonably large object points of the object or scene. The changes in the difference between the “should be/target intensity distribution on the retina of the eye of the observer I(x,y)_retina” and the “is/total intensity distribution on the retina of the eye I(x,y)_retina” may not be very effective if very small object points and thus large sub-holograms are used.

    [0124] The sub-hologram apodization can comprise an amplitude apodization a(x,y)_SLM (amplitude SLM) and a phase apodization phase(x,y)_SLM (phase SLM) too, which together result in a c(x,y)_SLM (complex-valued SLM). Thus, the apodization used within the SLM plane can be complex-valued.
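
    A Gaussian amplitude apodization of the kind described above can be sketched as follows; the width parameter is an illustrative choice, not a value from the patent. A hard-edged (unapodized) aperture produces an Airy-like retinal pattern with side lobes, which the Gaussian taper suppresses.

```python
import numpy as np

def gaussian_apodization(n_px, sigma_frac=0.35):
    """Amplitude apodization profile a(x,y)_SLM over a sub-hologram
    aperture, normalized to the aperture half-width; sigma_frac is an
    illustrative assumption."""
    x = np.linspace(-1, 1, n_px)
    X, Y = np.meshgrid(x, x)
    return np.exp(-(X**2 + Y**2) / (2 * sigma_frac**2))

# Complex-valued apodization c(x,y)_SLM = a(x,y)_SLM * exp(i * phase(x,y)_SLM);
# a flat phase apodization is used in this sketch.
a = gaussian_apodization(64)
c = a * np.exp(1j * np.zeros_like(a))
```
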

    [0125] 5)

    [0126] For a two-dimensional (2D) encoding it is possible to shape the object points by using a modified shape of the sub-holograms used. The adapted shape of the sub-holograms relates to the complex-valued SLM distribution c(x,y)_SLM, which otherwise uses e.g. only a fixed round or quadratic/square shape. For example, hexagonal sub-holograms or sub-holograms slightly changed in their aspect ratio can also be used. In general, the complex-valued distribution can be varied. The parameters used may depend on the content of the three-dimensional scene. This means that the complex-valued distribution of the apodization of the sub-holograms may be changed in accordance with the change of the content. In other words, the distribution of phase and amplitude of the individual sub-holograms can be varied.

    [0127] 6)

    [0128] If it is not possible to realize an overall optimization of the reconstructed object or scene, which includes e.g. all z-planes, where z is the longitudinal distance parallel to the optical axis of the display device, then vergence (gaze) tracking can be used to define the depth plane of interest. For this, it is determined what the observer is looking or gazing at. The eye tracking and detecting system can determine that gaze so that the viewing direction of the observer can be defined. Thus, the results for the encoding of the sub-holograms into the SLM can be optimized with regard to the z-plane or to the range of z-planes the observer is watching.

    [0129] The options explained under 1) to 6) can be combined with each other to achieve a good or required high quality.

    [0130] Although these options mentioned before can be combined, the most direct or most practical way is to use a fixed grid of point spread functions PSF_ij and to optimize the side lobes, the relative phase difference and the intensity of the point spread functions PSF_ij in order to get a reconstructed retinal image that is reasonably close to the designed retinal image of the three-dimensional object or scene. The suffixes ij of the point spread function PSF_ij are indices indicating points of a two-dimensional grid, preferably a virtual grid, placed at the two-dimensional, spherically curved detector plane or surface of the retina.

    [0131] In the following the present invention is described for one-dimensional (1D) encoded holograms in an SLM:

    [0132] In general, the options 1) to 6) described above can be used in addition to the following options for one-dimensional encoded holograms. Thus, the side lobe suppression, the retinal inter object point crosstalk reduction and the optimization in regard to the image quality can further be enhanced. The following explanations refer to one dimension only. The optimization of the retinal image in only one dimension, which means analysing and optimizing the nearest neighbours of the point spread function PSF.sub.ij in only one dimension, can be realized faster than optimizing neighbouring point spread functions PSF.sub.ij in two dimensions. For this reason, an e.g. iterative optimization or analytic optimization can be carried out in real time. This is fast and efficient enough for active user interaction, e.g. in gaming, too.

    [0133] Using the limited angular resolution of the human eye, i.e. of an eye of an observer, is one option that can be used for one-dimensional encoded holograms in an SLM. For that, several one-dimensional encoded lines of object points, which are incoherent to each other and which are seen as one encoded line, are provided. Thus, the pixel density of the incoherent direction on the SLM is increased. Each one-dimensional encoded line generates e.g. one third of the object points which are presented to the observer at 1/60 degrees. A pixel density of e.g. up to 180 pixels per degree is used within the incoherent direction to reduce the crosstalk between adjacent object points which may be seen by the observer.

    [0134] By way of example, the angular resolution of the human eye, which is 1/60 degrees under best-case conditions, is equivalent to a lateral extension of object points that can be resolved. At an average viewing distance of 3.5 m to the display device, which may generally be assumed for a television (TV), 1/60 degrees is equivalent to a lateral extension of 1.02 mm between two object points. Although the real resolution is significantly less, a periodic interval of for instance 1.2 mm may be used as resolution limit for television applications. Real resolution means in this context that the luminance is not provided for the best-case situation or that individual aberrations of the observer's eye may reduce the effective resolution obtained. This value of 1.2 mm was chosen here just to make the example as simple as possible. If a vertical holographic encoding is used, which means vertical parallax only (VPO), the sub-holograms are arranged as vertical stripes on the SLM.
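
    The relation used above between the angular resolution and the lateral extension can be checked with a short calculation. The following sketch assumes only the small-angle geometry stated in the text; the function name is illustrative.

```python
import math

def lateral_extension(angle_deg: float, distance_m: float) -> float:
    """Lateral separation in mm subtended by a small angle at a given distance."""
    return math.tan(math.radians(angle_deg)) * distance_m * 1000.0

# Angular resolution of the human eye under best-case conditions (1/60 degree)
# at the assumed television viewing distance of 3.5 m:
print(round(lateral_extension(1 / 60, 3.5), 2))  # -> 1.02 (mm)
```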

    [0135] Color filters can be used to reduce the frame rate mandatory for the SLM providing the complex-modulated wave field. As generally known, absorptive-type dye-based filter arrays, which are structured in alignment with the SLM pixels, can be used for that. Modern coating technology makes it possible to apply notch filters, e.g. in a striped arrangement, too. This means that a color stripe can reflect two of the primary colors RGB while transmitting the remaining primary color. This can be done with a transmission coefficient greater than 0.9, while reflecting the two non-required wavelengths of this specific stripe with a coefficient close to 1.

    [0136] For example, it can be assumed to provide three color filter stripes within a horizontal width of 1.2 mm, which is reasonably close to the best-case resolution limit of the human eye (1/60 degrees) at 3.5 m viewing distance as explained above.

    [0137] In the prior art it is known to use three color filter stripes within this width of 1.2 mm. Thus, there are three RGB color filter stripes, i.e. the red, the green and the blue color filter stripe, with a width of 400 μm each.

    [0138] According to FIG. 3, the density of the vertical stripes is increased much further. The density of the vertical stripes is e.g. two times (2×), three times (3×) or four times (4×) higher than the density according to the prior art. Now, there are two, three or even four sets of RGB color filter stripes within this exemplary width of 1.2 mm. This means that there are color stripes with a width of only 200 μm, 133.3 μm or 100 μm, respectively.
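
    The stripe widths given above follow directly from dividing the 1.2 mm window by the number of stripes it contains. A minimal sketch (the function name is illustrative):

```python
def stripe_width_um(window_mm: float, rgb_triples: int) -> float:
    """Width of one color filter stripe when `rgb_triples` sets of RGB stripes
    share the given window (three stripes per RGB set)."""
    return window_mm * 1000.0 / (3 * rgb_triples)

print(stripe_width_um(1.2, 1))            # prior art, one RGB set -> 400.0 (um)
print(round(stripe_width_um(1.2, 3), 1))  # 3x density -> 133.3 (um)
print(stripe_width_um(1.2, 4))            # 4x density -> 100.0 (um)
```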

    [0139] A condition for holographic display devices which use diffractive components with e.g. a 40 degrees overall accumulated diffraction angle is a line width of <0.1 nm of a light source of the illumination unit. Furthermore, anti-reflection coatings used, which can, for example, be applied to transparent surfaces of a backlight of the illumination unit at grazing incidence of light, and the spectral selectivity of the Bragg diffraction-based volume gratings used in the display device require a stability of the center wavelength of the light source of 0.1 nm. This can be achieved e.g. with diode-pumped solid state (DPSS) lasers as light sources, which are e.g. available at 447 nm, 457 nm, 532 nm, 638 nm and 650 nm at an optical power of >500 mW each. Furthermore, light sources such as distributed feedback (DFB) laser diodes, which have a Bragg resonator grating within the active medium or reasonably close to that medium, or wavelength-stabilized laser diodes, which make use of external Bragg resonators, can also fulfill these requirements.

    [0140] If the switching time of the light sources, e.g. laser diodes, has to be reduced for any reason, e.g. to 1 ms, additional mechanical shutters or temporally synchronized color filter wheels, which are known from projectors, may be used in the illumination unit. Distributed feedback laser diodes show reasonably fast switching and can be made with different design wavelengths. Furthermore, so-called Q-switched laser arrangements can be used in combination with wavelength-stabilizing Bragg resonator approaches. This shows that practically available laser light sources can be used for the display device according to the invention.

    [0141] At a 3.5 m distance from a viewing window in an observer plane to the display device, a vertical viewing window of 8 mm would require a pixel size of 195.6 μm on the SLM. This means an approximate pixel size of 200 μm. Thus, the vertical pixel pitch is larger than the horizontal pixel pitch.

    [0142] If an average viewing distance of only 1.5 m instead of 3.5 m from a viewing window in an observer plane to the display device were used, the given pixel dimensions would have to be divided by a factor of 2.3. This could be the case where required. For holographic 1D encoded 3D television applications a 3.5 m distance is, however, more reasonable.
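
    The pixel sizes given above are consistent with the common viewing-window relation VW = λ·z/p, assuming the blue design wavelength of 447 nm mentioned above for the DPSS laser light sources. A hedged sketch of this calculation:

```python
def pixel_pitch_um(wavelength_nm: float, distance_m: float, viewing_window_mm: float) -> float:
    """Pixel pitch p in um from the viewing-window relation VW = lambda * z / p."""
    return wavelength_nm * 1e-9 * distance_m / (viewing_window_mm * 1e-3) * 1e6

# 8 mm vertical viewing window at z = 3.5 m with blue light (447 nm assumed):
print(round(pixel_pitch_um(447, 3.5, 8), 1))  # -> 195.6 (um)
# At 1.5 m the pitch shrinks by the factor 3.5/1.5, i.e. roughly the 2.3 in the text:
print(round(pixel_pitch_um(447, 1.5, 8), 1))  # -> 83.8 (um)
```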

    [0143] FIG. 3 shows a part of an SLM in the front view. The SLM is provided with a separator for separating adjacent point spread functions in an eye of an observer generated by the sub-holograms of adjacent object points such that the adjacent point spread functions are mutually incoherent to each other. The separator is designed as a color filter arrangement here, preferably a primary color (RGB) filter arrangement. Such a color filter arrangement is provided mainly for a three times high definition (HD) oversampled 1D encoded holographic 3D television display device but could also be provided for a two-dimensional (2D) encoded holographic 3D television display device. Such a display device is designed for an average observer viewing distance of z.sub.mean=3.5 m to the display device. At this viewing distance the horizontal extension of the color filter arrangement of 1.2 mm as shown in FIG. 3 is equivalent to 1/60 degrees, which is the angular resolution of the human eye. In this embodiment of the separator or of the color filter arrangement, three striped color filters are provided for each primary color RGB (red, green, blue), i.e. the stripes r1, g1, b1, r2, g2, b2, r3, g3, b3, and are assigned to the part of the SLM having a horizontal dimension of 1.2 mm. In other words, each part of the SLM having a horizontal dimension of 1.2 mm is provided with a color filter arrangement comprising three striped color filters r1, g1, b1, r2, g2, b2, r3, g3, b3 for each primary color RGB. This means that nine striped color filters are provided within the horizontal angular range of 1/60 degrees. The reference signs r1, r2 and r3 denote the red color filter stripes, the reference signs g1, g2 and g3 denote the green color filter stripes and the reference signs b1, b2 and b3 denote the blue color filter stripes. In FIG. 3, different filling patterns mark the color filter stripes of the three different primary colors RGB.

    [0144] Of course, it is also possible to orient the color filter arrangement in the horizontal direction if the encoding direction lies in the horizontal direction.

    [0145] A schematic representation of object points reconstructed by the part of the SLM shown in FIG. 3 is shown in FIG. 4. For explanation seven object points are used.

    [0146] FIG. 4 A) shows the reconstruction of seven white object points OP of an object at a vertical angular distance of 1/60 degrees. Each circle shown marks the first minimum of the intensity distribution of the diffraction pattern of the point spread function present on the retina of an eye of an observer. For the sake of simplicity a circular shape of the object points OP is assumed here. That is only for illustration of this aspect. However, such a circular shape of the object point OP is not quite correct for one-dimensional encoded holograms, which are identified with the term vertical parallax only.

    [0147] FIG. 4 B) shows the reconstruction of seven red object points at a vertical angular distance of 1/60 degrees. These seven red object points form the red subset of the white object points according to FIG. 4 A). As shown, the red subset includes all parts that are generated by the color filter stripes r1, r2 and r3.

    [0148] FIG. 4 C) shows the reconstruction of the part of the red subset that is only generated by the color filter stripe r1. In other words, the color filter stripe r1 generates the red subset of the white object points OP for the first, fourth, seventh, tenth, . . . object point OP according to FIG. 4 A). Here it can be seen that the color filter stripe r1 generates red object points, here three red object points, that do not superpose.

    [0149] FIG. 4 D) shows the reconstruction of the part of the red subset that is only generated by the color filter stripe r2. In other words, the color filter stripe r2 generates the red subset of the white object points OP for the second, fifth, eighth, eleventh, . . . object point OP according to FIG. 4 A). The color filter stripe r2 also generates red object points, here two object points, that do not superpose. The object points generated by the color filter stripe r2 are reconstructed with an offset of half the circle diameter relative to the object points generated by the color filter stripe r1.

    [0150] FIG. 4 E) shows the reconstruction of the part of the red subset that is only generated by the color filter stripe r3. In other words, the color filter stripe r3 generates the red subset of the white object points OP for the third, sixth, ninth, twelfth, . . . object point OP according to FIG. 4 A). The color filter stripe r3 also generates red object points, here two object points, that do not superpose. The object points generated by the color filter stripe r3 are reconstructed with an offset of half the circle diameter relative to the object points generated by the color filter stripe r2.

    [0151] The procedure described for the color red is applied accordingly to the other primary colors green and blue.

    [0152] As a consequence, seven white object points are reconstructed by using three primary colors RGB with three laterally displaced color filter stripes, which are allocated to each primary color RGB. Within the horizontal angular range of 1/60 degrees the vertical color filter stripes denoted by r1, g1, b1, r2, g2, b2, r3, g3 and b3 are provided, as can be seen in FIG. 3. For illuminating the SLM having the separator, which is here designed as a color filter stripes arrangement, tailored horizontally incoherent light is used. The spatial coherence of the light used can be e.g. >0.9 along the vertical direction, which is the encoding direction of the sub-holograms. The longitudinal extension of reasonably high coherence, which means close to 1, can be e.g. 5 mm, or 5 mm to 10 mm.

    [0153] It is important to prepare the mutual coherence of the columns of the SLM used for the 1D encoding in such a way that adjacent columns are mutually incoherent to each other. This can be done by using a stripe-like light source arrangement in the illumination unit.

    [0154] As can be seen from FIG. 4, a single line or part of a one-dimensional (1D) encoded holographic display device is divided into three different colors and into additional subsets, which refer to the individual primary colors RGB.

    [0155] As can be seen further from FIG. 4, there is no overlap or superposition between the object points reconstructed from a coherent subset. The circles each show the first minimum of the diffraction pattern of the reconstructed object point, which means the first minimum of the retinal point spread function. Strictly, additional circles would have to be shown or provided, which show the outer side lobes of the point spread function. However, for the sake of clarity only the first minima or the first side lobes are shown in FIG. 4.

    [0156] In general, no superposition of the individual circles means that sufficient separation of the adjacent point spread functions on the retina of the eye of an observer is provided. However, there might be a small portion of light which still superposes with two adjacent coherent reconstructed object points. But that has no significant effect on the quality of the reconstructed scene or objects. In addition, these small values of residual errors of the target intensity distribution to be obtained on the retina of the eye of the observer can be considered and used in an optimization algorithm of the optimization process, which approximates the detected retinal image to the target retinal image, i.e. without recognizable retinal inter object point crosstalk. The algorithm refers to a target/actual comparison and an iterative variation of parameters. Further optimization of the retinal image for avoiding retinal inter object point crosstalk can be provided by applying e.g. individual or all of the options described and explained above under items 1) to 6).

    [0157] The described SLM comprising a separator designed as a color filter stripes arrangement is illuminated by the illumination unit having at least one light source emitting an angular spectrum of plane waves of e.g. 0.5 degrees to 1 degree in the horizontal direction. Such an angular spectrum of plane waves is sufficient to span a horizontal sweet spot in an observer plane if the coherent direction is the vertical direction, and vice versa. The angular spectrum of plane waves is preferably significantly smaller than 1/60 degrees, e.g. only 1/120 degrees, along the vertical direction, which is the direction of the encoding of the sub-holograms of the one-dimensional (1D) encoded holographic display device for the reconstruction of three-dimensional scenes or objects.

    [0158] An encoding unit or computation unit provided in the display device splits the content, preferably the high definition (HD) content, of the object points into the subsets according to FIG. 4. Thus, FIG. 4 also shows the instruction for the reassembling of the content to be encoded. Every third point, or even every fourth point for a four color filter stripes arrangement, within an angular spectrum of plane waves of ≤ 1/60 degrees of a one-dimensional (1D) vertical line on the SLM is assigned to another sub-color filter line of the part of the SLM shown in FIG. 3. This can simply be transferred to a block diagram of an electronic circuit providing a fast reallocation of the individual sub-holograms, which are generated by defined object points in the three-dimensional (3D) space.
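
    The reallocation rule described above can be sketched as a simple round-robin split of the object points of one vertical line over the staggered color filter stripes. The helper below is purely illustrative and is not part of the described electronic circuit:

```python
def split_into_subsets(object_points: list, stripes: int = 3) -> list:
    """Assign every `stripes`-th object point of a 1D encoded line to the
    same color filter stripe (round-robin reallocation)."""
    subsets = [[] for _ in range(stripes)]
    for i, op in enumerate(object_points):
        subsets[i % stripes].append(op)
    return subsets

line = ["OP1", "OP2", "OP3", "OP4", "OP5", "OP6", "OP7"]
r1, r2, r3 = split_into_subsets(line)
print(r1)  # ['OP1', 'OP4', 'OP7'] -> stripe r1, cf. FIG. 4 C)
print(r2)  # ['OP2', 'OP5']        -> stripe r2, cf. FIG. 4 D)
print(r3)  # ['OP3', 'OP6']        -> stripe r3, cf. FIG. 4 E)
```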

    [0159] The embodiment schematically shown in FIG. 4 describes the situation of a spatial displacement of object points, which are mutually coherent, in the case where the observer is focussing on the object points.

    [0160] FIG. 5 shows a retinal positioning of focussed and non-focussed object points of an object or a scene in the eye of an observer. As can be seen, the energy of non-focussed object points is spread out and will thus generate a retinal background. The relative blur is highest if the observer looks at the foreground of the three-dimensional (3D) scene. The relative blur is lowest if the observer looks at the background of the three-dimensional (3D) scene, which means at holographically reconstructed object points that are far away, e.g. several meters behind the holographic display plane, which is the plane where the computer-generated hologram (CGH) of the sub-holograms is positioned.

    [0161] FIG. 5 A shows the most relaxed situation. The retinal background, which arises from sub-holograms with the largest negative values of the focal lengths of object points, is widely spread. But this background can be coherently superimposed on the object points the observer is looking at or focussing on. In other words, the observer is looking at the small circle approximately between the eye and the CGH. Therefore, the image of the circle is imaged exactly on the retina of the eye. The non-focussed object points, here illustrated as a rectangle and a star, are also imaged into the eye, but they are not focussed on the retina of the eye. The object point illustrated as a rectangle is far behind the display device illustrated as CGH and will thus only result in a widely spread background, as can be seen on the right-hand side of FIG. 5.

    [0162] FIG. 5 B shows the situation where the observer is looking at the star, which is provided in the plane of the CGH. The non-focussed object points, illustrated as the rectangle and the circle, are also imaged into the eye and behind the eye, but they are not focussed on the retina of the eye.

    [0163] FIG. 5 C shows the situation where the observer is looking at the rectangle, which is provided behind the plane of the CGH. The non-focussed object points, illustrated as the circle and the star, are imaged behind the retina of the eye, so they are not focussed on the retina of the eye.

    [0164] With the provision of a color filter stripes arrangement as a separator on the SLM, the mutual coherence between adjacent color filter stripes of the color filter stripes arrangement can be eliminated. For this, a spatially extended light source can be used in the illumination unit. The aspect ratio of the light source to be collimated can be e.g. 1:60. In this manner, there is no coherence in the horizontal direction (the non-encoding direction). Thus, coherent superposition of adjacent color filter stripes, and the disturbance of the image quality caused in this way, can be prevented.

    [0165] According to the invention, the additional vertical separation introduced by using additional color filter stripes in addition to one set of color filter stripes (comprising only one red stripe, one green stripe and one blue stripe), and thus the higher pixel count, eliminates the mutual coherence between object points which are neighbours along the vertical direction. This brings about an additional reduction of the mutual coherence and thus a further reduction of the retinal inter-object point crosstalk.

    [0166] However, the coherence of inner axial object points still exists. The expression “coherence of inner axial object points” refers to the coherence of object points sharing a common overlap region of their sub-holograms, encoded as one-dimensional (1D) lens line segments. This means that all other object point crosstalk no longer has to be dealt with, except for the crosstalk generated by object points referring to a single color filter, where the object points are positioned behind each other, which means along the z-direction parallel to the optical axis of the display device, and are positioned adjacent to each other, which means in a plane that is perpendicular to the z-axis, in an out-of-focus situation. This means the situation where the observer is looking at a different plane and the plane which is considered here is not in focus.

    [0167] The optimization described above has to be applied to a reduced number of defined object points only. This means, for the color filter stripes arrangement and for a one-dimensional encoding of holograms the optimization is only carried out in one dimension and, for example, only for 3 to 4 neighboring object points.

    [0168] FIG. 5 also contains the concept of generating a weighting matrix. Such a weighting matrix can be used for the optimization of e.g. phase values given to different object points. In the case of FIG. 5 A) the object point far behind the display device and illustrated as a rectangle only results in a widely spread background on the retina and might thus be ignored in a first order approach.

    [0169] In the case of FIG. 5 C) the relative phase values of all inline and close-to-inline object points relating to the same color filter stripe along the extension of the spatial coherence, which is e.g. 5 mm vertically, have to be optimized relative to each other, since the three axial object points are close to each other. Inline means here, for example, that within 1/60 degrees three staggered lines are seen as only one line. One option is to set the overlapping sub-holograms of inline object points relating to the same coherent region of a single color filter stripe to the same phase values. However, in general, the phase values, as one parameter only, have to be optimized in regard to the image content displayed. This also includes the relative intensities of overlapping sharp or blurred coherent object points.

    [0170] The following explanations refer to the illumination unit comprising at least one light source which can be used for a one-dimensional encoding of holograms. The coherence of the light emitted by the at least one light source has to be as low as possible but as high as required for a holographic encoding. A tracking angle to be introduced for tracking a viewing window in an observer plane according to a movement of an observer and additional diffractive optical elements provided in the display device introduce an optical path difference within a region based on the extension of a sub-hologram. Therefore, the line width of the light source, designed e.g. as a laser light source, has to be ≤0.1 nm. In addition to the optical path difference introduced, an increased line width would also introduce a smearing in the reconstruction. The smearing may be due to the diffractive dispersion introduced by the diffractive optical elements used in the display device. In the process all effects sum up.
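
    The ≤0.1 nm line width can be related to a coherence length via the standard approximation L ≈ λ²/Δλ. This relation is not stated in the text but illustrates the order of magnitude of tolerable optical path differences; the 532 nm green line is taken from the laser wavelengths listed above.

```python
def coherence_length_mm(wavelength_nm: float, linewidth_nm: float) -> float:
    """Approximate temporal coherence length L = lambda^2 / delta_lambda."""
    return wavelength_nm ** 2 / linewidth_nm * 1e-6  # convert nm to mm

# A 0.1 nm line width at the green 532 nm line gives roughly 2.8 mm, i.e. the
# optical path differences introduced by tracking and the diffractive optical
# elements must stay below this order of magnitude.
print(round(coherence_length_mm(532, 0.1), 1))  # -> 2.8 (mm)
```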

    [0171] The line width of the light source of the illumination unit, which has to be ≤0.1 nm, is only one aspect of the coherence. Another aspect is the extension of the spatial coherence or, more explicitly, the absolute value of the mutual coherence. The mutual coherence between adjacent color filter stripes can be eliminated as disclosed above, while sufficient coherence of the light, e.g. >0.8, can be provided along the direction of the color filter stripes, i.e. along the encoding direction. Additionally, the mutual coherence region, which is tailored to be a one-dimensional line-like segment orientated in parallel to the color filter stripe(s), is limited to a maximum extension according to the size of the largest sub-hologram.

    [0172] For specifying the maximum of the optical path difference, and thus the line width of the light source used or the maximum extent of the mutual coherence, it is not the entire size of the viewing window and its projection onto the SLM, which can be used to define the size of the sub-hologram, that has to be considered. It is better to consider only the entrance pupil of the human eye or of the eye of the observer. The entrance pupil of the eye can be used to specify the maximum of the optical path difference, and thus the line width of the light source used or the maximum extent of the mutual coherence, in order to obtain the required coherence parameters having the lowest coherence properties.

    [0173] Reducing the coherence of the light used is a basic requirement to provide high image contrast and the intended retinal image without disturbing effects. In other words, it is important to reduce the coherence of the light in such a way that reasonably high coherence, as required, is still provided in order to prevent unintentional coherent crosstalk. Further, the complex-valued point spread functions of the entire system, which includes the illumination unit, the SLM and the retina of the eye of an observer, i.e. the complete display device in connection with the eye of the observer, have to be optimized too.

    [0174] In the following, the present invention is described for two-dimensional (2D) encoded holograms in an SLM, which in detail use procedures for the reduction of the retinal inter object point crosstalk:

    [0175] The relation to a two-dimensional (2D) encoding of holograms has several aspects. The general requirements of optimizing the point spread functions in relation to the final design intensity distribution or to the target intensity distribution of the perfect image detected by the retina of the eye of an observer have already been described and explained above for the one-dimensional encoded holograms.

    [0176] The generation of independent and mutually incoherent subsets of the wave fields representing the three-dimensional (3D) object, which has already been described for one-dimensional (1D) encoded holograms, can also be applied to two-dimensional (2D) encoded holograms. In other words, a separator designed as a color filter arrangement can also be applied to two-dimensional encoded holograms. The color filter arrangement has to be adapted to the SLM used, in which the holograms are encoded in two coherent directions. For example, a Bayer color filter array or Bayer pattern can be used as the color filter arrangement.

    [0177] For reducing crosstalk between adjacent point spread functions on the retina of the eye of an observer, for example, a standard pixel aperture of a pixel of the SLM can be used, which is e.g. 33 μm×33 μm for a two-dimensional encoded three-dimensional holographic display device used at a viewing distance of 600 mm. For the sake of simplicity, a rectangular shaped pixel aperture of a pixel can be assumed. Furthermore, apodization profiles can be applied, e.g. a Gauss-type amplitude apodization or a so-called Kaiser-Bessel window.

    [0178] By way of example, it is assumed that an SLM having rectangular shaped apertures of pixels is used. This is illustrated in FIG. 6, where 10×10 pixels are shown. The fill factor FF of the SLM is approximately FF=0.9, which is an idealized value. Such a fill factor might only be realized e.g. by a reflective-type SLM, e.g. an LCoS (liquid crystal on silicon) SLM, but not by a transmissive-type SLM with a pixel pitch of 33 μm.

    [0179] FIG. 7 shows the intensity distribution of the Fourier transformation of the intensity distribution shown in FIG. 6, which represents the amplitude distribution of an SLM plane. The central spot is equivalent to the 0.sup.th diffraction order of a diffraction pattern of the SLM within the plane of its Fourier transformation, which is the plane of the viewing window or the observer plane. Due to the very high fill factor of FF=0.9 assumed for the SLM, there are probably no visible diffraction orders higher than the central zero diffraction order spot. For this calculation a constant phase of the SLM is assumed. In other words, a phase change introduced by the encoding will definitely lead to significantly increased intensity values of the higher diffraction orders, which are present within the plane of the eye of the observer. A fill factor of FF=1 will not completely eliminate the intensity in the higher diffraction orders, i.e. the intensity of diffraction orders with a diffraction order index m larger than 0, or in two directions with m.sub.x and m.sub.y>1. The higher diffraction orders will be present if no constant phase distributions are written into the SLM. But in general, although the values of the higher diffraction order peaks will change with the encoded content displayed on the SLM, a higher fill factor will cause less intensity in the higher diffraction orders than a smaller fill factor. However, a constant phase is assumed for the generic layout of the SLM described herein.

    [0180] As disclosed above, an SLM having rectangular shaped apertures of the pixels is assumed. However, the pixels are now, for example, non-quadratic and have a ratio of width to height of 1 to 2. Such an SLM is shown in FIG. 8, where 10×10 pixels are illustrated. The fill factor FF of the SLM is approximately FF=0.5; it can also be a little bit smaller than 0.5, e.g. only 0.45. But in order to keep this embodiment simple and comprehensible, a fill factor of 0.5 can be assumed here. The pixel pitch is e.g. 33 μm in both directions, i.e. horizontal and vertical. The height of a pixel of the SLM is close to 33 μm while the width of said pixel is close to only 16 μm. Only the right half of the pixel apertures of the SLM is used in this embodiment.

    [0181] FIG. 9 shows the intensity distribution of the Fourier transformation of the intensity distribution shown in FIG. 8 in the plane of the eye of an observer. The central peak is the intensity of the 0.sup.th diffraction order. The larger fill factor of the SLM in the y-direction, i.e. in the horizontal direction, leads to reduced side lobes along the y-direction in the plane of the eye of the observer, which is the plane of the viewing window or the observer plane. Thus, it is preferred to use the larger fill factor along the horizontal direction of the SLM. The intensity distribution shown in FIG. 9 is equivalent to the intensity distribution of the viewing window plane in the case of encoding an empty hologram, i.e. of using a constant phase value in the SLM plane and the same amplitude for all pixels of the SLM. Compared to FIG. 6 and its Fourier transformation shown in FIG. 7, the decreased horizontal width of the pixels leads to increased ±1.sup.st horizontal diffraction orders of the SLM in its Fourier transformation plane, which is the plane of the viewing window within which the eye of the observer is provided. For this embodiment shown in FIGS. 8 and 9, the horizontal diffraction orders larger than m=±3 show intensities which are small enough that they will not disturb the viewing experience in the neighbouring eye. Here, no significant horizontal ±4.sup.th diffraction orders exist.
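
    The behaviour of the diffraction orders described above follows the sinc² envelope of a rectangular aperture. The sketch below reproduces it for a width fill factor of 0.5, under the idealizing assumption of a constant phase; the vanishing of the even orders, including the ±4.sup.th, is a property of exactly 50% fill.

```python
import math

def order_intensity(m: int, fill: float) -> float:
    """Relative intensity of the m-th diffraction order for a rectangular
    pixel aperture of width fill * pitch (sinc^2 envelope, 0th order = 1)."""
    if m == 0:
        return 1.0
    x = math.pi * m * fill
    return (math.sin(x) / x) ** 2

# Width fill factor 0.5 as in FIGS. 8 and 10: even orders vanish, odd orders decay.
for m in range(5):
    print(m, round(order_intensity(m, 0.5), 3))  # 0:1.0, 1:0.405, 2:0.0, 3:0.045, 4:0.0
```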

    [0182] By way of example, if a wavelength of λ=450 nm for the blue primary color, a focal length of f=600 mm of the volume grating based field lens used within a desktop-type holographic three-dimensional (3D) display device and a pixel pitch of 33 μm are assumed, then a viewing window in an observer plane formed by the blue light has an extension of approximately 8 mm times 8 mm. The 3.sup.rd diffraction order is provided at approximately 24 mm from the zero diffraction order spot. For a wavelength of λ=650 nm, assumed for the red primary color, the 3.sup.rd diffraction order is provided at approximately 35 mm from the zero diffraction order spot. This means that for an average distance of the two eyes of an observer of 65 mm, a distance of 35 mm is sufficient.
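
    The distances given above follow from the grating equation in the small-angle approximation, x_m = m·λ·f/p. A short check (the function name is illustrative):

```python
def order_position_mm(m: int, wavelength_nm: float, focal_length_m: float, pitch_um: float) -> float:
    """Lateral position of the m-th diffraction order in the viewing-window
    plane: x_m = m * lambda * f / p (small-angle approximation)."""
    return m * wavelength_nm * 1e-9 * focal_length_m / (pitch_um * 1e-6) * 1e3

# Blue (450 nm), f = 600 mm field lens, 33 um pixel pitch:
print(round(order_position_mm(1, 450, 0.6, 33), 1))  # -> 8.2, i.e. the ~8 mm viewing window
print(round(order_position_mm(3, 450, 0.6, 33), 1))  # -> 24.5, the ~24 mm 3rd order
# Red (650 nm): 3rd order at ~35 mm, still below the 65 mm average eye separation.
print(round(order_position_mm(3, 650, 0.6, 33), 1))  # -> 35.5
```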

    [0183] FIG. 10 shows a binary amplitude transmission of an SLM having rectangular shaped apertures of the pixels and a fill factor of approximately 0.5. Here, 10×10 pixels are shown again. The embodiment shown in FIG. 10 is the equivalent of only using the left half of the pixel apertures shown in FIG. 6, or of using the areas not used in the distribution shown in FIG. 8. That is to say, according to FIG. 10 only the left half of the pixel apertures is used. Of importance here is the fact that the initial situation of the pixels shown in FIG. 6 is used and two subsets out of this initial SLM are generated. A right subset is shown by the SLM of FIG. 8 and a left subset is shown by the SLM of FIG. 10.

    [0184] The right subset of the initial SLM shown in FIG. 6, which is illustrated in FIG. 8, and the left subset of the initial SLM, which is illustrated by the SLM shown in FIG. 10, generate equivalent intensity distributions in the Fourier plane of the SLM. In other words, the intensity distributions of the Fourier plane of the amplitude distribution shown in FIG. 8 and of the amplitude distribution shown in FIG. 10 are the same and will be as shown in FIG. 9, if a constant phase is used in the SLM. At this point it is not of relevance that the phase of the two Fourier transformations is different. This only has to be considered if the two subsets of the SLM shown as amplitude distributions in FIGS. 8 and 10 are superimposed coherently. According to the invention, however, an incoherent superposition of these two subsets of the SLM is used.

    [0185] Different types of subsets of the SLM can be used in order to generate incoherent subsets of wave fields representing the three-dimensional (3D) holographic object to be displayed to the observer. For generating incoherent subsets of wave fields a separator can be used. As separator, a color filter stripes arrangement providing spatially separated colors, an arrangement of patterned retarders providing spatially separated orthogonal polarization states or a light source arrangement in the illumination unit providing a spatially separated allocation of the wave field illuminating the SLM can be used.

    [0186] The physical 50% addressing of the SLM is used. For the sake of simplicity and for a simple explanation of the present invention, only simple embodiments of the present invention are considered here. Simple embodiments means using only the simple subsets of an SLM, i.e. e.g. the two simple subsets of the SLM of FIG. 6, which are shown in FIGS. 8 and 10.

    [0187] If the fill factor FF is much smaller than shown in FIG. 10, it is preferred to subdivide a primary square shaped pixel of e.g. 33 μm×33 μm into two subsets which are obtained by using an upper and a lower part of the pixel instead of a right and a left part of the pixel. Thus, it might be preferred to implement a ratio of the width to the height of the pixel of 2 to 1. Higher diffraction orders of the SLM will then be dominant along the vertical direction and not along the horizontal direction, which reduces potential crosstalk between the content displayed to the left eye and the right eye of the observer. The probability of using this embodiment is increased if the critical dimension of the manufacturing process of the SLM, which is the smallest structural dimension of the implemented layout of the SLM, is e.g. only 5 μm. A critical dimension of 3 μm will lead to a larger fill factor. Therefore, this embodiment is preferred if a critical dimension of e.g. only 5 μm is available.
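
    To illustrate how the critical dimension limits the fill factor, a simple geometric model can be sketched. The model below assumes that each sub-aperture loses one critical dimension of width along the split direction and that the full pixel loses one critical dimension of height; this model and its function names are illustrative only and are not taken from the specification:

```python
def fill_factor(pitch_um, critical_dim_um, n_sub=2):
    """Fill factor of one sub-pixel subset under the assumed model:
    a pixel of the given pitch is split into n_sub sub-apertures, each
    losing one critical dimension of width, while one critical dimension
    of height is lost over the whole pixel."""
    sub_pitch = pitch_um / n_sub
    open_width = sub_pitch - critical_dim_um
    open_height = pitch_um - critical_dim_um
    return (open_width / sub_pitch) * (open_height / pitch_um)

fill_factor(33.0, 5.0)  # ~0.59 for a 5 um critical dimension
fill_factor(33.0, 3.0)  # ~0.74: a smaller critical dimension gives a larger fill factor
```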

    [0188] The following describes an embodiment of an SLM provided with a separator which is designed as an arrangement of patterned retarders. An arrangement of patterned retarders is used for transforming light that is incident on the SLM and has an initial polarization state, which might be e.g. a linear polarization state, into two patterned subsets of the light. The two patterned subsets of the light have orthogonal polarization states. For example, the primary, e.g. quadratic/square shaped, pixel aperture, as can be seen e.g. in FIG. 6, is divided into two parts. This means that the initial pixel count and, therefore, also the initial pixel density is doubled. The two pixel subsets of all pixels of the SLM, as can be seen e.g. in FIGS. 8 and 10, are provided with an arrangement of patterned retarders. A first subset of a pixel is provided with e.g. a +π/4 patterned retarder and a second subset of the pixel is provided with e.g. a −π/4 patterned retarder. If the SLM comprising these two subsets of a pixel is illuminated with linearly polarized light, two orthogonally polarized wave fields will then exist at the exit plane of the SLM, which refer to the two SLM subsets carrying different patterned retarders.

    [0189] The following section describes whether one, two or several light sources per color are provided. If classic optics or in general non-polarization-selective optics are used to form the plane of the viewing window, then the described embodiments for generating two spatially interlaced subsets of the wave field representing the three-dimensional object to be presented to the observer can be used. Adjacent object points imaged on the retina of an eye of an observer show orthogonal polarization states and thus superimpose in the same way as mutually incoherent points, or in more detail as mutually incoherent retinal point spread functions. In other words, along one direction there is no coherence. Thus, there is no coherent retinal inter object point crosstalk along one direction between adjacent object points, which are adjacent point spread functions on the retina of the eye of the observer.

    [0190] However, if the optical elements following the SLM within the beam path are polarization selective or require only a single polarization state, a different way has to be used in order to implement two mutually incoherent wave fields. In this case a common exit polarization state has to be used. This means that no mutual incoherence would exist if a single primary light source is used.

    [0191] Per primary color at least two mutually incoherent light sources should be used, which illuminate the SLM. The SLM comprises e.g. a separator designed as an arrangement of patterned retarders. The arrangement of patterned retarders is assigned to the pixels of the SLM. Preferably, the arrangement of patterned retarders is designed as an arrangement of patterned polarization filters assigned to the at least two defined parts of the pixels, especially to the two subsets of the pixel apertures of the SLM.

    [0192] For example, a wedge-type illumination unit can be used, which is optimized in order to accept two orthogonally polarized wave fields. One wave field comes from a first light source of the illumination unit. This light can be e.g. TE (transverse electric) polarized. Another wave field comes from a second light source of the illumination unit. This light can be e.g. TM (transverse magnetic) polarized. Finally, the SLM is illuminated with both wave fields.

    [0193] FIG. 11 illustrates the embodiment of a two-dimensional wire grid polarizer, which can be implemented as one of the two mirrors used at the ends of the resonator of a laser diode as light source. The pattern shown can be realized by generating two crossed highly reflective one-dimensional (1D) wire grid structures. The period of this special wire grid-type polarizer is smaller than λ/(2n), where λ is the wavelength of the light source, e.g. the laser diodes, and n is the refractive index of the substrate/structure of the polarizer. Two linear orthogonal polarization states have a maximum reflectivity of close to 1. A metallic two-dimensional striped wire grid polarizer structure can be enhanced in its reflectivity by adding a dielectric layer stack. For example, the wire grid polarizer structure shown in FIG. 11 or different mirror versions can be used at the end of a light source cavity in order to provide e.g. two orthogonal linear exit polarization states out of the SLM. By adding e.g. a Bragg-type resonator mirror to the illumination unit, wavelength stabilization can be implemented too. Thus, a line width of the light source of e.g. 0.1 nm can be combined with a stable wavelength, which shifts e.g. by less than 0.1 nm during operation of the display device. This structure can be further combined or further developed to obtain two orthogonally polarized exit beams out of the SLM, which are mutually incoherent. This means that a cost-efficient single light source, e.g. a laser diode-type light source, can be realized, which can be used in the display device according to the present invention.
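
    The sub-wavelength condition λ/(2n) on the wire grid period can be evaluated numerically. A short sketch, assuming a substrate refractive index of n=1.5 (the index value is an assumption for illustration; the specification only gives the bound itself):

```python
def max_wire_grid_period_nm(wavelength_nm, n_substrate):
    """Upper bound lambda/(2*n) on the period of the 2D wire grid
    polarizer mirror so that it remains sub-wavelength in the substrate."""
    return wavelength_nm / (2.0 * n_substrate)

max_wire_grid_period_nm(450.0, 1.5)  # 150 nm for the blue primary color
max_wire_grid_period_nm(650.0, 1.5)  # ~217 nm for the red primary color
```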

    [0194] For other applications e.g. three or more mutually incoherent exit beams out of an SLM can be generated. These exit beams are linearly polarized.

    [0195] In FIG. 12 a binary amplitude transmission of an SLM is shown. The SLM comprises rectangular shaped pixel apertures and a fill factor of approximately FF=0.5. Here, 10×10 pixels are shown again, as example. The fill factor is the same as the fill factor of the SLM shown in FIG. 8. A separator designed as an arrangement of patterned retarders, preferably a patterned polarization filter, is assigned to the pixels of the SLM, particularly to the apertures of the pixels of the SLM. The patterned polarization filter allows the transmission of a horizontally oriented electrical field. Here, only one patterned polarization filter is required, which can be assigned to all pixels of the SLM.

    [0196] FIG. 13 shows a binary amplitude transmission of an SLM having rectangular shaped pixel apertures and a fill factor of approximately FF=0.5. Here, 10×10 pixels are shown again, as example. The fill factor is the same as the fill factor of the SLM shown in FIG. 10. A separator designed as an arrangement of patterned retarders, preferably a patterned polarization filter, is assigned to the pixels of the SLM, particularly to the apertures of the pixels of the SLM. The patterned polarization filter allows the transmission of a vertically oriented electrical field. Here, only one patterned polarization filter is required, which can be assigned to all pixels of the SLM.

    [0197] A nested arrangement of two subsets of a pixel of an SLM is shown in FIG. 14. Two adjacent subsets in a row of the SLM each generate orthogonal exit polarizations of the light out of the SLM. This means that two adjacent columns of the SLM generate orthogonal exit polarizations of the light out of the SLM. The embodiment shown in FIG. 14 is a combination of the embodiments shown in FIGS. 12 and 13. A single patterned polarization filter according to the patterned filters shown in FIGS. 12 and 13 cannot be used for this embodiment of the SLM. Therefore, a patterned polarization filter has to be used that comprises nested polarization segments assigned to the individual pixels or individual columns of the SLM. Alternatively, two patterned polarization filters have to be used that are arranged relative to each other in such a way that two adjacent subsets of a pixel generate orthogonal exit polarizations of the light out of the SLM.

    [0198] Adding a further single polarizing filter behind the SLM, seen in the propagation direction of the light, provides a single light exit polarization state, which contains two mutually incoherent wave fields, both carrying a part of the three-dimensional object scene.

    [0199] This wave field can now propagate through all optical elements of the display device, which follow within the beam path regardless of the polarization selectivity of these elements. For example, a polarization-type LC grating following the SLM in the beam path has to be illuminated with circular polarized light; a retarder has then to be used for providing the required polarization state of the wave field illuminating it.

    [0200] An arrangement of color filter stripes can also be used in the SLM plane for two-dimensional (2D) encoding of holograms. This might be more complex, however, since an initial pixel aperture of the pixel of the SLM, which can be e.g. 33 μm times 33 μm for a holographic three-dimensional desktop display device, has to be divided into at least three sub-pixels or three subsets or generally into three defined parts of the pixel. FIG. 15 shows a binary amplitude transmission of an SLM having rectangular shaped pixel apertures and a fill factor of approximately FF=0.25 only. Here, 10×10 pixels are shown again, as example. This is equivalent to using the lower right quarter of the pixel apertures shown in FIG. 6, i.e. only ¼ of the maximum aperture. As a matter of course, different defined parts of the pixel can also be used, for example the upper left quarter of the pixel.

    [0201] FIG. 16 shows an intensity distribution of the Fourier transformation of the intensity distribution shown in FIG. 15. This intensity distribution is generated in the plane of an eye of an observer. The central peak in the illustration shows the intensity of the 0.sup.th diffraction order. The small fill factor of FF=0.25 of the SLM increases the intensity of the higher diffraction orders. It can be seen that it is possible to implement e.g. three sub-pixels within the e.g. initial 33 μm times 33 μm pixel size while keeping the higher diffraction orders, which exist within the plane of the viewing window in the observer plane, within an acceptable limit.
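
    The dependence of the higher diffraction orders on the fill factor follows the sinc.sup.2 envelope of the rectangular pixel aperture. A short sketch of that relation (an illustrative calculation under the rectangular-aperture assumption; not code from the specification):

```python
import math

def order_intensity(m, fill_factor):
    """Relative intensity of the m-th diffraction order for a rectangular
    pixel aperture with the given fill factor along one direction:
    the sinc^2 envelope of the single-aperture diffraction pattern."""
    x = math.pi * m * fill_factor
    if x == 0.0:
        return 1.0
    return (math.sin(x) / x) ** 2

order_intensity(1, 0.25)  # ~0.81: small fill factor -> strong 1st order
order_intensity(1, 0.50)  # ~0.41: larger fill factor suppresses higher orders
```

This reproduces the behaviour described above: the small fill factor of FF=0.25 raises the intensity of the higher diffraction orders compared to FF=0.5, while even orders vanish entirely at FF=0.5.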

    [0202] A sub-pixel or a subset of the pixel comprising a color filter segment of the arrangement of color filter stripes relating to one of the primary colors RGB has an extension of e.g. 16 μm times 16 μm only. It is probably expensive to realize pixels as small as this. However, it could be possible in a few years without high technical effort. In addition, a small critical dimension is required within the manufacturing of the pixels in order to keep the fill factor as high as possible. Thus, e.g. a critical dimension of 3 μm might be required in order to realize color filters within a two-dimensional encoded complex-valued SLM.

    [0203] Furthermore, an arrangement of two-dimensional color filter stripes might be combined advantageously with an arrangement of patterned retarders designed e.g. as orthogonal polarization filters. However, this could reduce the practical critical dimension in the manufacturing of the SLM down to e.g. only 2 μm. The initial pixel size of e.g. 33 μm×33 μm has to be divided e.g. into six defined parts or subsets of the pixel or sub-pixels. This means three colors in relation to the color filter stripes and two additional patterned polarization filters. Each polarization filter is assigned to a triplet of color filter stripes. Thus, each primary color RGB is represented by two small subsets of the pixel. The two subsets of the pixel emit orthogonally polarized light.

    [0204] For example, each pixel aperture shown in FIG. 14 can be sub-divided into e.g. three color subsets of the pixel. This requires, however, a significant technological effort and might therefore not be the fastest way to an initial product.

    [0205] In addition to rectangular arrangements of the apertures of the pixels of an SLM also e.g. hexagonal arrangements of the apertures of the pixels may be used. These arrangements can also be provided with an arrangement of patterned retarders, preferably patterned polarization filters, and/or an arrangement of patterned color filter stripes.

    [0206] The probably more practical realization of two orthogonal polarizations of the light emitted by the SLM could be, in general, to encode a wedge function into the sub-hologram of the SLM. In this manner object points within the angular range spanned by the viewing window can be shifted laterally. For a two-dimensional encoding of a hologram this can be done along the vertical direction as well as along the horizontal direction. In other words, a left and a right separation of a quadratic/square area of a pixel, as can be seen e.g. in FIG. 14, can generate a horizontal separation, which is a left and a right separation of adjacent orthogonally polarized retinal point spread functions. An upper and a lower separation of a square area of a pixel can generate a vertical separation, which is an upper and a lower separation of adjacent orthogonally polarized retinal point spread functions. This also applies if the initial quadratic area of the pixel shape within the SLM plane is divided into an upper rectangular and a lower rectangular part or subset. Such an SLM would result if the SLM shown in FIG. 14 were rotated by 90 degrees clockwise or counterclockwise. This is shown in FIG. 17, in which an arrangement of polarization filters in an SLM plane is illustrated, where the arrangement of polarization filters is arranged orthogonal to the one of FIG. 14.
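
    The wedge function mentioned above is a linear phase ramp added to a sub-hologram: a ramp with spatial frequency ν shifts the reconstructed point laterally by Δ=λ·f·ν in the focal plane of the field lens. A minimal sketch of such a ramp (function and parameter names are chosen for illustration; the specification does not give code):

```python
import numpy as np

def phase_wedge(nx, ny, pitch_m, shift_x_m, shift_y_m, wavelength_m, focal_m):
    """2D linear phase ramp (wedge function) that shifts the reconstructed
    object point laterally by (shift_x, shift_y) in the focal plane of a
    lens with focal length focal_m."""
    x = (np.arange(nx) - nx / 2) * pitch_m
    y = (np.arange(ny) - ny / 2) * pitch_m
    xx, yy = np.meshgrid(x, y)
    # spatial frequency nu = shift / (lambda * f); phase = 2*pi*(nu_x*x + nu_y*y)
    nu_x = shift_x_m / (wavelength_m * focal_m)
    nu_y = shift_y_m / (wavelength_m * focal_m)
    return 2 * np.pi * (nu_x * xx + nu_y * yy)

# a purely horizontal wedge: the phase varies along x only
w = phase_wedge(8, 8, 33e-6, 1e-3, 0.0, 450e-9, 0.6)
```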

    [0207] Recapitulating, according to the present invention there are, for example, two or even more subsets of wave fields generated by an SLM of the display device, which are mutually incoherent. In the case of a one-dimensional encoding, an arrangement of color filter stripes, an arrangement of patterned retarders, particularly an arrangement of polarization filters having orthogonal polarizations, or combinations thereof can be used in order to provide mutually incoherent subsets of wave fields partially representing a three-dimensional object or scene. As in the case of two-dimensional encoding, in one-dimensional encoding it is also possible to illuminate an SLM with light that has two orthogonal states of polarization and that is emitted from different light sources in the illumination unit. This light can illuminate a striped pattern of a polarization filter, which has an alternating orientation of the transmitted polarization state. Also, as in the case of two-dimensional encoding, the polarization filter is followed by an additional non-patterned retarder, particularly a polarization filter, which transmits a single polarization state only. Light may get lost here. But there are now two mutually incoherent encoded wave fields, which illuminate the optical elements following the SLM in the beam path of the display device. An additional phase wedge can also be encoded in a sub-hologram along one direction. In contrast to the case of one-dimensional encoding, the two-dimensional encoding offers the realization of arbitrarily shaped two-dimensional phase wedge functions encoded in the sub-holograms of the SLM. Only one subset of the potential two-dimensional wedge distributions is needed for that.

    [0208] An advantageous polarization encoding pattern of adjacent object points is given by a checkerboard-like distribution, which is applied to the reconstructed object points. Furthermore, a honeycomb-like distribution may also be used, which also provides two orthogonal polarizations. This is provided in the plane of the object points or in the plane of the retina of an eye of an observer in case the observer focuses on the object point. Furthermore, it is also possible to use other, e.g. random, distributions of the mutually incoherent pattern.

    [0209] In FIG. 18 an illustration of a checkerboard-like allocation pattern of orthogonal polarization states is shown, which refers to three-dimensional object points reconstructed in space or on the retina of an eye of an observer in case the observer focuses on these object points. Object points can be generated at different grids in space. In FIG. 18 the polarization state of 98 pixels times 98 pixels reconstructed in space can be seen. This is e.g. only one plane of the object. In three-dimensional space adjacent depth planes can comprise alternating allocation patterns. This means that object points which have the same x-coordinate (horizontal direction) and y-coordinate (vertical direction) but are placed at adjacent depth planes can preferably have orthogonal polarization states. In other words, the polarization state allocation pattern shown in FIG. 18 can be used along the z-direction (depth direction, i.e. parallel to the optical axis of the display device) in an alternating way, i.e. the polarization states are inverted for adjacent z-planes.

    [0210] This simple grid of FIG. 18 may also be changed to a hexagonal honeycomb-type grid. It is also possible to arbitrarily change the initial pattern in relation to the content of the scene. However, this will probably further increase the complexity of the optimization of the encoding process. Furthermore, the polarization state allocation pattern may be changed in two dimensions (x- and y-direction) as well as along the z-coordinate. The simplest approach, however, could be to use a fixed pattern along the vertical direction (y-direction) and the horizontal direction (x-direction) and to invert it in an alternating way along the depth direction (z-direction), which is the distance to the observer or the distance of the different z-planes to each other.
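
    The checkerboard allocation with inversion along the depth direction can be expressed compactly: the polarization state of an object point is determined by the parity of the sum of its grid indices. A minimal sketch (the index-parity formulation is an illustrative implementation of the pattern described above):

```python
def polarization_state(ix, iy, iz):
    """Checkerboard-like allocation of two orthogonal polarization states
    (labelled 0 and 1) over the object point grid, inverted for adjacent
    depth planes: parity of the sum of the grid indices."""
    return (ix + iy + iz) % 2

# neighbours along x, y and z all receive the orthogonal state
assert polarization_state(0, 0, 0) != polarization_state(1, 0, 0)
assert polarization_state(0, 0, 0) != polarization_state(0, 1, 0)
assert polarization_state(0, 0, 0) != polarization_state(0, 0, 1)
```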

    [0211] The following explanations refer to the calculation of point spread functions PSF.sub.ij for retinal inter object point crosstalk reduction relating to head-mounted displays. However, as can be seen above, the retinal point spread function optimization can be used for all types of sub-hologram based holographic display devices, for one-dimensional encoding and for two-dimensional encoding too. Consequently, the invention can also be used for direct view display devices, e.g. for desktop display devices using two-dimensional encoding or television display devices using one-dimensional vertical parallax only encoded holograms.

    [0212] The most simple case is one-dimensional encoded vertical parallax only (VPO) holography, which uses vertically oriented sub-holograms. If a light source of an illumination unit is adapted in order to provide an optimized absolute value of the complex degree of coherence, i.e. it is not a simple point light source, it can be ensured that only the pixels of one vertical line that have a mutual distance of equal to or less than the size of the largest sub-hologram are mutually coherent.

    [0213] Assuming one-dimensional vertical parallax-only encoding and tailored illumination of the illumination unit, the optimization of adjacent point spread functions can be carried out along one direction and for each column of the SLM separately. Furthermore, only close neighbors of the discrete point spread function to be optimized have to be considered.

    [0214] For example, taking a sub-hologram of the upper left corner of the SLM, each color can be treated separately, and the retinal point spread functions PSF.sub.ij can then be calculated. The index i may be used to mark the column of the SLM and the index j may be used to mark the row of the SLM used in the calculation process. These are indices of a retinal grid of object points generated in space. These indices may also be used to indicate the discrete sub-holograms relating to the retinal object points. A defined diameter of the entrance pupil of the human eye can be assumed in relation to the brightness of the scene, which is e.g. 2.9 mm for 100 cd/m.sup.2. It could be that all non-optimized sub-holograms are already generated or that they will be generated one after the other. For example, it is assumed that all non-optimized sub-holograms were already generated. Then a first point spread function PSF.sub.11 is calculated.

    [0215] The computational load of the optimization process can be concentrated on the high definition (HD) cone. This means that a high definition 1/60 degrees resolution can only be seen in a central cone, e.g. at an angle of approximately 10 degrees. During the optimization it is possible to concentrate primarily on that central cone. Thus, more computational power can be used for the high definition (HD) cone than for other areas, e.g. at the edge of the retina. For a single observer in the observer plane one high definition cone per eye and color is provided. The number of cones depends on the number of observers. Gaze tracking is required to provide the high definition cone accurately. This means that it is preferred to integrate gaze tracking in the display device.

    [0216] Moreover, thinned objects can be used in the non-high definition cone area. For example, at the rim of the field of view, 4×4 thinning can be used for a two-dimensional encoding, as long as the object points are reconstructed with 16 times larger brightness. This is not a problem because the optical energy per area is kept constant. For a two-dimensional encoding only every fourth object point may be used along the vertical direction and along the horizontal direction. For vertical parallax-only encoded holograms a four times thinning can only be carried out along the columns of the SLM.
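
    The energy bookkeeping of the thinning described above can be sketched as follows (an illustrative helper; the name is not from the specification):

```python
def thinning_brightness_factor(thin_x, thin_y=None):
    """Brightness boost required so that thinned object points keep the
    optical energy per area constant: one kept point stands in for
    thin_x * thin_y dropped points."""
    if thin_y is None:
        thin_y = 1  # 1D thinning along the columns (vertical parallax only)
    return thin_x * thin_y

thinning_brightness_factor(4, 4)  # 16 for 4x4 thinning with 2D encoding
thinning_brightness_factor(4)     # 4 for column-only thinning with VPO encoding
```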

    [0217] It can also be possible to project one high definition cone per eye and color into a low resolution frustum. This might be a combination of a direct view display device and a projection display device. Or it can be a combination of a large low resolution frustum generating display device and a high definition cone generating display device, which is defined by the gaze tracking data. However, this could add possibly significant technological effort.

    [0218] Coming back to a second sub-hologram of the vertical parallax-only encoded hologram example, which generates a point spread function PSF.sub.12 on the retina of an eye of an observer. Now, e.g. only the second sub-hologram is changed, i.e. the phase offset relative to the point spread function PSF.sub.11 and the intensity value of the point spread function PSF.sub.12 are changed in order to obtain the target intensity of the point spread function PSF.sub.11 plus the point spread function PSF.sub.12, which is the design intensity. That is to say, e.g. a phase offset and an intensity change are used. Then, a point spread function PSF.sub.13 is placed adjacent to the two coherently added point spread functions PSF.sub.11 and PSF.sub.12. Once again e.g. a phase offset and an intensity change are used to change the initial point spread function PSF.sub.13 in order to obtain the design intensity distribution of the coherent sum of the point spread functions PSF.sub.11+PSF.sub.12+PSF.sub.13. This procedure continues from j to j+1 to j+2 . . . j+N, i.e. to the last point spread function PSF.sub.ij formed by the discrete column of the SLM, here column 1. Then, the next column of the SLM is processed. For vertical parallax-only encoded holograms the optimization process carried out along the columns of the SLM can be parallelized. This is due to the fact that the columns of the SLM are mutually incoherent if the tailored illumination is used. To make and keep the calculation and optimization algorithm fast and simple, the peak intensity value of the point spread function provided locally on the retina can be used as criterion for the optimization process. It could still make sense to e.g. use the integral intensity value of an angular range of 1/60 degrees instead of the single peak intensity value. The difference is, however, small. Using e.g. three or more sampling points of a single point spread function for the optimization may add more effort, i.e. more computational load.
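
    The column-wise procedure above can be sketched as a greedy loop: each newly added point spread function receives the phase offset and intensity (amplitude) value that bring the intensity of the coherent sum closest to the design intensity. The sketch below assumes Gaussian stand-ins for the retinal point spread functions and a brute-force search over a small phase/amplitude grid; all function names are illustrative and the specification does not prescribe a particular search method:

```python
import numpy as np

def gaussian_psf(x, x0, sigma=1.0):
    # simplified stand-in for a retinal point spread function centered at x0
    return np.exp(-((x - x0) ** 2) / (2 * sigma ** 2))

def optimize_column(x, centers, target, n_phase=16, amps=(0.8, 0.9, 1.0, 1.1, 1.2)):
    """Greedy column-wise optimization: for each new PSF, pick the phase
    offset and amplitude minimizing the squared deviation of the coherent
    sum's intensity from the design (target) intensity."""
    field = np.zeros_like(x, dtype=complex)
    params = []
    phases = np.linspace(0.0, 2.0 * np.pi, n_phase, endpoint=False)
    for x0 in centers:
        psf = gaussian_psf(x, x0)
        best_err, best_a, best_ph = None, None, None
        for a in amps:
            for ph in phases:
                trial = field + a * np.exp(1j * ph) * psf
                err = np.sum((np.abs(trial) ** 2 - target) ** 2)
                if best_err is None or err < best_err:
                    best_err, best_a, best_ph = err, a, ph
        field = field + best_a * np.exp(1j * best_ph) * psf
        params.append((best_a, best_ph))
    return field, params
```

Because the columns are mutually incoherent under the tailored illumination, one such loop can run per column in parallel.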

    [0219] For a two-dimensional encoding of holograms the optimization can be carried out in an analogous way to the one-dimensional encoding of holograms. It may be started e.g. in the upper left corner of the sub-holograms or of the retinal point spread functions PSF.sub.ij of the object points. A first point spread function PSF.sub.11 is formed and a second point spread function PSF.sub.12 is added. This summed up point spread function is optimized by using a phase offset and, if required, a change of the intensity. Then, e.g. a point spread function PSF.sub.21 is added and also optimized using phase and intensity. Now, a point spread function PSF.sub.22 is added and the phase offset and, on demand, the intensity value are changed. Then, e.g. a point spread function PSF.sub.13 is added and the phase offset and the intensity value are optimized. Next indices of the point spread function PSF.sub.ij may be e.g. 23 and 31, followed e.g. by 14 and so on. This means, for example, that it can be started from the upper left corner of the sub-hologram and the scene can be filled and optimized step by step until the lower right corner is reached.

    [0220] Different paths for this optimization process may be used. For example, it can be started with a point spread function PSF.sub.11 and then proceed to the point spread functions PSF.sub.12, PSF.sub.13, PSF.sub.14, . . . to PSF.sub.1N, where N is the number of vertical object points to be generated, e.g. 1000 object points or even 2000 object points. The number M of object points generated horizontally might be e.g. 2000 to 4000. In detail, this could mean that at first the first column of the sub-hologram is filled and completed and then step by step the elements of the second column are added, i.e. the point spread functions PSF.sub.21, PSF.sub.22, PSF.sub.23, PSF.sub.24, . . . to PSF.sub.2N. Here, the step by step filling and optimizing is carried out from the left hand side to the right hand side of the sub-hologram. In this manner a two-dimensional matrix in M,N can be created.

    [0221] This optimization continuing along a predefinable direction on the SLM can also be carried out in a parallel way, e.g. if a multi-core integrated circuit is used. Thus, the starting points in the sub-hologram can be chosen in an arbitrary way or at least several starting points can be chosen. If locally optimized zones (zones that are filled during the optimization) of the sub-hologram hit each other, then the transition zones can be optimized. This can already be done if the mutual gap is e.g. five point spread functions PSF.sub.ij only. This means that a point spread function may be added to the rim of one zone and the small part of the rim of the neighboring zone can already be considered during the filling of the gap, which exists between two adjacent zones.

    [0222] Randomized local optimization using multiple randomized starting points may be used to avoid the appearance of artificial and disturbing low spatial frequency modulations. The optimization process can be made simple by only using a phase offset and intensity offset of single point spread functions PSF.sub.ij.

    [0223] For increasing the calculation speed, which might be required for real time applications, a look-up-table (LUT) can be used for image segments that can already be optimized in advance, as e.g. lines, surfaces, triangles and small separated objects.

    [0224] If gaze tracking data are already used, e.g. in order to use the 10 degrees high definition cone approach in e.g. direct view displays, and if the eye tracking data are used to obtain the diameter of an entrance pupil of an eye of an observer, the point spread functions of the eye picking up the object points in space can be monitored. This means that point spread function data can be used that are closer to the real situation. Thus, better optimization results can be obtained. A look-up-table can also be used to represent different point spread functions of the human eye, i.e. different diameters of the entrance pupil of the eye and different focal lengths f.sub.eye.
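
    A minimal stand-in for such a look-up-table can cache a diffraction-limited eye point spread function size keyed by the pupil diameter and the eye focal length. The sketch below assumes an Airy disc model with f.sub.eye=22 mm and λ=550 nm as defaults; these model values are assumptions for illustration, since the specification only mentions the pupil diameter of e.g. 2.9 mm at 100 cd/m.sup.2:

```python
import functools

@functools.lru_cache(maxsize=None)
def airy_radius_um(pupil_diameter_mm, f_eye_mm=22.0, wavelength_nm=550.0):
    """Radius of the Airy disc on the retina, 1.22 * lambda * f_eye / d,
    cached as a minimal stand-in for a look-up-table of eye point spread
    functions keyed by pupil diameter and eye focal length."""
    wavelength_um = wavelength_nm * 1e-3
    f_eye_um = f_eye_mm * 1e3
    pupil_um = pupil_diameter_mm * 1e3
    return 1.22 * wavelength_um * f_eye_um / pupil_um

airy_radius_um(2.9)  # ~5 um for the 2.9 mm pupil mentioned above
```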

    [0225] The optimization process described for a head-mounted display can, of course, also be used for other display devices, e.g. direct view display devices or projection display devices.

    [0226] Finally, it must be stated explicitly that the embodiments to the display device described according to the invention shall solely be understood to illustrate the claimed teaching, but that the claimed teaching is not limited to these embodiments. Combinations of embodiments are also possible.