Systems and Methods for Convergent Angular Slice True-3D Display
20190007677 · 2019-01-03
CPC classification
G02B30/35
PHYSICS
Abstract
Systems and methods for convergent 3D displays. In one embodiment, the 3D display has a display screen that includes a convergent reflector and a horizontally narrow angle diffuser. The convergent reflector focuses 2D images projected on the diffuser from an array of 2D image projectors to form viewpoints in an eyebox where one viewpoint corresponds to one projector. At a particular viewpoint, the viewer's eye sees a full-screen field of view from a corresponding projector in the array. The narrow angle diffuser diffuses incident rays projected from the 2D image projectors into narrow angular slices so that the views in the eyebox are continuously blended together. The system and methods provide advantages in that only a few projectors are required in the array to provide the viewer with a full-screen field of view and a sufficiently large eyebox for comfortable viewing.
Claims
1. A system comprising: one or more 2D image projectors; and a display screen optically coupled to said 2D image projectors; wherein the 2D image projectors are configured to project individual 2D images substantially in focus on the display screen; wherein the display screen is configured to optically converge each projected 2D image from the corresponding 2D image projector to a corresponding viewpoint, wherein the ensemble of said viewpoints forms an eyebox; wherein each pixel from each of the 2D images is projected from the display screen into a small angular slice to enable a viewer within the eyebox observing said display screen to see a different image with each eye, wherein the image seen by each eye varies as the viewer moves his or her head with respect to the display screen.
2. The system of claim 1, wherein the 2D image projectors consist of one or more lasers and one or more scanning micro-mirrors optically coupled to the lasers, wherein the 2D image projectors are configured to lenslessly project the 2D images on the display screen.
3. The system of claim 1, wherein the 2D image projectors are driven by laser light sources such that the 2D image is substantially in focus at all locations.
4. The system of claim 1, wherein the system is configured to generate each of the 2D images from a perspective of the viewpoints in the eyebox, wherein each of the 2D images is provided to the corresponding projector.
5. The system of claim 1, wherein the system is configured to anti-alias the 2D images according to an angular slice horizontal projection angle between the projectors.
6. The system of claim 1, wherein one or more of the 2D images is obtained by rendering 3D data from one or more 3D cameras.
7. The system of claim 1, wherein the 2D image projectors are formed into a plurality of separate groups such that a plurality of the eyeboxes is formed whereby a plurality of viewers may each observe from the plurality of eyeboxes.
8. The system of claim 1, wherein a plurality of the 2D image projectors is configured such that the eyebox formed is large enough for a plurality of viewers.
9. The system of claim 1, wherein a shape of the display screen is selected from the group consisting of cylinders, spheres, parabolas, ellipsoids and aspherical shapes.
10. The system of claim 1, wherein the system is configured to render the 2D images from a 3D dataset.
11. The system of claim 1, wherein the system is configured to obtain one or more of the 2D images from still or video cameras.
12. The system of claim 10, wherein the system is configured to convert video streams into the 3D dataset and then render the 2D images.
13. The system of claim 11, wherein one or more of the 2D images is obtained by shifting or interpolation from others of the 2D images obtained from said still or video cameras.
14. The system of claim 11, wherein the system is configured to substantially match proportionally a depth of field of said still or video cameras to a depth of field for the system.
15. A system comprising: one or more 2D image projectors; a display screen optically coupled to said 2D image projectors; and a converging optical element optically coupled to said 2D image projectors and said display screen; wherein the 2D image projectors are configured to project individual 2D images substantially in focus on the display screen; wherein the converging optical element is configured to optically converge each projected 2D image from the corresponding 2D image projector to a corresponding viewpoint, wherein the ensemble of said viewpoints forms an eyebox; wherein each pixel from each of the 2D images is projected from the display screen into a small angular slice to enable a viewer within the eyebox observing said display screen to see a different image with each eye, wherein the image seen by each eye varies as the viewer moves his or her head with respect to the display screen.
16. The system of claim 15, wherein the converging optical element is between the 2D image projectors and the display screen.
17. The system of claim 15, wherein the converging optical element is between the display screen and one or more viewers.
18. The system of claim 15, wherein a shape of the converging optical element is selected from the group consisting of cylinders, spheres, parabolas, ellipsoids and aspherical shapes.
19. A method comprising: generating multiple individual 2D images; and projecting the individual 2D images substantially in focus onto a display screen; wherein the display screen further projects the 2D images so as to optically converge each projected 2D image to a corresponding viewpoint, wherein the ensemble of said viewpoints forms an eyebox; wherein each pixel from each of the 2D images is further projected from the display screen into a small angular slice within said viewpoint to enable a viewer within the eyebox observing said display screen to see a different image with each eye, wherein the image seen by each eye varies as the viewer moves his or her head with respect to the display screen.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] Other objects and advantages of the invention may become apparent upon reading the following detailed description and upon reference to the accompanying drawings.
[0031] While the invention is subject to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and the accompanying detailed description. It should be understood, however, that the drawings and detailed description are not intended to limit the invention to the particular embodiment which is described. This disclosure is instead intended to cover all modifications, equivalents and alternatives falling within the scope of the present invention as defined by the appended claims. Further, the drawings may not be to scale, and may exaggerate one or more components in order to facilitate an understanding of the various features described herein.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
First Embodiment (FIGS. 1-6)
[0032] The present invention and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as not to unnecessarily obscure the present invention in detail.
[0033] One embodiment of the 3D display is illustrated in
[0034] In one embodiment, the display 101 provides horizontal parallax only (HPO) 3D imagery to the viewer 10. For HPO, the diffuser 45 reflects and diffuses incident light over a wide range vertically (say 20 degrees or more; the vertical diffusion angle is chosen so that light of adequate and similar intensity reaches the viewer from the top and bottom of the diffuser), but only over a very small angle horizontally (say one degree or so). An example of this type of asymmetric reflective diffuser is a holographically produced Light Shaping Diffuser from Luminit LLC (1850 West 205th Street, Torrance, Calif. 90501, USA). Luminit's diffusers are holographically etched high-efficiency diffusers, referred to as holographic optical elements (HOEs). Luminit is able to apply a reflective coating (a very thin, conformable layer of, for example, aluminum or silver) to the etched surfaces to form reflective diffusers. Other types of diffusers (not necessarily HOEs) with similar horizontal and vertical characteristics (e.g., arrays of micro-lenses) are usable, along with other possible reflective coatings (e.g., silver/gold alloy). Similarly, thin-film HOE diffusers over the top of a reflector can perform the same function.
[0035] In one embodiment, referring now to
[0036] Referring again to
[0037] Maximal (full screen) rays from each projector in the array 120 define a boundary for the eyebox 70.
[0038] The extent of the eyebox 70 in
[0039] Spatial blurring is the apparent defocusing of the 3D imagery as a function of visual depth within a given scene. Objects that visually appear at the diffuser 45 are always in focus, and objects that appear further away in 3D space than the diffuser 45 have increasing apparent defocus. An acceptable range of spatial blurring around the diffuser 45 for the typical viewer is known as depth of field. A depth of field 94 is illustrated in
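The depth-of-field geometry described above can be sketched numerically. The following is a minimal illustration, not taken from the patent: it assumes the simple small-angle model in which a point rendered at some depth offset from the diffuser 45 is formed by rays spread over the diffuser's horizontal angular slice, so its blur spot grows linearly with that offset. All function and parameter names are hypothetical.

```python
import math

def blur_width(depth_offset_m: float, slice_deg: float) -> float:
    """Approximate blur spot width (m) of a point rendered at
    depth_offset_m from the diffuser, given a per-pixel horizontal
    angular slice of width slice_deg (small-angle geometry)."""
    return abs(depth_offset_m) * math.tan(math.radians(slice_deg))

def depth_of_field_half(max_blur_m: float, slice_deg: float) -> float:
    """Half-depth (m) on either side of the diffuser within which the
    apparent blur stays below an acceptable limit max_blur_m."""
    return max_blur_m / math.tan(math.radians(slice_deg))
```

Under this model, a one-degree horizontal slice and a 5 mm acceptable blur would give a usable depth range of roughly 0.29 m on either side of the diffuser, consistent with the qualitative description that defocus grows with apparent distance from the diffuser.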
[0040] The horizontal angular displacement 20 and the diffuser 45 with limited horizontal angular diffusion are elements that work jointly to present 3D imagery to the viewer 10. In
[0041] For example, in
[0042] The drawings in
Operation (FIG. 7)
[0043] A block diagram in
[0044] The algorithm 600 uses a rendering step 640 to generate the appropriate 2D images required to drive each projector in the array 120. The rendering step 640 uses parameters from a calibration step 610 to configure and align the 2D images such that as the viewer 10 moves his head within the eyebox 70, he sees blended 3D imagery without distortions from inter-projector misalignments or intra-projector mismatches. A user (perhaps the viewer 10) is able to control the rendering step 640 through a 3D user control step 630. This step 630 allows the user to manually or automatically change parameters such as the apparent parallax among the 2D images, the scale of the 3D data, the virtual depth of field and other rendering variables.
[0045] The rendering step 640 uses a 2D image projection specific to each projector as defined by parameters from the calibration step 610. For a particular projector, the 2D image projection has a viewpoint within the eyebox 70, such as the viewpoint 11 in
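One way the rendering step 640 might assign a viewpoint to each projector is to space one virtual camera per projector evenly across the horizontal extent of the eyebox 70 (HPO, so only the horizontal coordinate varies). The following sketch is illustrative only; the function name, the coordinate convention, and the even-spacing assumption are not specified in the description above.

```python
def projector_viewpoints(num_projectors: int,
                         eyebox_width_m: float,
                         eyebox_center: tuple) -> list:
    """Return one virtual-camera position (x, y, z) per projector,
    evenly spaced across the horizontal extent of the eyebox.
    HPO: only the x coordinate varies between viewpoints."""
    cx, cy, cz = eyebox_center
    if num_projectors == 1:
        return [(cx, cy, cz)]
    step = eyebox_width_m / (num_projectors - 1)
    left = cx - eyebox_width_m / 2.0
    return [(left + i * step, cy, cz) for i in range(num_projectors)]
```

Each returned position would serve as the center of projection for rendering the 2D image fed to the corresponding projector, with the calibration step 610 supplying any per-projector corrections.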
Stacked Projector ArrayFIG. 8 and FIG. 9
[0046] An additional embodiment is shown in
[0047] Offset-In-Depth Viewer
[0048] An additional embodiment is shown in
[0049] Overhead Projector Array
[0050] An additional embodiment is shown in
[0051] Referring now to
[0052] Spherical Reflector
[0053] An additional embodiment is a 3D display 105 shown in
[0054] The advantage of this type of convergent angular slice true-3D display is that many fewer projectors are required to produce a full horizontal parallax 3D image (view changes continuously with horizontal motion) than with a flat-screen angular slice display (ASD). Note that the projectors can be located to the side of the viewer or below the viewer just as well as above the viewer.
[0055] Diffusion Before Convergence
[0056] An additional embodiment is shown in
[0057] The reflected chief rays (for example rays 191 and 192) from each projector converge to form viewpoints within an eyebox 72. Given a radius of curvature R for the mirror 50, the horizontal extent of the eyebox 72 is defined in a manner similar to the ray geometry in
[0058] Although a depth of field for the display 201 is centered at the diffusion screen 40, the apparent location of the depth of field to the viewer 10 follows convergent mirror geometry for object and image distances. For example in one embodiment, if the diffusion screen 40 is a distance 0.5 R from the mirror 50, then the apparent center for the depth of field approaches infinity.
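The statement above follows from the standard spherical-mirror relation 1/d_o + 1/d_i = 2/R: when the diffusion screen 40 sits at the focal distance d_o = R/2, the image distance diverges. A small sketch of this relation (names are illustrative):

```python
import math

def apparent_image_distance(object_distance: float, radius: float) -> float:
    """Spherical-mirror relation 1/d_o + 1/d_i = 2/R, solved for d_i.
    Returns math.inf when the object sits at the focal point
    (object_distance = radius / 2), matching the text above."""
    denom = 2.0 / radius - 1.0 / object_distance
    return math.inf if abs(denom) < 1e-12 else 1.0 / denom
```

For example, with the screen at 0.5 R the apparent center of the depth of field goes to infinity, and moving the screen to 0.75 R pulls it back to a finite distance of 1.5 R.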
[0059] Diffusion After Convergence
[0060] An additional embodiment is shown in
[0061] Full Parallax 3D Display
[0062] An additional embodiment is a full parallax 3D display. Full parallax means that the viewer sees a different view not only with horizontal head movements (as in HPO) but also with vertical head movements. One can think of HPO as the ability for the viewer to look around objects horizontally, and full parallax as the ability to look around objects both horizontally and vertically. Full parallax is achieved with a diffuser that has both a narrow horizontal angular diffusion and a narrow vertical angular diffusion. (Recall that HPO requires only narrow diffusion in the horizontal while the vertical has broad angular diffusion.) As noted previously, the angular diffusion is tightly coupled with the angular displacement of the projectors in the array. Again, recall that HPO requires proportionally matching the horizontal angular displacement 20 (
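The proportional matching described above amounts to making the diffuser's angular slice roughly equal to the angular displacement between adjacent projectors as seen from a point on the screen, so that adjacent views blend without gaps or excessive overlap. A minimal sketch of that geometry, with illustrative names:

```python
import math

def required_diffusion_deg(projector_spacing_m: float,
                           projector_distance_m: float) -> float:
    """Angular displacement (deg) between adjacent projectors in the
    array, as seen from a point on the screen. The diffuser's angular
    slice (per axis, for full parallax) should roughly match this so
    that adjacent views blend continuously."""
    return math.degrees(math.atan2(projector_spacing_m,
                                   projector_distance_m))
```

For instance, projectors spaced 35 mm apart at a 2 m throw subtend about one degree at the screen, consistent with the one-degree horizontal diffusion angle suggested for the HPO diffuser earlier in the description.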
Advantages
[0063] From the descriptions above, a number of advantages of some embodiments of the angular convergent true-3D display become evident, without limitation:
[0064] (a) No special glasses, head tracking devices or other instruments are required for a viewer to see 3D imagery, thus avoiding the additional cost, complexity, and annoyances for the viewer associated with such devices.
[0065] (b) No moving parts such as spinning disks, rasterizing mirrors or shifting spatial multiplexers are required, which thereby increases the mechanical reliability and structural integrity.
[0066] (c) Since image projectors, by construction, project 2D images such that rays diverge from the projector lens, the use of a convergent reflector has the advantage of focusing these rays into the eyebox. This property simplifies rendering the 2D images to form the 3D imagery, since standard projection geometries, where horizontal and vertical projection foci share approximately the same location, can be used to form the 2D images without the need for non-standard projections such as anamorphic projections, where horizontal and vertical projection foci do not share the same location. Thus, 2D imagery from digital (still or video) cameras with standard lenses can be used to drive the projectors directly, without additional processing to account for the divergent projector rays.
[0067] (d) The convergence at the eyebox of the projected 2D images permits the use of a single projector in the array to achieve a full-screen field of view for a viewer in the eyebox. Additional projectors simply increase the size of the eyebox and the parallax in the displayed 3D imagery for the viewer. Thus, only a few projectors (nominally two or more) are required for viewing full-screen 3D imagery, which reduces system cost.
[0068] (e) The separation of the diffuser and the convergent mirror permits the adjustment of the apparent center for the depth of field (relative to the viewer) in accordance with convergent mirror geometry for object and image distances. This adjustment has the advantage to display 3D imagery with an apparent depth of field required by a particular application.
[0069] Accordingly, the reader will see that the 3D display of the various embodiments can be used by viewers to see 3D imagery without special glasses, head tracking or other constraints. The viewer sees a different view with each eye and can move his or her head to see different views and look around objects in the 3D imagery.
[0070] Although the description above contains many specificities, these should not be construed as limiting the scope of the embodiments but as merely providing illustrations of some of several embodiments. For example, the convergent reflectors can have different shapes such as cylindrical, spherical, toroidal, etc.; the display screen can consist of a single convergent reflective diffuser, of a transmitting diffuser followed by a convergent mirror, of a convergent mirror followed by a transmitting diffuser, etc.; the 2D images driving the image projectors can be derived from renderings of 3D data, video streams from one or more cameras, video images converted to 3D data and then rendered, etc.
[0071] The benefits and advantages which may be provided by the present invention have been described above with regard to specific embodiments. These benefits and advantages, and any elements or limitations that may cause them to occur or to become more pronounced, are not to be construed as critical, required, or essential features of any or all of the claims. As used herein, the terms "comprises," "comprising," or any other variations thereof are intended to be interpreted as non-exclusively including the elements or limitations which follow those terms. Accordingly, a system, method, or other embodiment that comprises a set of elements is not limited to only those elements, and may include other elements not expressly listed or inherent to the claimed embodiment.
[0072] While the present invention has been described with reference to particular embodiments, it should be understood that the embodiments are illustrative and that the scope of the invention is not limited to these embodiments. Many variations, modifications, additions and improvements to the embodiments described above are possible. It is contemplated that these variations, modifications, additions and improvements fall within the scope of the invention as detailed within the following claims.