AUGMENTED NEAR TO EYE DISPLAY
20220390746 · 2022-12-08
Assignee
Inventors
CPC classification
G02B26/0833 (PHYSICS)
G02B27/0093 (PHYSICS)
G02B6/0036 (PHYSICS)
G02B6/0016 (PHYSICS)
International classification
Abstract
A mobile augmented reality near-to-eye display having either a single chip programmed, configured, or adapted to permit a user-selective field of view and variable-resolution image projection, or a single image guide adapted for multiplexed full-color image transfer, to achieve full color, a 90-degree FOV, and retinal image resolution.
Claims
1. An image guide device, comprising: a digital micromirror device (DMD) with an illumination source optically coupled thereto; wherein a plurality of wavelengths from the illumination source each have a total field of view (FOV) and the DMD divides the total FOV into sub-FOVs; an image guide having an input grating and an output grating, the input grating optically coupled to the DMD such that the image guide receives the plurality of wavelengths with sub-FOVs from the DMD at the input grating; and a holographic waveguide optically coupled to the output grating of the image guide such that the holographic waveguide receives the plurality of wavelengths with sub-FOVs and multiplexes the plurality of wavelengths to the total FOV.
2. The device of claim 1, wherein the holographic waveguide comprises one or more Bragg reflectors.
3. The device of claim 2, wherein the Bragg reflectors are positioned at a 15-degree angle.
4. The device of claim 1, wherein the illumination source is an LD or LED array.
5. The device of claim 1, wherein the illumination source is a 2D light source array.
6. The device of claim 1, further comprising detection optics optically coupled to the holographic waveguide, wherein the detection optics capture the total FOV from the holographic waveguide.
7. An image guide device, comprising: a primary digital micromirror device (DMD) with an illumination source optically coupled thereto; wherein a plurality of images from the illumination source each have a total field of view (FOV) and the primary DMD divides the total FOV into sub-FOVs; an image guide having an input grating and an output grating, the input grating optically coupled to the primary DMD such that the image guide receives the plurality of images with sub-FOVs from the primary DMD at the input grating; and a secondary DMD optically coupled to the output grating of the image guide such that the secondary DMD receives the plurality of images with sub-FOVs and redirects the plurality of images with sub-FOVs over the total FOV.
8. The device of claim 7, wherein the illumination source is an LD or LED array.
9. The device of claim 7, wherein the illumination source is a 2D light source array.
10. The device of claim 7, further comprising detection optics optically coupled to the secondary DMD, wherein the detection optics capture the total FOV from the secondary DMD.
11. An augmented reality near to eye display system, comprising: an Angular Spatial Light Modulator (ASLM) emitting pulses of light, each pulse of light being spatially modulated with an image and angularly modulated with a direction; and a waveguide with an input coupler and an output coupler, wherein the input coupler is configured to couple the doubly modulated pulses of light from the ASLM into the waveguide, and the output coupler is configured to couple the pulses of light out of the waveguide.
12. The system of claim 11, wherein the ASLM comprises an illumination source array and a Spatial Light Modulator (SLM), and the pulse of light being angularly modulated is due to changing illumination sources.
13. The system of claim 11, wherein the ASLM comprises an illumination source and a Digital Micromirror Device (DMD), and the pulse of light being angularly modulated is due to diffraction-based beam steering, and each direction is a diffraction order.
14. The system of claim 11, wherein the ASLM comprises an illumination source array and a Digital Micromirror Device (DMD), and the pulse of light being angularly modulated is due to diffraction-based beam steering and changing illumination sources.
15. The system of claim 11, wherein the input coupler is an array of input couplers, and each input coupler is further configured to receive doubly modulated pulses of light of a unique direction.
16. The system of claim 11, further comprising a lens array configured to form the doubly modulated pulses of light into an intermediate image array before the waveguide.
17. The system of claim 11, wherein the ASLM further modulates the pulses of light by wavelength, and the output coupler is wavelength multiplexed.
18. The system of claim 11, wherein the ASLM further modulates the pulses of light by polarization, and the output coupler is polarization multiplexed.
19. The system of claim 11, wherein the output coupler comprises a volume hologram.
20. The system of claim 11, wherein the output coupler comprises a dichroic coating.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0039] The present invention will be more fully understood and appreciated by reading the following Detailed Description in conjunction with the accompanying drawings, in which:
[0040]
[0041]
[0042]
[0043]
[0044]
[0045]
[0046]
[0047]
[0048]
[0049]
[0050]
[0051]
[0052]
[0053]
[0054]
[0055]
[0056]
[0057]
[0058]
[0059]
[0060]
[0061]
DETAILED DESCRIPTION OF EMBODIMENTS
[0062] The present disclosure describes an augmented reality near to eye display.
Single-Chip, FOV Selective, and Variable Color Bit Depth Image Generation
[0063] A single-chip, multi-perspective image display having an effective pixel count of 1.13 Gpixels has been demonstrated. In the demonstration, a Digital Micromirror Device (DMD) 200 is synchronized to an arrayed pulsed illumination source (10×12=120 sources) so that different images (1024×768 pixels) are steered into 12 diffraction orders. This time-multiplexed image has an effective pixel count of (10×12)×(1024×768)×12=1.13 Gpixels, which enhances the native pixel count of the DMD 200 by a factor of 1440.
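As a quick sanity check (not part of the original disclosure), the effective pixel count arithmetic quoted above can be reproduced with a short sketch:

```python
# Back-of-envelope check of the demonstrated effective pixel count,
# using the numbers quoted in the paragraph above.
native_h, native_v = 1024, 768      # native DMD image size [pixels]
n_sources = 10 * 12                 # arrayed pulsed illumination sources (120)
n_orders = 12                       # diffraction orders used for steering

native_pixels = native_h * native_v
effective_pixels = n_sources * native_pixels * n_orders

print(f"native pixels:    {native_pixels / 1e6:.2f} M")       # 0.79 M
print(f"effective pixels: {effective_pixels / 1e9:.2f} G")    # 1.13 G
print(f"enhancement:      {effective_pixels // native_pixels}x")  # 1440x
```

The 1440× enhancement is simply the product of the 120 illumination sources and the 12 diffraction orders.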
[0066] First, the FOV is limited by the supported angular bandwidth of the image guide 202. The full FOV of 90 degrees is therefore divided into multiple sub-FOVs, e.g., 30 degrees each, by wavelength multiplexing as described in a later section.
[0067] Consider a DMD 200 with M (horizontal)×N (vertical) pixels. The sub-FOV of 30 degrees is further divided into Ndiff segments, where Ndiff is the number of diffraction orders, so the FOV of the i-th subdivision is FOVsub_i=(30/Ndiff) degrees. To satisfy the resolution requirement of 1 arcmin/pixel, M=60×FOVsub/Ndiff pixels are needed. As an example, with Ndiff=9 and FOVsub=30 degrees, the required native horizontal pixel count of the DMD 200 is M=200 pixels. Suppose the number of vertical pixels is N=360 (=1.8×M) and 5-fold illumination-angle multiplexing (Nilm=5) is employed; the 30(H)×30(V)-degree sub-FOV is then divided into 9×5 sub-image areas of 3.33(H)×6(V) degrees each, at 1 arcmin resolution. The 30(H)×30(V)-degree tiled image 204 is generated by the 200×360-pixel DMD, projected by a projection lens, and coupled to the image guide 202 via an input coupler.
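The sizing arithmetic in the example above can be verified with a brief sketch (the variable names are illustrative, not from the disclosure):

```python
ARCMIN_PER_DEG = 60   # 1 arcmin/pixel resolution target

fov_sub = 30          # sub-FOV from wavelength multiplexing [deg]
n_diff = 9            # diffraction orders (horizontal tiling)
n_ilm = 5             # illumination-angle multiplexing (vertical tiling)

# Horizontal DMD pixels needed for 1 arcmin/pixel over one diffraction segment
m = ARCMIN_PER_DEG * fov_sub // n_diff    # M = 60 * 30 / 9 = 200 pixels
n = int(1.8 * m)                          # N = 1.8 * M = 360 pixels

tile_h = fov_sub / n_diff                 # 3.33 deg per tile (horizontal)
tile_v = fov_sub / n_ilm                  # 6.0 deg per tile (vertical)
print(m, n, round(tile_h, 2), tile_v)     # 200 360 3.33 6.0
```

This reproduces the 200×360-pixel DMD and the 9×5 grid of 3.33(H)×6(V)-degree tiles stated in the paragraph.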
[0068] In some embodiments, the multiple output diffraction orders are replaced with multiple output directions due to multiple input source directions.
[0069] Referring to
[0070] Advantages of the tiled and time-multiplexed approach are 1) a substantial reduction in the number of physical pixels, no longer requiring a micro display with a native 10-megapixel count, 2) a reduction in micro display size (1.08×1.94 mm with a 5.4 um DMD pixel, as compared to 16×5.4 mm for 10 megapixels with a 3 um LCOS pixel), and 3) decreased power consumption for display and illumination through content-specific sub-FOV selection, as compared to the full 10-megapixel approach, described in a later section.
[0071] For a mobile AR-NED, the pixel on/off ratio is substantially smaller than for a VR headset. In backlit micro displays such as the LCD and LCOS used in VR headsets, all pixels must be illuminated even when part of the display area is turned off (filtering is optically inefficient). In contrast, the segmented approach allows images to be displayed within part of the FOV without illuminating the parts of the FOV carrying no information, a typical scenario for mobile AR-NED.
[0072] As the example figure shows, image/text 300 is displayed in conjunction with see-through image 302; therefore, it is not likely that an image would be displayed over the entire 90-degree FOV, because such a full-FOV image would obscure and congest the see-through image.
[0073] The proposed approach steers the image to the location where it is displayed; therefore, the power consumption for illumination is reduced compared to an image formed by a high-pixel-count micro display. The power advantage arises because no additional light is lost in unused areas of the FOV, and because no display actuation is required to steer light into those unused areas. In addition, power consumption is further decreased by reducing the bit depth of images outside the region of interest, using eye tracking that detects gaze as described in a later section (increased power efficiency and optical efficiency from foveated rendering).
Single Layer and Multiplexed Full Color Wide FOV Image Transfer
[0074] The index of refraction of the image guide device 400 limits the angular bandwidth, and therefore the FOV, that the guide can transfer.
[0075] To overcome this material-imposed challenge, a time- and wavelength-multiplexed full-color image transfer medium that effectively generates RGB, a 90-degree FOV, and retinal resolution may be used.
Time and Wavelength Multiplexing
[0076] For the purpose of illustrating the principle, only green light sources λG1, λG2, and λG3 (and neighboring green-wavelength sources) are considered. The key is to divide the total FOV of 90 degrees into sub-FOVs, SubFOVi, encode them in the wavelength domain, and decode them with the reflection volume hologram 406.
[0077] The time/wavelength multiplexing is also applied to the other wavelengths λRi and λBi. For example, sources (λR1, λG1, λB1) generate an FOV-limited image I1, sources (λR2, λG2, λB2) generate an FOV-limited image I2, and so on. As long as the separation between neighboring wavelengths, λRGB,i+1−λRGB,i, is not large on a CIE-XYZ color map, color reproducibility is assured, subject to angular and wavelength selectivity. As a holographic medium, RGB-sensitive materials such as Bayfol® HX and others are commercially available and were used in the feasibility study.
[0078] Wavelength multiplexing can be replaced with or complemented by other multiplexing techniques for encoding/decoding before/after the waveguide such as polarization multiplexing.
Time and Angular Multiplexing
[0079] Alternatively, a 2nd DMD 504 in the vicinity of the output coupler replaces the multiplexed volume hologram 404. The 1st DMD generates time- and color-multiplexed images with an FOV smaller than the full-color FOV of the image guide 202. The image guide 202 transfers the FOV-limited images by TIR, and they are coupled to air by an output coupler. The 2nd DMD 504 is synchronized with the 1st and actively redirects the light over the total FOV. This approach eliminates the multi-wavelength sources and the Bragg reflector; however, an additional optical system close to the eye is needed and must be designed to preserve a see-through optical path.
Estimation of Power Consumption
[0080] According to a published document, “DLP Technology for Near Eye Display: Application Report” (http://www.ti.com/lit/an/dlpa051a/dlpa051a.pdf), page 11 states, “The DMD and controller combine to draw a typical power consumption of between 150 mW to 300 mW, depending on the array size and resolution.” Also in “DLP2010 0.2 WVGA DMD” (http://www.ti.com/lit/ds/symlink/dlp2010.pdf), page 10 states a typical supply power dissipation of 90.8 mW. (DMD only). According to a source, without employing time multiplexing, DLP is competitive, in terms of system power consumption, with other display solutions of the same resolution. Some LCOS competitors may have a chipset power consumption that is slightly lower, but that gap is closed, or even flipped, when LED illumination power is taken into the equation thanks to the optical efficiency advantage of DLP technology.
[0081] In a first-order analysis, the power consumption of a DLP device scales linearly with array size for a given mirror refresh rate because, in DLP systems, most of the power is consumed in 1) storing address information in the SRAM underneath the micromirror array, and 2) applying voltage to initiate and terminate micromirror motion. For other pixel-addressed micro displays such as LCOS, a similar scaling of power consumption with array size is expected.
[0082] Since the required illumination power per pixel is a human factor, it is reasonable to assume that it is independent of the type of micro display device. Under this assumption, a rough order-of-magnitude power consumption of a 10M-pixel-equivalent ASLM is estimated as tabulated in Table 2, based on the documented power consumption tabulated in Table 1.
TABLE 1. Baseline power consumption data

                        1M pixel DMD    1M pixel LCOS
  1. Device [mW]              90              50
  2. Controller [mW]         130              70
  3. Illumination [mW]       100             200
  Total [mW]                 320             320
TABLE 2. Estimated power consumption of a 10M pixel device

                        10M pixel ASLM  10M pixel LCOS
  1. Device [mW]             162             500
  2. Controller [mW]         234             700
  3. Illumination [mW]       600            2000
  Total [mW]                 996            3200
Assumptions: effective "on" pixel count of 30% for both ASLM and LCOS; for the ASLM, bit depth is halved over 80% of the FOV (foveation).
[0083] Compared to a hypothetical 10-megapixel LCOS, the ASLM consumes about one third of the power. The most significant reduction occurs in illumination: an LCOS device requires flood illumination of the entire 10-megapixel array, including off pixels, where light is simply wasted. In contrast, the ASLM with image steering is more efficient because only the sub-image areas containing on pixels, i.e., 30% of the full 10 megapixels, need to be illuminated. There are power consumption benefits in the device and controller as well, since areas without information are simply skipped as the ASLM scans over the entire FOV.
[0084] A second significant reduction in ASLM device and controller power consumption comes from "color foveation". In the ASLM, the total FOV is divided into sub-FOVs. If eye/gaze tracking is available, the color bit depth (refresh rate) of the micromirrors can be foveated: pixels around the gaze direction are displayed at full color bit depth, while in the periphery of the FOV the color bit depth is reduced (e.g., halved). For example, if 20% of pixels are around the gaze, 80% are in the peripheral FOV, and the peripheral bit depth is halved, color foveation reduces the number of mirror activations by 40%, which reduces the power consumption of the DLP device and controller at the same rate. This advantage arises because the bit depth of the DLP device is tied to its actuation speed, and the actuation speed is tied to its driving power.
[0085] The preliminary analysis tabulated in Table 2 indicates that the ASLM is very competitive with alternatives in power consumption, thanks to the substantially smaller array area, which increases illumination efficiency, as well as the effective allocation of color bit depth over the FOV. The peripheral optics also shrink with the reduced array size. We incorporate power consumption analysis and measurement as part of the research.
Frame Rate
[0086] A discussion similar to that of power consumption holds for the frame rate. The refresh rate of the DMD 200 is rated at 23 kHz. The ASLM requires an all-off state, so the frame rate is half of 23 kHz, i.e., 11.5 kHz; compared to a 60 Hz LCOS device, the DLP has a roughly 200× higher frame rate. When 10 bits are allocated for color generation (a factor of 0.1/3 for RGB), and taking into account the frame-rate enhancement of 1/{(effective on-area)×(color foveation)}=5.5, the achievable full-color frame rate is estimated as 11.5 kHz×(0.1/3)×5.5=2.1 kHz, which allows roughly 40-fold time multiplexing, matching to first order the proposed number of time-multiplexed images.
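The frame-rate budget above chains several factors; a short sketch of the same arithmetic (names are illustrative, and the result is an order-of-magnitude estimate):

```python
# Frame-rate budget for the ASLM, following the factors quoted above.
dmd_refresh_hz = 23_000
binary_rate = dmd_refresh_hz / 2      # all-off state halves it: 11.5 kHz

color_factor = 0.1 / 3                # 10-bit color over RGB -> 1/30
enhancement = 5.5                     # 1 / (0.3 on-area x 0.6 foveation)

full_color_rate = binary_rate * color_factor * enhancement   # ~2.1 kHz
n_multiplex = full_color_rate / 60    # sub-images per 60 Hz frame
print(round(full_color_rate), "Hz,", round(n_multiplex), "sub-images")
```

The chain yields roughly 2.1 kHz, i.e., on the order of the ~40 time-multiplexed sub-images per 60 Hz frame quoted in the paragraph.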
[0087] To further evolve aspects of the invention, additional embodiments are considered below.
[0088] It is a goal of an embodiment of the present invention to demonstrate the feasibility of monochromatic time-, wavelength-, and angular-multiplexed 1-D image transfer by the ASLM via FOV-limited free-space optics, by designing and developing a test setup.
[0089] The system design should allocate bandwidth based on an established mathematical model of the optical architecture. The ASLM display can be implemented with 1-D array optics and various types of illumination sources. The system design also includes an in-house holographic recording setup to record a volume hologram with 2 multiplexed Bragg reflectors. The system design includes a DMD, optics, and a light source assembled to demonstrate the ASLM single-layer image transfer concept. The FOV(H) extends beyond the FOV(H) of the image guide by means of the volume hologram and the 2nd DMD.
[0090] It is a goal of an embodiment of the present invention to demonstrate the feasibility of monochromatic time-, illumination-, wavelength-, and angular-multiplexed 2-D image transfer by the ASLM 600 via an FOV-limited image guide 602, by improving the test setup.
[0091] It is a goal of an embodiment of the present invention to demonstrate the feasibility of RGB time-, illumination-, wavelength-, and angular-multiplexed 2-D image transfer by the ASLM 600 via an FOV-limited image guide 202, by improving the test setup.
[0092] To illustrate the earlier state of the art, the red, green, blue (RGB) bandwidth limitation of waveguides is depicted in the drawings.
[0101] In an embodiment, source multiplexing (wavelength, polarization, etc.) can be used to increase the output FOV by using an output coupler with equivalent multiplexing (e.g., a wavelength- or polarization-multiplexed output coupler, such as a volume hologram that is wavelength or polarization dependent).
[0102] While various embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings is/are used. Those skilled in the art will recognize or be able to ascertain using no more than routine experimentation, many equivalents to the specific embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, embodiments may be practiced otherwise than as specifically described and claimed. Embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.
[0103] The above-described embodiments of the described subject matter can be implemented in any of numerous ways. For example, some embodiments may be implemented using hardware, software or a combination thereof. When any aspect of an embodiment is implemented at least in part in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single device or computer or distributed among multiple devices/computers.
[0104] Each of the following references is incorporated herein by reference:
[0105] [1] B. Hellman and Y. Takashima, "Angular and spatial light modulation by single digital micromirror device for multi-image output and nearly-doubled étendue," Optics Express 27, 21477-21496 (2019).
[0106] [2] Private preview of 1G-pix display at Industrial Affiliates Workshop meeting at the University of Arizona, 1630 E University Blvd, Tucson, Ariz., USA, on Oct. 23, 2019.
[0107] [3] B. Smith, B. Hellman, A. Gin, A. Espinoza, and Y. Takashima, "Single chip lidar with discrete beam steering by digital micromirror device," Optics Express 25(13), 14732-14745 (2017). https://www.osapublishing.org/oe/abstract.cfm?&uri=oe-25-13-14732
[0108] [4] S. S. Orlov, W. Phillips, E. Bjornson, Y. Takashima, P. Sundaram, L. Hesselink, R. Okas, D. Kwan, and R. Snyder, "High-transfer-rate high-capacity holographic disk data-storage system," Applied Optics 43, 4902 (2004).
[0109] [5] T. Nakamura and Y. Takashima, "Design of discretely depth-varying holographic grating for image guide based see-through and near-to-eye displays," Optics Express 26, 26520-26533 (2018). https://www.osapublishing.org/oe/abstract.cfm?uri=oe-26-20-26520
[0110] [6] B. Hellman, T. Lee, J.-H. Park, and Y. Takashima, "Gigapixel and 1440-perspective extended-angle display by megapixel MEMS-SLM," Optics Letters 45(18), 5016-5019 (2020).