See-Through Display Enabling the Correction of Visual Deficits

20170322422 · 2017-11-09

    Abstract

    A See-Through Display System with the ability to correct visual deficits such as presbyopia, color blindness, and poor night vision is disclosed. This invention enables the correction of visual deficits using camera(s), microdisplay(s), controlling circuit(s) with digital grayscale control, and see-through optics such as free-form lens/mirror, half-mirror, diffractive, and/or holographic optical element(s).

    Claims

    1. A display system comprising: a microdisplay device having a grayscale control system using pulse-width modulation; a set of solid-state light sources having at least two colors; a control system to drive said microdisplay and said light sources; a set of optical elements including at least one of a free-form mirror, a half-mirror, a Fresnel mirror, an HOE, and a DOE; a set of optics enabling see-through capability, whereby a user can simultaneously see both the visual field in front of the display system and the image projected by said microdisplay; image-capturing sensor(s); and video processing unit(s) with algorithms designed to modulate still and moving video images, capable of, but not limited to, the treatment of genetic, physiological, and psychological conditions involving the visual field, including, but not limited to, presbyopia, myopia, hyperopia, cataract, retinitis pigmentosa, and color blindness; wherein said algorithm corrects the weakness of the vision of a viewer by enhancing the video images from said microdisplay with at least one of the following capabilities: a brightness increase of selected color(s), a change of the focal length of the objects captured by said image sensor(s), a change of the size of the objects, a change of the brightness of the objects, and a change in contrast, defined as the brightness gap between two visual areas.

    2. The display system of claim 1 wherein: Said system increases visual acuity, defined as 1/(gap size [arc min]), by more than 1%.

    3. The display system of claim 1 wherein: Said system improves color deficiency, measured by at least one of the Ishihara Color Blindness Test score and the ability to differentiate objects with wavelengths of 564-580 nm, 534-545 nm, and 420-440 nm, by more than 1%.

    4. The algorithm of claim 1 wherein: Said algorithm modulates the visual field by increasing the horizontal and vertical visual-field arc length of an object in view.

    5. The algorithm of claim 1 wherein: Said algorithm modulates the visual field by increasing the contrast, defined as the difference in brightness of discrete areas in the visual field, K=(Lh−Ll)/Lh, whereby 0≤K≤1; K=0 means there is no contrast while Kmax=1; Lh is the brightness at a discrete area with high luminance, and Ll is the brightness at a discrete area with low luminance.

    6. The algorithm of claim 1 wherein: Said algorithm modulates the visual field by changing the focal length of the camera.

    7. The display system of claim 1 wherein: Said system modulates the visual field by inverting the brightness of light and dark areas of the visual field.

    8. The display system of claim 1 wherein: Said microdisplay is one of a group of Spatial Light Modulators (SLMs), including, but not limited to, LCD, LCOS, micromirror, and MEMS displays, and OLED.

    9. The image-capture and display system of claim 8 wherein: Said system includes a modulator system having a video data processing circuit wherein image data, including at least one of still and moving images, flows from (1) one of said image sensor(s) and an external source to (2) said video processing unit to (3) said control system to (4) said microdisplay, which converts the video data into an image that is projected to (5) said see-through optics and then to (6) the user's visual field, and Said control system is capable of flowing data unilaterally and bilaterally.

    10. The image-capture and display system of claim 9 wherein: Said modulator increases color differentiation in an image captured by an image sensor or external video source by modulating the color content of the image data.

    11. The image-capture and display system of claim 9 wherein: Said visual display system increases color differentiation by selectively increasing the brightness of at least one color within said light source.

    12. The image-capture and display system of claim 9 wherein: Said visual display system increases color differentiation by selectively increasing the time composition of at least one color within at least one of said light source and microdisplay.

    13. The image-capture and display system of claim 9 wherein: Said image sensor is able to sense infrared light and said processor is able to modulate the video image data in a manner that the user can differentiate between objects in the absence of light in the visible wavelengths.

    14. The image-capture and display system of claim 9 wherein: Said image sensor can modulate the video image data by increasing the brightness of at least one of the entire visual field and specific objects within the visual field.

    15. The image-capture and display system of claim 9 wherein: Said image modulation system identifies objects in the captured image at varying focal distances and modulates the object image area to appear at a different focal distance.

    16. The image-capture and display system of claim 9 wherein: Said image modulation system recognizes specific objects, including, but not restricted to, computer monitors and reading material, and modulates the image in those object areas to a different focal distance.

    17. The image-capture and display system of claim 9 wherein: Said video processing unit consists of multiple components which communicate via at least one of wired and wireless means.

    18. The image-capture and display system of claim 17 wherein: At least one of the components of the video processing unit can communicate with an external unit which is separate from the system and communicates data through wired or wireless means.

    19. The display system of claim 17 wherein: The display has an array of pixels with a memory(s) in each pixel; the memories are written line by line in the array by the control system, and the sequence of writing the lines is non-sequential.

    20. The display system of claim 17 wherein: The memories in the pixel array of the display are one of SRAM, DRAM, flip-flop, and cascode circuits.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0050] FIG. 1 shows an example of this invention. (116) is a transparent plate functioning as a waveguide having a hologram layer to enable a see-through display. (111) is a camera lens and (112) is a CMOS image sensor module. (115) is a mirror to reflect projected light into the waveguide (116). (118) is a light source, (114) is a projection lens, (113) is the controller electronics, and (117) is an eyeglass frame containing a battery.

    [0051] FIG. 2 illustrates how the object (201) is projected onto the retina (205). Light (208) is projected from the object (201) and is directed to the cornea (202) and lens (204). The ciliary muscle (203) adjusts the lens (204) to focus the light beam (207) onto the retina (205) and fovea (206).

    [0052] FIG. 3 shows how a viewer with normal vision sees the images. The large characters (301, 302, 303) can be seen, while the small character (304) becomes difficult to read.

    [0053] FIG. 4 shows how a viewer with presbyopia sees the images. Even the large character (401) is not focused on the retina.

    [0054] FIG. 5 shows the usage of a concave lens (501) to correct for myopia (inability to focus on far objects). Both the far object (502) and the near object (503) can be focused.

    [0055] FIG. 6 shows the usage of a convex lens (601) to correct for near distances in presbyopia. The near object (603) can be focused, but the far object (602) cannot.

    [0056] FIG. 7 shows the more sophisticated optics introduced by the bifocal lens, whereby the upper half of the lens is constructed to assist viewers with far-distance view (702), while the lower half of the lens is constructed to assist viewers with near-distance view (703). This enables a user with presbyopia to view both near and far with a single pair of glasses. FIG. 7 also shows an example of a progressive lens (701) simultaneously correcting for near (703) and far (702) distances, albeit with near- and far-distance focus restricted to the lower and upper visual fields, respectively.

    [0057] FIG. 8 shows that a bifocal lens enables a user to read a scorecard, but prohibits the viewer from focusing on a golf ball (801) when taking a shot.

    [0058] In FIG. 9, the larger rectangular frame (900) represents a hypothetical visual field. In said field, four objects are in view, two near and two far. Conventional bifocal lenses restrict the focal distances of objects to the upper and lower fields, and therefore objects 1 (901) and 2 (902) can be seen, but objects 3 (903) and 4 (904) are out of focus.

    [0059] In FIG. 10, the larger rectangular frame (1000) represents a displayed field wherein all the images are captured by the camera (111 and 112) attached to the wearable display in FIG. 1, and all the captured images are individually focused and displayed at the same distance for the viewer, so that the viewer can see all images in focus.

    [0060] FIG. 11 illustrates the structure of the human eye, wherein (1101) is a lens, (1102) and (1104) are Rods, which sense brightness, and (1103) are Cones, which sense three colors.

    [0061] FIG. 11A shows a microscopic image of Rods and Cones. Cones have three different types: the first type senses long wavelengths of light (red), the second senses middle wavelengths (green), and the third senses short wavelengths (blue).

    [0062] FIG. 12 shows the sensitivity curves (1201, 1202 and 1203) of each type of Cone to the wavelength of light. For example, the first type of Cone absorbs light energy with the sensitivity curve L (1201), having wavelengths between about 500 nm and 650 nm with its peak at 560 nm, converts the photon energy to chemical energy, and transfers it to the brain through the nervous system. The second type of Cone absorbs light energy with the sensitivity curve M (1202), converting photon energy around 530 nm (green). The third type does the same with the curve S (1203, blue). This means that the function of the first type of Cone is to sense primarily red light, the second green, and the third blue. If the first type of Cone is unable to function, the viewer will have color blindness of red, or Protanomaly or Protanopia depending on the extent. If the second type of Cone has a deficit, it will cause color blindness of green, or Deuteranomaly or Deuteranopia depending on the extent. A deficit in the third type causes color blindness of blue, or Tritanomaly or Tritanopia.

    [0063] FIG. 13 shows the population distribution of color blindness. 92% of people have normal color vision. The largest numbers of color-blind patients have Deuteranomaly (2.7%) and Deuteranopia (0.59%), followed by Protanomaly (0.66%) and Protanopia (0.59%), then Tritanopia (0.016%) and Tritanomaly (0.01%). The color bars show how patients in each category will see the colors. Complete color blindness occurs in less than 0.0001% of people. The majority of color blindness, except complete color blindness, can be corrected by an enhanced vision system.

    [0064] FIG. 14 shows the patterns used for color blindness testing. Normal vision sees the pattern (1401), which has a red character “6” over a background of yellow, green and blue, and the pattern (1405), which has a green character “74” over a red and yellow background. Protanopic and Deuteranopic vision cannot discriminate red and green, and therefore cannot see these characters, as shown in (1402, 1403, 1406 and 1407), although Tritanopic vision can read them, as shown in (1404 and 1408).

    [0065] FIG. 15 shows another example of how images are perceived in each type of color blindness. The image (1501) is by Normal Vision. The image (1504) is by Protanopic Vision, which loses red and a large part of green, because the sensitivity of the first type of Cone photoreceptor overlaps from red to green. The image (1508) is by Deuteranopic Vision, which loses green and a large part of red. The image (1510) is by Tritanopic Vision, which loses blue.

    [0066] FIG. 16 shows the Field of View (or FOV) of human eyes. Human eyes can see an image in high resolution and in color only in the central area of the field of view, as shown in the green area (1605), but the eyes can see a very wide-angle view in lower resolution and without color, as shown in the blue area, which is as wide as 180 degrees horizontally, from +90° (1607) to −90° (1604), and 120 degrees vertically, from +50° (1606) to −70° (1608).

    [0067] FIG. 17 shows an example of this invention with a hypothetical visual field with multiple objects at varying focal distances. The camera (1701) captures the objects (901, 902, 903 and 904 in FIG. 9) at various distances, auto-focuses on each object, and captures the focused images. The display will show all focused images (1001, 1002, 1003 and 1004 in FIG. 10) in the field of the display (1702).

    [0068] FIG. 18 illustrates an example of this invention wherein the video signal is modulated to enhance the video image for a viewer 1) who needs the images of individually focused objects regardless of distance, with adjusted size and brightness of image (presbyopia, myopia or hyperopia), or 2) who needs strengthened color to correct color blindness, or 3) who needs visualized images in darkness (night vision). (1801) is a visual sensor such as a camera with a CMOS image sensor, and (1802) is a processor to modulate the images from the camera, providing a viewer of the above 1) and/or 3) with modulated image signals of individually focused objects regardless of distance, with adjusted size and brightness of images, and providing a viewer of the above 2) with strengthened color to correct color blindness. The display system (1803) shows said modulated images to the viewer.

    [0069] FIG. 19 illustrates an example of this invention wherein the video signal from the camera (1901) to the processor (A) (1902) is analyzed for focus and brightness and fed back to the camera (1901), so that the images of individually focused objects are captured with adjusted brightness. The processor (A) (1902) transmits the image data to the Processor (B) (1904) of an external unit, such as a cellphone, which has a more powerful processor than that of the wearable display, through wireless transmission (1903) such as electromagnetic wave or modulated light. Video data processing often requires heavy computation and consumes more energy than the battery of a wearable display can support. The external processor (B) (1904) processes the data and returns it to another processor (C) (1906) in the wearable display through wireless transmission (1905), and the Processor (C) transfers the data to the display (1907) in the wearable display.

    [0070] FIG. 20 illustrates an example of this invention wherein some or all of the chips on a wearable display are packaged in a single SOC (system on chip), single-scale package, or single-die package.

    [0071] FIG. 21 shows an example of a face-mounted display made by Olympus, “Eye-Trek”. This completely obstructs the viewer’s view of the outside world.

    [0072] FIG. 22 shows a head-mounted display, HMZ-T2 by Sony, which is a wearable display that is completely opaque.

    [0073] FIG. 23 shows an example of a wearable display with see-through optics using half-mirrors. The light transmission is less than 50%, and the image becomes dark.

    [0074] FIG. 24 shows an example of wearable glasses with a display and camera. Glass by Google, as shown in FIG. 24, and MEG 4.0 by Olympus are both examples of wearable displays that cover a minor area of the visual field. The displays are meant to be worn while conducting activities of daily living; however, the majority of the visual field is unobstructed, and therefore users will have no issues perceiving peripheral cues while using these products.

    [0075] FIG. 25A shows an example of digital Pulse-Width Modulation (PWM) of brightness. Analog brightness control used to be popular for analog display devices such as CRT and LCD; it uses analog control of the driving voltage or current of the display device. However, precise control of brightness is difficult with analog control, and digital brightness control is more accurate; in other words, higher-grayscale brightness control is possible. Instead of changing the duty ratio of the pulse width, binary PWM, as shown in FIG. 25A, is becoming more popular, because the digital video signal can be used directly, with “1” as an ON pulse and “0” as an OFF pulse. FIG. 25A shows an example of 8-bit binary PWM wherein the entire frame time is divided into 8 pulses whose pulse widths are ½ of the frame time for D0 (Most Significant Bit or MSB, 2501), ¼ of the frame time for D1 (2502), ⅛ for D2, 1/16 for D3, 1/32 for D4, 1/64 for D5, 1/128 for D6, and 1/256 for D7, the Least Significant Bit (LSB, 2503). FIG. 25B shows an example of 8-bit binary PWM with the data 10101001 in binary, which is 169 in decimal and represents a brightness of 169/256=66% of peak brightness. The first “1” is D0, or MSB (2504), so ½ of the frame time must be ON at peak brightness. The “0” at D1 (2505) means the next ¼ of the frame time must be OFF, meaning zero brightness. This process continues to D7 (Least Significant Bit or LSB, 2506). Thus any brightness that is an integer multiple of the LSB (=1/256), from 0 to 1, can be shown with 8-bit binary PWM. However, sequential order from MSB to LSB requires very high bandwidth on the signal transfer lines. FIG. 25C shows an example of a non-sequential order of data transfer, which reduces the bandwidth requirement of the signal transfer. The details of non-sequential data transfer are described in U.S. Pat. No. 8,228,595, Ishii et al.
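
    As a worked illustration of the binary PWM scheme above, the following sketch (plain Python, not part of the disclosure; the function name is illustrative) decomposes an 8-bit gray value into its MSB-to-LSB bit planes and confirms that the time-averaged brightness of 10101001 equals 169/256.

        # 8-bit binary PWM as described in [0075]: bit plane Dk is displayed
        # for 1/2^(k+1) of the frame time, so the time-averaged brightness
        # of a gray value v is exactly v/256.
        def pwm_bit_planes(value, bits=8):
            """Return (bit, frame-time fraction) pairs from MSB (D0) to LSB."""
            assert 0 <= value < 2 ** bits
            planes = []
            for k in range(bits):                    # k = 0 is the MSB, D0
                bit = (value >> (bits - 1 - k)) & 1  # bit shown in plane Dk
                weight = 1.0 / 2 ** (k + 1)          # fraction of frame time
                planes.append((bit, weight))
            return planes

        planes = pwm_bit_planes(0b10101001)          # 169 in decimal
        print(sum(bit * w for bit, w in planes))     # 0.66015625 = 169/256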

    DESCRIPTION OF PREFERRED EMBODIMENTS

    [0076] This invention seeks to create such a visual sensory and display system via a visual image data flow as depicted in FIGS. 17 through 20. Cameras are mounted onto a set of glasses, pointed in line with the user's visual field. The cameras convert visual images into image data, which is then sent to a modulation system where the image data is divided into specific focal distances. The modulation system may relay this information back to the camera to recapture the image through an optical focusing system, or the modulator may focus the object through digital algorithms. The modulator ultimately outputs digital image data in which the focal distances of multiple objects are recalibrated to a distance that the viewer can readily perceive.
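
    A minimal sketch of this capture-modulate-display flow follows; all class and function names are illustrative placeholders, not APIs defined by this disclosure, and the segmentation and refocusing steps are stubbed out.

        # Hypothetical sketch of the data flow in [0076]: capture, divide the
        # frame into focal-distance regions, refocus each region, display.
        from dataclasses import dataclass

        @dataclass
        class Region:
            pixels: object           # image data for one object in the field
            focal_distance_m: float  # estimated distance to that object

        def segment_by_focal_distance(frame):
            # placeholder: a real system would use depth or focus metrics
            return [Region(pixels=frame, focal_distance_m=3.0)]

        def refocus(region, target_m):
            # placeholder for optical recapture or digital refocusing
            region.focal_distance_m = target_m
            return region

        def modulate_frame(frame, comfortable_distance_m=1.0):
            regions = segment_by_focal_distance(frame)
            return [refocus(r, comfortable_distance_m) for r in regions]

        print(modulate_frame("raw image data"))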

    [0077] FIG. 17 shows an example of the embodiments of this invention with a hypothetical visual field with multiple objects at varying focal distances. The camera (1701) captures the objects (901, 902, 903 and 904 in FIG. 9) at various distances, auto-focuses on each object, and captures the focused images. The display will show all focused images (1001, 1002, 1003 and 1004 in FIG. 10) in the field of the display (1702).

    [0078] FIG. 18 illustrates an example of the embodiments of this invention wherein the video signal is modulated to enhance the video image for a viewer 1) who needs the images of individually focused objects regardless of distance, with adjusted size and brightness of image (presbyopia, myopia or hyperopia), or 2) who needs strengthened color to correct color blindness, or 3) who needs visualized images in darkness (night vision). (1801) is a visual sensor such as a camera with a CMOS image sensor, and (1802) is a processor to modulate the images from the camera, providing a viewer of the above 1) and/or 3) with modulated image signals of individually focused objects regardless of distance, with adjusted size and brightness of images, and providing a viewer of the above 2) with strengthened color to correct color blindness. The display system (1803) shows said modulated images to the viewer.
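
    The three enhancement needs enumerated above suggest a simple mode dispatch in the processor (1802). The numpy sketch below is illustrative only; the mode names, gains, and operations are assumptions for demonstration, not values from the disclosure.

        # Illustrative dispatch for the three viewer needs in [0078].
        import numpy as np

        def modulate(frame, mode):
            """frame: float RGB image in [0, 1], shape (H, W, 3)."""
            if mode == "focus_size_brightness":    # presbyopia/myopia/hyperopia
                return np.clip(frame * 1.2, 0.0, 1.0)             # brightness gain
            if mode == "strengthen_color":         # color blindness
                out = frame.copy()
                out[..., 1] = np.clip(out[..., 1] * 1.5, 0.0, 1.0)  # boost green
                return out
            if mode == "night_vision":             # darkness
                return np.clip(frame ** 0.5, 0.0, 1.0)            # lift dark areas
            raise ValueError(mode)

        frame = np.random.rand(4, 4, 3)
        print(modulate(frame, "night_vision").shape)              # (4, 4, 3)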

    [0079] This invention seeks to create the aforementioned visual sensory and display system in the shape of common glasses (lens(s), nose piece, and ear brace(s)) that is light weight and comfortable to wear. To accomplish this, it may become necessary to divide the modulation component depicted in FIG. 18 into three sections, Processor (A), (B), and (C) as depicted in FIG. 19. The purpose of this division is to allow for superior computing power in Processor (B) to be made external to the glasses, while the camera(s) and display(s) are still fitted into the glasses.

    [0080] FIG. 19 illustrates an example of the embodiments of this invention wherein the video signal from the camera (1901) to the processor (A) (1902) is analyzed for focus and brightness and fed back to the camera (1901), so that the images of individually focused objects are captured with adjusted brightness. The processor (A) (1902) transmits the image data to the Processor (B) (1904) of an external unit, such as a cellphone, which has a more powerful processor than that of the wearable display. Video data processing often requires heavy computation and consumes more energy than the battery of a wearable display can support. The external processor (B) (1904) processes the data and returns it to another processor (C) (1906) in the wearable display, and the Processor (C) transfers the data to the display (1907) in the wearable display. The data transmissions between Processor (A) and Processor (B) (1903) and between Processor (B) and Processor (C) (1905) are selected from a group of wireless, wired, and fiber-optic means.
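
    A hedged sketch of this three-processor split follows; the links (1903, 1905) are abstracted as plain function calls, and the processing bodies are stand-ins rather than the actual modulation algorithms.

        # Sketch of the Processor (A)/(B)/(C) pipeline in [0080]; the
        # wireless/wired links (1903, 1905) are modeled as function calls.
        def processor_a(raw_frame):
            # on-glasses: analyze focus/brightness (camera feedback omitted),
            # then forward the data toward the external unit
            return {"frame": raw_frame, "focus_ok": True}

        def processor_b(packet):
            # external unit (e.g. a phone): the computation-heavy modulation
            packet["frame"] = packet["frame"].upper()  # stand-in for processing
            return packet

        def processor_c(packet):
            # on-glasses: hand the processed data to the display (1907)
            return packet["frame"]

        print(processor_c(processor_b(processor_a("raw frame"))))  # RAW FRAME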

    [0081] FIG. 20 illustrates an example of the embodiments of this invention wherein some or all of the chips on a wearable display are packaged in a single SOC (system on chip), single-scale package, or single-die package.

    [0082] Another example of the embodiments of this invention is that Processor (B) (1904 in FIG. 19 or 2009 in FIG. 20) is connected to the internet to allow for internet data to be displayed on the glasses.

    [0083] Another example of the embodiments of this invention is that Processor (A) (2002) and Processor (C) (2004 in FIG. 20) communicate directly.

    [0084] Another example of the embodiments of this invention is that the communications between processors ((A) and (B), (B) and (C), and (A) and (C)) in FIG. 19 and FIG. 20 are unidirectional or bidirectional.

    [0085] Another example of the embodiments of this invention is that the image capture and display apparatus are battery powered, or receive power from an external source via wired or wireless power transfer.

    [0086] Another example of the embodiments of this invention is that the image capture and display apparatus has single or multiple audio input(s) and output(s) to allow for user instructions to Processors (A), (B), and (C) in FIG. 19 or FIG. 20, and also for the transfer of information from Processors (A), (B), and (C) to the user.

    [0087] Another example of the embodiments of this invention is that the image capture and display apparatus has a safety feature which comprises a design that allows a margin outside the projected visual field if the projected visual field exceeds 13 degrees from center, with a front-of-eye lens apparatus with more than 60% transparency.

    [0088] An example of the embodiments of this invention is shown in FIG. 1. An optical element, such as a lens with a holographic optical element (HOE) or diffractive optical element (DOE), is shown at (116). A camera is shown at (111). A free-form prism/mirror is shown at (115). A microdisplay is shown at (114) and a light source is shown at (118). A set of batteries is shown at (117). Controller circuitry is shown at (113).

    [0089] Color blindness is defined as a deficiency in the ability to differentiate discrete areas of the visual field by varying wavelengths of light: approximately 564-580 nm, approximately 534-545 nm, and approximately 420-440 nm. These ranges are approximate; as shown in FIG. 12, the physiologic sensitivities of cone cells have distributions that exceed these wavelengths. FIG. 14 illustrates an example of a test apparatus for color blindness. The Ishihara Color Blindness Test is an internationally accepted form of testing color blindness; the standard viewer is able to score 100%, while any deviation is considered a form of color blindness. The apparatus shall modulate the cumulative amount and mixture of light emitted from the display to increase or maximize (100% is maximum) the score on the Ishihara Color Blindness Test, or to increase the ability to differentiate colors in the three ranges of wavelength described here (approximately 564-580 nm, approximately 534-545 nm, and approximately 420-440 nm). The algorithm to modulate the displayed image shall vary the total light emission from the display and the mixture of colors (wavelengths of light) emitted.
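
    One possible color-strengthening modulation consistent with this paragraph is to exaggerate the red-green opponent signal so that hues a protan or deutan viewer confuses are pushed further apart. The numpy sketch below is a hedged illustration, not the algorithm claimed by this disclosure; the function name and gain value are assumptions.

        # Hedged sketch: boost red-green separation in an RGB image.
        import numpy as np

        def strengthen_red_green(rgb, gain=0.5):
            """rgb: float image in [0, 1], shape (H, W, 3)."""
            r, g = rgb[..., 0], rgb[..., 1]
            opponent = r - g                   # red-green opponent channel
            out = rgb.copy()
            out[..., 0] = np.clip(r + gain * opponent, 0.0, 1.0)
            out[..., 1] = np.clip(g - gain * opponent, 0.0, 1.0)
            return out

        img = np.random.rand(8, 8, 3)
        print(strengthen_red_green(img).shape)   # (8, 8, 3)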

    [0090] Visual Acuity is defined as the ability to differentiate objects at a distance.


    Acuity=1/(gap size [arc min])

    The standard viewer has a visual acuity of 1.0 and is therefore able to differentiate objects at 1 arc min (1/60 of a degree). Visual acuity less than 1.0 is considered a deficiency in visual acuity. A comparison of 304 and 404 demonstrates a loss of visual acuity: in 304 the horizontal lines of the letter E can be differentiated, while in 404 they cannot. To provide a conceptual description: given a situation whereby the standard viewer perceives 304, and an individual with a deficiency in visual acuity as described above perceives 404, the apparatus shall enable the individual with the deficiency to perceive 304. For a more formal definition, the apparatus shall enable a viewer to increase visual acuity as defined by 1/(gap size [arc min]).
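
    As a worked example of this definition (illustrative Python; the function names are hypothetical): a viewer who can only resolve a 2 arc min gap has acuity 0.5, and magnifying an object by a factor m shrinks its effective gap to gap/m, so a 2x magnification restores acuity 1.0.

        # Worked example of the acuity definition in [0090].
        def visual_acuity(gap_arcmin):
            return 1.0 / gap_arcmin

        def magnification_to_restore(gap_arcmin, target_acuity=1.0):
            # magnifying by m turns a gap of g into g/m, so
            # 1/(g/m) reaches target when m = target * g
            return target_acuity * gap_arcmin

        print(visual_acuity(1.0))              # 1.0 -> standard viewer
        print(visual_acuity(2.0))              # 0.5 -> deficient acuity
        print(magnification_to_restore(2.0))   # 2.0x magnification needed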

    [0091] The algorithm to modulate the image shown on the apparatus shall combine two elements: (1) magnification of the object in question and (2) an increase in contrast. Magnification is defined as an increase in the horizontal and vertical visual arc occupied by the object in question. Contrast (K) is the difference in luminance of bright (Lh) and dark (Ll) visual regions, defined as:


    K=(Lh−Ll)/Lh, with 0≤K≤1.

    [0092] K=0 means there is no contrast while Kmax=1.

    [0093] The apparatus shall provide an option to invert black and white in the field of view. Although the mathematical difference in brightness between areas remains unchanged when the dark and light areas of the visual field are inverted, the eye is trained to detect small areas of light against a dark background far better than small areas of dark against a light background.
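
    A short numpy sketch of the contrast definition and the inversion option follows (illustrative only); it also shows that inversion preserves the absolute brightness difference Lh−Ll.

        # Contrast K from [0091]-[0092] and the inversion option from [0093].
        import numpy as np

        def contrast(lh, ll):
            """K = (Lh - Ll) / Lh, with 0 <= K <= 1 for Lh >= Ll >= 0."""
            return (lh - ll) / lh

        def invert(img):
            """Swap light and dark areas of a [0, 1] image."""
            return 1.0 - img

        img = np.array([[0.9, 0.2], [0.9, 0.2]])
        print(contrast(img.max(), img.min()))                # 0.777...
        inv = invert(img)
        print(img.max() - img.min(), inv.max() - inv.min())  # both 0.7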

    [0094] The apparatus shall increase visual acuity (defined as 1/gap size [arc min]) in an individual with a deficiency in visual acuity (defined as visual acuity less than 1.0) by an algorithm using at least one of (1) increasing the magnification of the object in question (defined as an increase in the horizontal and vertical arc lengths of an object in the visual field) and (2) increasing the contrast (K defined as (Lh−Ll)/Lh). The apparatus shall provide an option to invert light and dark (black and white) areas depending on the preference of the user.

    [0095] Conditions exist whereby visual acuity (1/(gap size [arc min])) is deficient for objects at a near focal length (defined here as a distance between the object and the viewer of less than 1 m) or at a far focal length (defined here as a distance of more than 1 m). The area outside the circle in FIG. 5 illustrates deficient visual acuity at far focal length, corrected with a concave lens (area inside the circle). The area outside the circle in FIG. 6 illustrates deficient visual acuity at near focal length, corrected with a convex lens (area inside the circle). Conceptually, the apparatus shall enable a viewer with a deficiency of visual acuity for near objects to perceive them in a manner similar to the area inside the circle in FIG. 6, and a viewer with a deficiency of visual acuity for far objects to perceive them in a manner similar to the area inside the circle in FIG. 5.

    [0096] Given a deficiency in visual acuity that is dependent on the distance from viewer to object, the apparatus employs an algorithm that varies (1) the focal length of the camera depending on the distance from the viewer to the object, (2) the magnification of the object in question, and (3) the contrast of the emitted display image, to maximize visual acuity.
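
    The following numpy sketch combines these three adjustments on a single grayscale patch; the camera refocusing step is a hardware action and is only noted in a comment, and the distance threshold and magnification factor are assumptions for illustration.

        # Hedged sketch of the three adjustments in [0096] on a grayscale patch.
        import numpy as np

        def magnify(img, m):
            """Integer magnification by pixel replication."""
            return np.kron(img, np.ones((m, m)))

        def stretch_contrast(img):
            """Rescale so the darkest area is 0 and the brightest 1 (K -> 1)."""
            lo, hi = img.min(), img.max()
            return (img - lo) / (hi - lo) if hi > lo else img

        def enhance_object(img, distance_m):
            # (1) the camera would be refocused to distance_m here (hardware)
            m = 2 if distance_m < 1.0 else 1            # (2) magnify near objects
            return stretch_contrast(magnify(img, m))    # (3) maximize contrast

        patch = np.array([[0.3, 0.5], [0.5, 0.7]])
        print(enhance_object(patch, distance_m=0.5))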