Methods and Apparatus for Contrast Sensitivity Compensation
20230072493 · 2023-03-09
Inventors
CPC classification
International classification
Abstract
A system and methods for contrast sensitivity compensation provides for correcting the vision of users whose vision is deficient for discerning high spatial frequencies. The system and methods use measurements of the user's contrast detection as a function of spatial frequency in the image to correct images in real time. The system includes a head-mountable device that includes a camera and a processor that can provide enhanced images at video framing rates.
Claims
1. A method of providing enhanced video images to a user using a programmable electronic device, the method comprising: obtaining input video images comprising a plurality of input images A; computing, in the programmable electronic device, the application of a contrast enhancement function to the plurality of input images A to form a plurality of contrast enhanced images comprising contrast enhanced video images C, where the contrast enhancement function is user-specific and where the contrast enhancement function is frequency-dependent; and presenting, on a display of the programmable electronic device and to the user, the contrast enhanced video images C, such that the contrast enhanced video images are preferentially enhanced at spatial frequencies discernable to the user.
2. The method of claim 1, where the contrast enhancement function depends on a contrast sensitivity function, CSF.sub.p(u), obtained for the user p corresponding to minimum discernable contrasts to the user as a function of spatial frequency u; and a spatial frequency cut-off for the user, c.sub.p, where c.sub.p is the maximum spatial frequency at which the user can discern contrast.
3. The method of claim 2, further comprising: obtaining the CSF.sub.p(u) and the c.sub.p from a vision test of the user.
4. The method of claim 2, where the contrast enhancement function further depends on a contrast sensitivity function CSF.sub.n(u) for persons n having normal contrast sensitivity, and a spatial cut-off frequency, c.sub.n, where c.sub.n is the maximum spatial frequency at which a person having normal contrast sensitivity can discern contrast.
5. The method of claim 4, where each image of the plurality of input images includes a luminance image, Y(x, y) and chrominance images C.sub.B(x, y), C.sub.R(x, y), where the contrast enhancement function is
C=C(x,y)=[Y′(x,y),C.sub.B(x,y),C.sub.R(x,y)].
6. A method of providing enhanced images to a user using a programmable electronic device, the method comprising: obtaining input video images comprising a plurality of input images A, where each image of the plurality of input images includes a luminance image Y(x, y) and chrominance images C.sub.B(x, y), C.sub.R(x, y); forming a contrast enhancement function CEF.sub.p(u), as
C(x,y)=[Y′(x,y),C.sub.B(x,y),C.sub.R(x,y)]; and presenting, on a display of the programmable electronic device and to the user, the contrast enhanced video images C=C(x, y), such that the contrast enhanced video images are preferentially enhanced at spatial frequencies discernable to the user.
7. The method of claim 6, where CSF.sub.p(u) is obtained from a measurement of the vision of the user viewing target images, where the target images each have a contrast and a spatial frequency, where CSF.sub.p(u) is a minimum discernable contrast as a function of target image spatial frequency.
8. The method of claim 7, further comprising: testing the vision of the user to obtain CSF.sub.p(u).
9. A contrast sensitivity compensation system wearable by a user, the system comprising: a memory including a stored program; a camera mounted on the user aimed to view the scene in front of the user and operable to obtain input video images of the scene comprising a plurality of input images A; a processor programmed to execute the stored program to compute the application of a contrast enhancement function to the plurality of input images A to form a plurality of contrast enhanced images comprising contrast enhanced video images C, where the contrast enhancement function is user-specific, and where the contrast enhancement function is frequency-dependent; and present, to the user on a display of the programmable electronic device, the contrast enhanced video images C=C(x, y).
10. The contrast sensitivity compensation system of claim 9, where the contrast enhancement function depends on a contrast sensitivity function CSF.sub.p(u) obtained for the user p corresponding to minimum discernable contrasts to the user as a function spatial frequency u and a spatial frequency cut-off for the user c.sub.p, where c.sub.p is the maximum spatial frequency at which the user can discern contrast.
11. The contrast sensitivity compensation system of claim 10, where the CSF.sub.p(u) and the c.sub.p are obtained from a vision test of the user.
12. The contrast sensitivity compensation system of claim 10, where the contrast enhancement function further depends on a contrast sensitivity function CSF.sub.n(u) for persons n having normal contrast sensitivity, and a spatial cut-off frequency c.sub.n, where c.sub.n is the maximum spatial frequency at which a person having normal contrast sensitivity can discern contrast.
13. The contrast sensitivity compensation system of claim 12, where each image of the plurality of input images includes a luminance image, Y(x, y) and chrominance images, C.sub.B(x, y), C.sub.R(x, y), where the contrast enhancement function is
C=C(x,y)=[Y′(x,y),C.sub.B(x,y),C.sub.R(x,y)].
14. (canceled)
15. (canceled)
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
[0036] Reference symbols are used in the Figures to indicate certain components, aspects or features shown therein, with reference symbols common to more than one Figure indicating like components, aspects or features shown therein.
DETAILED DESCRIPTION OF THE INVENTION
[0037] Certain embodiments of the present invention are directed to an apparatus to provide images that enhance the vision of users having a loss of contrast sensitivity of high spatial frequencies. The apparatus presents modified images that enhance the contrast, specifically for high spatial frequencies, to correct for deficiencies in a user's visual system. Certain other embodiments enhance images within the discernable spatial frequency range of the user.
[0038] By way of a specific embodiment,
[0039] Goggles 120 include a body 122 and a strap 125 for holding the goggles on the user's head and a connector 128 that mates with smartphone connector 117. Body 122 includes, as shown in
[0040] In certain embodiments, smartphone 110 is provided with programming, as through a contrast sensitivity compensation application (referred to herein as a “CSC App”) which can: 1) operate camera 111 in a video mode to capture a stream of “input images”; 2) perform image processing on each input image to generate a stream of “output images”; and 3) present the stream of output images to screen 113. In certain embodiments, the stream of output images is presented sequentially side-by-side as two identical images—one in area 112 and one in area 114. Further, it is preferred that contrast sensitivity compensation system 100 operate so that the time delay between when the input images are obtained and when the output images are provided to screen 113 be as short as possible so that a user may safely walk and interact with the environment with goggles 120 covering their eyes.
[0041] Contrast sensitivity compensation system 100 has adjustable features that allow it to match the physiology of the user for use in different settings. These features are generally set once for each user, possibly with periodic adjustment. Thus, for example, given the spacing between screen 113 and the eyes of user U, focusing wheel 127 permits an optimal setting of the distance from the display (113) to lenses 124 and 126. In addition, lens 124 and/or 126 may include refractive error correction. Further, it is important that the viewed spacing between the images in areas 112 and 114 match the user's interpupillary distance (IPD) to facilitate comfortable binocular viewing and prevent diplopia. This may be accomplished, for example, by shifting the spacing of the output images in areas 112 and 114 to match the IPD. Certain embodiments, described subsequently, include eye tracking to determine a user's gaze direction. For these systems, it is sometimes necessary to calibrate the system to obtain a correlation between the eye tracking measurement and actual gaze direction.
[0042] In various embodiments, the user may adjust settings using: input device 123, which may be a touchpad and which is electrically connected to smartphone 110, which is further programmed to modify the CSC App according to such inputs; a Bluetooth game controller that communicates with smartphone 110 via Bluetooth; voice control using the microphone of the phone; gesture control using available devices such as the NOD gesture control ring (see, for example, http://techcrunch.com/2014/04/29/nod-bluetooth-gesture-control-ring/); or the use of an eye tracker to implement gaze-directed control.
[0043] In addition, there are other features of contrast sensitivity compensation system 100 that can either be set up once for a user or may be user-adjustable. These features may include, but are not limited to, adjustments to the magnitude, shape, size, or placement of magnified portions of the output image, and color enhancement functions such as contrast, blur, ambient light level or edge enhancement of the entire image or portions of the image. In other embodiments, the compass and/or accelerometers within smartphone 110 may be used for enhancing orientation, location, or positioning of output images.
[0044] In certain embodiments, sound and/or vibration may be provided on smartphone 110 to generate proximity and hazard cues. In other embodiments, the microphone of smartphone 110 can be used to enter voice commands to modify the CSC App. In certain other embodiments, image stabilization features or programming of smartphone 110 are used to generate output images.
[0045] In one embodiment, by way of example only, goggles 120 are commercially available virtual-reality goggles, such as Samsung Gear VR (Samsung Electronics Co. Ltd., Ridgefield Park, N.J.), and smartphone 110 is a Galaxy S8 (Samsung Electronics Co. Ltd., Ridgefield Park, N.J.). The Samsung Gear VR includes a micro USB connector to provide an electrical connection to smartphone 110 and has, as input devices 123, a touchpad and buttons.
[0046] It will be understood by those in the field that contrast sensitivity compensation system 100 may, instead of including a combination of smartphone and goggles, be formed from a single device which includes one or more cameras, a processor, a display device, and lenses that provide an image to each eye of the user. In an alternative embodiment, some of the components are head-mounted and the other components are in communication with the head-mounted components using wired or wireless communication. Thus, for example, the screen and, optionally, the camera may be head-mounted, while the processor communicates with the screen and camera using wired or wireless communication.
[0047] Further, it will be understood that other combinations of elements may form the contrast sensitivity compensation system 100. Thus, an electronic device which is not a smartphone, but which has a processor, memory, camera, and display may be mounted in goggles 120. Alternatively, some of the electronic features described as being included in smartphone 110 may be included in goggles 120, such as the display or communications capabilities. Further, the input control provided by input device 123 may be provided by a remote-control unit that is in communication with smartphone 110.
[0048] One embodiment of the transformation of camera images into a displayed image is illustrated using an illustrative image 500 in
[0049] To correct for the loss of contrast, user U may wear contrast sensitivity compensation system 100 and run the CSC App with camera 111 directed at the scene of image 500. The CSC App operates camera 111 to obtain image 500, which is processed to generate an image 800B of
[0050] Image 900B of
[0051] To prevent distortions, in addition to performing a spatial frequency dependent contrast adjustment customized to the user's CSF, it is necessary to increase the magnification and then apply a customized contrast adjustment within the envelope of the user's CSF to enhance the image for optimal vision.
Determination of the Contrast Enhancement Function
[0052] In certain embodiments, the CEF as a function of frequency u for a user p (written as CEF.sub.p(u)) is obtained from a subjective measurement of how the user's visual system discerns contrast as a function of spatial frequency, followed by mathematical manipulation of the measurement to obtain the CEF. Determination of contrast sensitivity as a function of spatial frequency is known in the art (see, for example, Pelli and Bex, Measuring contrast sensitivity, Vision Res. 2013 Sep. 20; 90:10-14, doi:10.1016/j.visres.2013.04.015, and Chung S T et al., Comparing the Shape of Contrast Sensitivity Functions for Normal and Low Vision, Invest Ophthalmol Vis Sci. (2016)).
[0053] A useful way of characterizing the sensitivity of the visual system is the contrast sensitivity CS as a function of spatial frequency which is written as CSF.sub.p(u). The CSF.sub.p(u) can be written as a mathematical function or a linear array, as is appropriate for its use. While not meant to limit the scope of the present invention, the CSF and other functions derived from or related to the CSF may be used to calculate a CEF, which may then be used to modify images.
[0054] The variation of contrast sensitivity with spatial frequency is demonstrated in
[0055] The detectable contrast ranges from 0% for no contrast between dark and light, to 100% for a maximum contrast between dark and light. The CSF is the sensitivity and is the inverse of the contrast detection threshold, with a corresponding range of from 1 to ∞, and the log of the CSF has a corresponding range of from 0 to ∞.
[0056] In practice, the CSF may be determined by providing a user with an image or images having differing amounts of contrast and spatial frequency and by having them report on the limits of their contrast detection. Thus, the user is presented with several images each having a single spatial frequency (that is, with light and dark bands having the same spacing) and a contrast (that is, with a certain contrast between the light and dark bands). The user is prompted to indicate which image is at their limit of contrast detection. This is then repeated for several spatial frequencies. The result is a list of contrast detection thresholds for each spatial frequency, which is that user's CSF.
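The threshold-collection procedure above can be sketched as follows. The spatial frequencies and threshold values below are hypothetical examples, and simple interpolation stands in for whatever parametric model a real measurement would fit:

```python
import numpy as np

# Hypothetical contrast-detection thresholds reported by a user at a
# handful of test spatial frequencies (cycles/degree), as in [0056].
test_frequencies = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
thresholds = np.array([0.010, 0.006, 0.005, 0.008, 0.030, 0.200])

# Sensitivity is the inverse of the detection threshold at each frequency.
csf_samples = 1.0 / thresholds

def csf_p(u):
    """Estimate the user's CSF at arbitrary spatial frequency u by
    interpolating between the measured samples."""
    return np.interp(u, test_frequencies, csf_samples)
```

The resulting list of thresholds per frequency is that user's CSF; evaluating `csf_p` between the test frequencies gives the continuous function used in the transformations below.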
[0057]
[0058] In the examples of
[0059] A useful measure for considering the loss of contrast detection relative to a user with a normal visual system is the contrast attenuation (CA.sub.p), which is the ratio of the value of the CSF.sub.p of a user to the CSF of a user with normal vision, or CA.sub.p=CSF.sub.p/CSF.sub.n. CA.sub.p provides an easy determination of how a user with decreased contrast sensitivity views an image relative to how a user with normal contrast sensitivity views an image.
[0060] For the examples provided herein, at a spatial frequency less than f*, as indicated by the arrow 410, the contrast attenuation ratio is constant and less than 1; that is, the contrast loss is not size dependent. At a spatial frequency greater than f*, as indicated by the arrow 420, the contrast attenuation ratio decreases (relative contrast sensitivity loss increases) with frequency. It is thus seen that correcting for contrast loss requires a constant enhancement at low spatial frequencies and an increasing enhancement at higher spatial frequencies, as discussed subsequently.
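The enhancement function itself (Eq. 2) is given elsewhere in the specification; purely as an illustration of the logic just described, one candidate compensation boosts each frequency by the inverse of the attenuation ratio CA.sub.p and applies no boost beyond the user's cut-off, where no enhancement can restore visibility. The function below is an illustrative assumption, not the claimed Eq. 2:

```python
import numpy as np

def candidate_cef(u, csf_p, csf_n, c_p):
    """Hypothetical contrast-enhancement gain at spatial frequencies u.

    csf_p, csf_n: callables returning the user's and a normal observer's
    contrast sensitivity; c_p: the user's cut-off frequency.
    """
    ca = csf_p(u) / csf_n(u)   # contrast attenuation ratio CA_p
    gain = 1.0 / ca            # undo the attenuation where possible
    # Beyond the user's cut-off no gain restores visibility, so those
    # frequency bands are left unmodified.
    return np.where(u <= c_p, gain, 1.0)
```

Below f* this yields the constant enhancement described above; above f* the gain grows as the attenuation ratio falls.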
Simulation of a User's Visual System Using the Contrast Enhancement Function
[0061] In considering the effect of processing images to be viewed by users having contrast loss, it is useful to have a simulation of how particular images appear to such a user. CSF.sub.p is a measure of how a user subjectively views an object and may be used to simulate how an image would appear to a user according to their CSF.sub.p. In discussions of these simulations and the viewing of all transformed images, it is assumed that the reader has normal contrast detection.
[0062] Thus, for example, consider how an image appears to a user with a given CSF.sub.p. Mathematically, an image may be described as a 2-D array A of intensity values. The array A may be viewed, for example, on a computer display and presented as an image A, and the terms “array” and “image” are generally used interchangeably herein.
[0063] The application of a CSF.sub.p to an image is performed by multiplying the Fourier transform of A, ℱ{A}, by CSF.sub.p, which may be written as follows:
V.sub.p[A]=ℱ{A}×CSF.sub.p(u), Eq. 1a
followed by the inverse Fourier transform
B=ℱ.sup.−1{V.sub.p[A]}, Eq. 1b
where B is an image obtained by modifying A by CSF.sub.p. In other words, a user whose vision is characterized by a CSF will view the adjustment of image A as image B.
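Eqs. 1a and 1b can be sketched with NumPy's FFT routines as follows. The Gaussian CSF model and the radial-frequency grid are illustrative assumptions, not a measured CSF.sub.p:

```python
import numpy as np

def simulate_appearance(A, csf):
    """Simulate Eqs. 1a-1b: B = F^-1{ F{A} x CSF(u) } for a grayscale image A."""
    FA = np.fft.fft2(A)                                # F{A}, Eq. 1a
    fy = np.fft.fftfreq(A.shape[0])                    # cycles/pixel, rows
    fx = np.fft.fftfreq(A.shape[1])                    # cycles/pixel, cols
    u = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))  # radial spatial frequency
    return np.fft.ifft2(FA * csf(u)).real              # Eq. 1b

# Illustrative CSF: full response at zero frequency, decaying at higher
# frequencies, so B is a low-pass rendering of A.
csf = lambda u: np.exp(-((u / 0.1) ** 2))

A = np.random.default_rng(0).random((64, 64))
B = simulate_appearance(A, csf)
```

Because this model gives csf(0) = 1, the mean luminance of B matches that of A while fine detail is attenuated, mimicking how image A appears as image B to a user with contrast loss.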
[0064]
[0065] In another example,
[0066] A comparison of CSF.sub.1(u) (curve 303) and image 600A to CSF.sub.2(u) (curve 305) and image 600B reveals that the values of CSF.sub.2(u) are lower than the values of CSF.sub.1(u), and that image 600B, which corresponds to CSF.sub.2(u), has much less spatial resolution than image 600A, which corresponds to CSF.sub.1(u). Thus, the second user discerns far less detail than does the first user.
Contrast Compensation
[0067] In certain embodiments, a user's loss of contrast sensitivity may be compensated for by adjusting the contrast of an image using the contrast sensitivity data according to the normal contrast sensitivity and the cut-off frequency.
[0068] In one embodiment, the following contrast compensation method is used to enhance the contrast of an image. Each image A(x, y) may be specified in terms of the image's luminance image Y(x, y), and chrominance images C.sub.B(x, y), C.sub.R(x, y), as A(x, y)=[Y(x, y), C.sub.B(x, y), C.sub.R(x, y)]. First, a Fourier transform is performed on the luminance image Y(x, y) to obtain amplitude M.sub.Y(u) and phase P.sub.Y(u) spectra (versus spatial frequency u). Next, the luminance amplitude is enhanced using the user's contrast enhancement function as follows: M′.sub.Y(u)=M.sub.Y(u)×CEF.sub.p(u). Next, an inverse Fourier transform is performed on the enhanced luminance amplitude and the unaltered luminance phase function to obtain enhanced luminance image Y′(x, y). Lastly, the enhanced luminance image is combined with the unaltered chrominance images to obtain the enhanced full color image: C(x, y)=[Y′(x, y), C.sub.B(x, y), C.sub.R(x, y)].
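The four steps above can be sketched as follows; the radial-frequency grid is an assumption, and `cef` stands in for the user's CEF.sub.p:

```python
import numpy as np

def enhance_image(Y, Cb, Cr, cef):
    """Apply the four-step compensation of [0068] to one Y/Cb/Cr frame."""
    # Step 1: Fourier transform of the luminance image -> amplitude, phase.
    F = np.fft.fft2(Y)
    M_Y, P_Y = np.abs(F), np.angle(F)
    fy = np.fft.fftfreq(Y.shape[0])
    fx = np.fft.fftfreq(Y.shape[1])
    u = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    # Step 2: enhance the luminance amplitude: M'_Y(u) = M_Y(u) * CEF_p(u).
    M_enh = M_Y * cef(u)
    # Step 3: inverse transform with the unaltered phase -> Y'(x, y).
    Y_enh = np.fft.ifft2(M_enh * np.exp(1j * P_Y)).real
    # Step 4: recombine with the unaltered chrominance images.
    return Y_enh, Cb, Cr
```

Note that only the luminance amplitude spectrum is modified; the phase and both chrominance planes pass through unchanged, so a CEF of unit gain at every frequency reproduces the input image.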
[0069] In certain embodiments, the compensation may be accomplished, for example, as follows. A contrast enhancement function for user CEF.sub.p as a function of spatial frequency u is defined as:
This CEF.sub.p provides for enhancement of the contrast at spatial frequencies that the user can discern to make an appropriately magnified image (c.sub.n/c.sub.p) appear to the patient the way the unmagnified image would appear to the normally sighted person.
[0070]
[0071]
[0072] In certain embodiments, image 500 is captured by camera 111 of contrast sensitivity compensation system 100 and the image, along with the user's CEF, is stored in the memory of smartphone 110 that is running the CSC App. The CSC App also includes programming to read image 500, apply the contrast enhancement described above, including Eq. 2a, and provide the transformed images C to screen 113, as noted in
[0073] In certain other embodiments, the application of the CEF by the CSC App to an image may require additional computations or have other limitations. Thus, for example, it is not possible for an image to exceed 100% contrast, and thus the intensity in an image will be clipped at the maximum and minimum if the product of the Fourier transform and the contrast enhancement function exceeds 100% contrast. In addition, the mean luminance must remain fixed in order to not saturate the display with the inverse Fourier transform of the enhanced image. These ceiling and floor corrections are built into the compensation algorithm that generates the images shown above.
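The ceiling/floor and mean-luminance constraints described above might be implemented along these lines; the function and parameter names are illustrative, not taken from the specification:

```python
import numpy as np

def clip_preserving_mean(Y_enh, Y_orig, lo=0.0, hi=1.0):
    """Restore the original mean luminance, then clip to the display range."""
    # Hold the mean luminance fixed so the inverse transform of the
    # enhanced image does not saturate the display...
    Y = Y_enh - Y_enh.mean() + Y_orig.mean()
    # ...then clip intensities that would exceed 100% contrast to the
    # display's maximum and minimum.
    return np.clip(Y, lo, hi)
```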
[0074] The effectiveness of enhancing the image is illustrated by simulating how the enhanced images appear. This may be accomplished by taking the contrast-enhanced images (as shown, for example, in
[0075] Magnification of an image must also be used, in conjunction with contrast enhancement, to compensate for a reduction in the cut-off frequency (c.sub.p&lt;c.sub.n), which corresponds to a loss of visual acuity as well as a loss of contrast sensitivity. Increased magnification shifts the spatial frequency spectrum of the scene down an "octave" or more, so that frequencies that were above the cut-off move below it, become visible to the viewer, and can be enhanced.
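The "octave" shift can be illustrated in one dimension: magnifying a grating by a factor of two halves its spatial frequency within the same viewing window, moving content toward (and potentially below) the user's cut-off. The grating frequency here is arbitrary:

```python
import numpy as np

N = 256
f = 32                                  # grating frequency, cycles per window
x = np.arange(N)
grating = np.sin(2 * np.pi * f * x / N)

# 2x magnification by pixel replication, viewed through the same window:
magnified = np.repeat(grating, 2)[:N]

# The dominant spatial frequency drops from f to f/2, one octave down.
peak_orig = int(np.argmax(np.abs(np.fft.rfft(grating))))    # bin f
peak_mag = int(np.argmax(np.abs(np.fft.rfft(magnified))))   # bin f/2
```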
[0076] As the magnification of the image is increased, the CEF changes accordingly to minimize distortion (for magnifications less than c.sub.n/c.sub.p, substitute the magnification for the cut-off frequency ratio in Eq. 2). Thus,
[0077] Examples of images transformed by a CSF for an image with a magnification of 4 are presented herein using an illustrative image 1200. The transformed images are similar to the images described with reference to image 500.
[0078]
[0079]
[0080] Examples of images which may be provided to individuals according to their CSF to correct for their lack of contrast sensitivity of image 1200 are shown in
[0081] The effectiveness of enhancing the image is illustrated by simulating how the enhanced images appear.
[0082] The effect of magnification and contrast enhancement is seen by comparing simulations of
[0083] It is to be understood that the invention includes all of the different combinations embodied herein. Throughout this specification, the term “comprising” shall be synonymous with “including,” “containing,” or “characterized by,” and is inclusive or open-ended and does not exclude additional, unrecited elements or method steps. “Comprising” is a term of art which means that the named elements are essential, but other elements may be added and still form a construct within the scope of the statement. “Comprising” leaves an opening for the inclusion of unspecified ingredients even in major amounts.
[0084] It will be understood that the steps of methods discussed are performed in one embodiment by an appropriate processor (or processors) of a processing (i.e., computer) system, electronic device, or smartphone, executing stored instructions (code segments). It will also be understood that the invention is not limited to any particular implementation or programming technique and that the invention may be implemented using any appropriate techniques for implementing the functionality described herein. The invention is not limited to any particular programming language or operating system.
[0085] Further, it is one aspect of the contrast compensation method described herein to enhance images within or near the discernable spatial frequency range of the user's vision. Thus, while examples are provided using the contrast compensation method discussed above, it will be obvious to one skilled in the art that other algorithms or combinations of algorithms may be substituted which approximate this method in that images are enhanced within certain ranges of spatial frequencies. Thus, for example, other contrast enhancement functions, image transformations, and/or methods of characterizing the user's vision, or approximating a characterization of the user's vision fall within the scope of the present invention.
[0086] Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner as would be apparent to one of ordinary skill in the art from this disclosure in one or more embodiments.
[0087] Similarly, it should be appreciated that in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of this invention.
[0088] Thus, while there has been described what is believed to be the preferred embodiments of the invention, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the invention, and it is intended to claim all such changes and modifications as fall within the scope of the invention. For example, any formulas given above are merely representative of procedures that may be used. Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added or deleted to methods described within the scope of the present invention.