SYSTEM AND METHOD FOR VISUALIZATION OF OCULAR ANATOMY
20230137387 · 2023-05-04
Inventors
Cpc classification
International classification
Abstract
A system for visualization of eye anatomy includes at least one camera having a view vector along a first axis when in a first position, a housing to which the camera is coupled, wherein the housing engages the head of a patient such that the camera is positioned adjacent a patient's eye, and an actuator that moves the camera from the first position to a second position with a view vector along a second axis that is offset from the first axis. A method of visualization of eye anatomy includes engaging a patient's head with a housing, positioning at least one camera coupled to the housing adjacent an eye, wherein the camera has a view vector along a first axis when in a first position, and moving the camera to a second position with a view vector along a second axis offset from the first axis.
Claims
1-20. (canceled)
21. A method of visualization of eye anatomy, the method comprising: illuminating, using an illumination device emitting electromagnetic radiation in the non-visible spectrum, an eye; capturing, using a camera, a first image of the eye along a first axis; adjusting the relative position of the camera to the eye; capturing, using the camera, a second image of the eye along a second axis, the second axis being offset from the first axis; and combining at least part of the first image and at least part of the second image to create a composite image having a greater field of view than the first image and the second image.
22. The method of claim 21, wherein the illumination device emits electromagnetic radiation in the ultraviolet spectrum, the infrared spectrum, or the near infrared spectrum.
23. The method of claim 21, further comprising displaying the composite image.
24. The method of claim 21, wherein the composite image has a field of view that is at least 180 degrees.
25. The method of claim 24, wherein the composite image has a field of view that is 200 degrees or more.
26. The method of claim 21, wherein adjusting the relative position of the camera to the eye comprises adjusting the relative angle of the camera to the eye.
27. The method of claim 21, wherein adjusting the relative position of the camera to the eye comprises adjusting the relative distance between the camera and the eye.
28. The method of claim 21, wherein adjusting the relative position of the camera to the eye comprises moving the camera in a direction substantially parallel to a vertical axis of the eye.
29. The method of claim 21, wherein adjusting the relative position of the camera to the eye comprises moving the camera in a direction substantially parallel to a horizontal axis of the eye.
30. The method of claim 21, wherein combining the at least part of the first image and the at least part of the second image to create the composite image comprises stitching the at least part of the first image and the at least part of the second image to create the composite image.
31. The method of claim 21, wherein the eye is a first eye, the method further comprising: illuminating, using another illumination device emitting electromagnetic radiation in the non-visible spectrum, a second eye; capturing, using another camera, a third image of the second eye along a third axis; adjusting the relative position of the other camera to the second eye; capturing, using the other camera, a fourth image of the second eye along a fourth axis, the fourth axis being offset from the third axis; and combining at least part of the third image and at least part of the fourth image to create a second composite image having a greater field of view than the third image and the fourth image.
32. The method of claim 31, wherein the camera and the other camera move together as a unit.
33. The method of claim 21, further comprising tracking, based on at least the first image and the second image, movement of at least one structure or material within the eye.
34. The method of claim 21, wherein the composite image is a three-dimensional image of the eye.
35. The method of claim 21, further comprising: adjusting the relative position of the camera to the eye; and capturing, using the camera, a third image of the eye along a third axis, the third axis being offset from the first axis and the second axis, wherein combining the at least part of the first image and the at least part of the second image to create the composite image comprises combining the at least part of the first image, the at least part of the second image, and at least part of the third image to create the composite image.
36. The method of claim 21, wherein the camera comprises a first camera that captures an image in a first non-visible spectrum and a second camera that captures another image in a second non-visible spectrum that is different from the first non-visible spectrum.
37. The method of claim 21, wherein the illumination device is a light-emitting diode, a laser, or a fiber optic cable.
38. The method of claim 21, wherein the camera comprises a lens and an imaging sensor.
39. The method of claim 38, wherein the imaging sensor is a CMOS sensor or a CCD sensor.
40. The method of claim 21, wherein the illumination device is positioned adjacent the camera.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
[0060] The basic components of one exemplary embodiment of a system for visualization of eye anatomy in accordance with the invention are illustrated in
[0061] The system and method of the present invention uses an imager that is manually or mechanically articulatable by the physician to obtain a wide view angle of at least 180 degrees, thereby allowing examination of the periphery of the retina to detect early signs of diabetic retinopathy. It is understood that the system may also be used to image the eye anatomy for any other therapeutic and/or diagnostic purpose.
[0062] In some cases, it is useful to detect, observe and analyze various tissue deposits in the eye to diagnose and treat various diseases of the eye and other organs. For example, depositions of lipids, crystals, proteins and other artifacts in the eye may provide useful information regarding various diseases of the body. The ability to detect, measure and analyze these deposits by visualizing the anatomy of the eye creates an opportunity to detect and diagnose various diseases of the eye and other organs and systems.
[0063] The present invention can identify changes in the geography of the eye, including atrophy, emaciation, and swelling. Furthermore, the present invention allows for detection and analysis of various conditions of the eye, such as, for example, hydration, innervations, inflammation, circulation, nerve conduction, etc. Each of these conditions is typically caused by one or more diseases, and being able to visualize and measure these conditions in the eye provides very useful information regarding the cause, extent and diagnosis of various diseases of the body.
[0064] As shown in
[0065] As illustrated in
[0066] Any desirable configurations of the cameras (20) and the illumination devices (22) may be provided in accordance with the present invention. Some exemplary configurations are shown in
[0067] The camera (20) may comprise any imaging device suitable for viewing the target area, such as a coherent fiber bundle or appropriate optical element and lens assembly in conjunction with an imaging sensor (e.g., CMOS, CCD), having a sufficiently small outer diameter, preferably about 0.75 mm-2.5 mm, and more preferably about 1 mm or less. For example, the system of the present invention may utilize a proprietary camera, such as is described in U.S. Pat. No. 8,226,601 to Gunday et al. and U.S. Pat. Nos. 8,597,239 and 8,540,667 to Gerrans et al. It is noted that, in some embodiments, only one camera may be used to image the anatomy of one of the patient's eyes.
[0068] One advantageous camera embodiment is illustrated in
[0069] The camera (60) further includes an imaging sensor (68) positioned proximally from the lenses (64) and (66). Any type of imaging sensor may be used. The imaging sensor (68) is coupled to a sensor mount (70) to fixate the sensor inside the housing. In one advantageous embodiment, a CMOS sensor is used. The housing (62) also has one or more illumination devices (72), e.g. LEDs, lasers, and/or fiber optic cables, positioned distally from the lens. It is understood that other types of illumination devices may be used. The illumination devices emit various types of light, depending on the desired application. For example, the illumination devices may emit ambient light, visible spectrum light, ultraviolet light, infrared light, near infrared light, etc. A distal end of the housing (62) has a pupil relay system (74) that seals the distal end of the housing to protect the camera components positioned in the housing.
[0070] It is understood that the camera design illustrated in
[0071] As described above, the system of the present invention allows examination of the eye anatomy using light of various spectrums and various wavelengths. This allows for detection, visualization and characterization of various tissues, structures, and molecular compounds that may be present in the eye, which in turn leads to diagnosis of various eye and body diseases. This is due to the fact that various tissues and structures that may be present in the eye absorb and/or deflect light of various spectrums and/or wavelengths in different ways. Analysis of the light scattering thereby provides information about particular tissues and structures present in the eye. The system of the present invention also allows for detection and characterization of changes in eye anatomy over time, which may be caused by various diseases. The system is capable of measuring color saturation of the light emitted onto the target tissues and also measures scattering of light deflected from the target tissues in the eye.
[0072] As noted above, the system of the present invention may utilize a plurality of illumination devices or light sources. In some embodiments, all of the light sources emit light of the same spectrum/wavelength. In additional embodiments, each of the plurality of light sources emits light of a different spectrum/wavelength than the light emitted by other light sources. This allows for detection and characterization of various structures and conditions inside the eye, as described above.
[0073] In some advantageous embodiments, the system of the present invention utilizes a continuous wave/stream of light. In other advantageous embodiments, the system uses a pulsed light, wherein the light emitting devices positioned on the system adjacent the cameras emit pulses of light at a desired frequency. The cameras may capture image data after each pulse of light, or at particular intervals after a certain number of light pulses. In further advantageous embodiments, the same light sources may emit light in both continuous wave and pulsed waves, as desired, and/or some of the light sources may emit light continuously and other light sources may emit light in pulsed waves.
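The pulse-synchronized capture scheme described above can be sketched in a few lines; this is a minimal Python illustration, where `PulsedIlluminator` and the `capture_fn` callback are hypothetical stand-ins for the system's LED driver and camera interface, not part of the specification:

```python
class PulsedIlluminator:
    """Hypothetical driver that emits one light pulse per trigger."""
    def __init__(self, wavelength_nm):
        self.wavelength_nm = wavelength_nm
        self.pulse_count = 0

    def pulse(self):
        self.pulse_count += 1

def capture_pulsed(illuminator, capture_fn, n_pulses, capture_every=1):
    """Fire n_pulses and capture a frame after every `capture_every`-th
    pulse, mirroring the 'after each pulse or at particular intervals'
    scheme described in the text."""
    frames = []
    for i in range(1, n_pulses + 1):
        illuminator.pulse()
        if i % capture_every == 0:
            frames.append(capture_fn())
    return frames

# Near-infrared source (850 nm is an illustrative wavelength),
# capturing after every second pulse:
led = PulsedIlluminator(wavelength_nm=850)
frames = capture_pulsed(led,
                        capture_fn=lambda: f"frame@{led.pulse_count}",
                        n_pulses=6, capture_every=2)
```

Setting `capture_every=1` recovers the capture-after-every-pulse mode; a continuous-wave source would simply bypass the pulse loop.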
[0074] Referring back to
[0075] In one advantageous embodiment, the processor (26) is connected to the cameras (20) via a cable or wired connection (28). In additional advantageous embodiments, the processor (26) is connected to the cameras (20) via a wireless, e.g. cellular or satellite, connection (30), which is desirable if a physician is located remotely from the patient whose eye anatomy is being examined. For example, the system of the present invention may be used by a patient in his or her home to capture images of the eye anatomy and then wirelessly transmit the data to the remotely located physician for analysis. Or the system of the present invention may be used by physicians located in field conditions, such as on a battlefield, where there is no time or accessibility to analyze the captured eye anatomy data. The physicians utilize the cameras to capture the image data and then send it wirelessly to remote locations for analysis. In further advantageous embodiments, the captured image data may be stored in cloud storage, meaning that the digital data is stored in logical pools, with the physical storage typically spanning multiple servers managed by a hosting company. This way, the data may be easily accessed from any location connected to the cloud storage, such as physicians' and patients' personal computers, tablets and smart phones.
[0076] Furthermore, the cameras (20) and/or the processor (26) may be connected to an external storage device, a removable storage device, and/or to an internet port. The image data captured by the cameras is stored on the storage device (44) and may be later retrieved by a user. In other advantageous embodiments, the processor (26) may have an internal storage device. Any suitable storage device may be used in accordance with the present invention.
[0077] In some embodiments, the image data is compressed before it is transmitted to the processor for processing or stored. In other words, the imaging data is encoded using fewer bits than the originally captured data to reduce resource usage, such as data storage space or transmission capacity. Once the compressed data is received by the processor, it is decompressed before it is displayed to the user to maintain the original quality of the captured images.
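The lossless compress-then-decompress round trip described above can be illustrated with Python's standard `zlib` codec, used here only as a stand-in for whatever encoder the system actually employs:

```python
import zlib

def compress_frame(raw: bytes) -> bytes:
    # Encode the frame using fewer bits than the captured data,
    # reducing storage space and transmission capacity.
    return zlib.compress(raw, level=6)

def decompress_frame(blob: bytes) -> bytes:
    # Restore the original bytes before display, so the original
    # quality of the captured image is maintained.
    return zlib.decompress(blob)

# A flat 10,000-pixel 8-bit frame compresses very well:
raw = bytes([40] * 10000)
blob = compress_frame(raw)
assert decompress_frame(blob) == raw   # lossless round trip
assert len(blob) < len(raw)            # smaller payload to transmit
```

A lossy codec (e.g. JPEG) would trade some fidelity for a smaller payload; the text's requirement to "maintain the original quality" points to a lossless scheme like the one sketched here.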
[0078] The system (10) may further include a display (32) coupled to the processor (26) via a cable connection (34) or via a wireless connection (36). The display (32) receives imaging data processed by the processor (26) and displays the image of the person's eye anatomy in 2-D format and 3-D format to a physician. Any suitable type of a display may be used in accordance with the present invention.
[0079] In one advantageous embodiment, such as shown in
[0080] The system (10) of the present invention further includes an actuator coupled to each camera for moving the camera in different directions. In particular, the camera can be moved from a first position, in which the camera has a view vector along a first axis, to a second position, in which the camera has a view vector along a second axis that is offset from the first axis, as further discussed below in reference to
[0081] In one advantageous embodiment, the actuator is capable of moving the cameras in a direction substantially parallel to at least one of a horizontal axis (50) of the eye (46) and a vertical axis (48) of the eye (46), as shown in
[0082] As shown in
[0083] As shown in
[0084] As shown in
[0085] In some advantageous embodiments, one or more mosaic cameras are used to capture an image of the eye anatomy. The mosaic cameras have the structure described above or any other suitable structure. Each camera (340, 350) captures an image (310, 320) of the eye anatomy. The captured images are then sent to the processor, which processes the image data and displays the image to the user on a display. The images (310, 320) from each camera are laid over one another to produce a single image (330), as shown in
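One simple way to lay the per-camera frames over one another, assuming they are already registered to the same pixel grid, is a per-pixel average; this NumPy sketch (the function name `overlay_mosaic` is illustrative, not from the specification) shows the idea:

```python
import numpy as np

def overlay_mosaic(images):
    """Lay the per-camera images over one another and average them
    into a single combined frame."""
    stack = np.stack([img.astype(np.float64) for img in images])
    return stack.mean(axis=0).astype(np.uint8)

# Two 4x4 grayscale frames of the same region from two cameras:
a = np.full((4, 4), 100, dtype=np.uint8)
b = np.full((4, 4), 200, dtype=np.uint8)
combined = overlay_mosaic([a, b])  # every pixel averages to 150
```

Averaging also suppresses per-camera sensor noise; a production system would first align the frames before combining them.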
[0086] In additional embodiments, images captured by two or more cameras are “stitched” together when displayed to the user to provide for a more detailed image of the eye anatomy. For example, as shown in
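A minimal stitching sketch, assuming the horizontal offset between the two frames is already known (a real system would first register the frames, e.g. by feature matching), blends the shared columns with a linear cross-fade so the seam is not visible:

```python
import numpy as np

def stitch_horizontal(left, right, overlap):
    """Stitch two grayscale frames that share `overlap` columns into
    one wider composite, cross-fading over the shared strip."""
    h, w = left.shape
    out_w = w + right.shape[1] - overlap
    out = np.zeros((h, out_w), dtype=np.float64)
    out[:, :w] = left
    out[:, out_w - right.shape[1]:] = right
    # Linear cross-fade across the shared columns.
    alpha = np.linspace(0.0, 1.0, overlap)
    out[:, w - overlap:w] = ((1 - alpha) * left[:, w - overlap:]
                             + alpha * right[:, :overlap])
    return out.astype(np.uint8)

left = np.full((2, 6), 50, dtype=np.uint8)
right = np.full((2, 6), 50, dtype=np.uint8)
pano = stitch_horizontal(left, right, overlap=2)  # 2x10 composite
```

The composite is wider than either input, which is how overlapping views from angled cameras build up the wide field of view described in the claims.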
[0087] As shown in
[0088] One or more cameras (20) of the present invention may capture multiple images of the eye by “scanning” the eye. In other words, the camera (20) moves across the eye taking a series of consecutive images with an autofocus. These images are then displayed separately, or overlaid over one another to provide a 3D image, or are combined to provide a composite image, as described above. In one embodiment, the camera (20) moves in a direction substantially parallel to a horizontal axis of the eye and/or a vertical axis of the eye, as shown in
[0089] Two or more cameras may also be used to “scan” the eye. In some embodiments, two cameras are used wherein each camera is positioned at a different angle towards the eye. The cameras may start in a position wherein their view vectors overlap inside the eye, such as shown in
[0090] In another advantageous embodiment of the present invention, a stereo camera is used to visualize the eye anatomy. The stereo camera includes two or more lenses, each with a separate image sensor. This allows the camera to simulate human binocular vision, making it possible to capture three-dimensional images. The two or more image sensors are CMOS type sensors or any other suitable sensors, used together with one or more illumination sources. Each of the sensors captures an image from a different angle/position with respect to the eye. Then, the images are processed and displayed to the user as a single 3D image.
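The depth recovery underlying such a stereo pair follows the standard triangulation relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the two lenses, and d the disparity of a feature between the two sensors. This illustrative helper (the numeric values are examples, not from the specification) shows the arithmetic:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Triangulate depth from stereo disparity: Z = f * B / d.
    Larger disparity means the feature is closer to the camera."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# A feature seen 20 px apart by the two sensors,
# with f = 800 px and a 10 mm baseline:
z = depth_from_disparity(20, focal_px=800, baseline_mm=10)  # 400 mm
```

Computing Z per matched feature across the image yields the depth map from which the single 3D view is rendered.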
[0091] The device of the present invention utilizes a tracking system to track the motion of the eye and within the eye to adjust the cameras (20) to always obtain a clear and accurate image of the eye anatomy. Two different types of the tracking system are used. A first tracking system utilizes one or more cameras that track the motion of the eyeball itself. In other words, when the patient moves his or her eyeball to look in a different direction, the cameras automatically adjust to that movement. This is accomplished by locating and recording certain landmarks or biomarkers within the eye, such as, for example, the 3 o'clock and 5 o'clock positions of the pupil, and then tracking the movement of those landmarks or biomarkers to determine a new position. Any suitable tracking mechanism may be used to accomplish this step. Then, the cameras and/or the entire housing moves to adjust the position with respect to the eye. It is understood that any other suitable tracking points or landmarks within the eye may also be used in this system.
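The landmark-following step can be sketched as a small template search: locate the recorded landmark patch in the next frame and report the pixel shift for the actuator to compensate. This NumPy sketch uses sum-of-squared-differences matching; the function name, patch size, and search window are illustrative choices, not from the specification:

```python
import numpy as np

def track_landmark(prev_frame, next_frame, top_left, size, search=3):
    """Find the landmark patch from prev_frame in next_frame by
    exhaustive SSD search over a +/-`search` pixel window; return the
    (dy, dx) shift the camera actuator should follow."""
    y, x = top_left
    patch = prev_frame[y:y + size, x:x + size].astype(np.int64)
    best, best_shift = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            cand = next_frame[yy:yy + size, xx:xx + size].astype(np.int64)
            ssd = int(((cand - patch) ** 2).sum())
            if best is None or ssd < best:
                best, best_shift = ssd, (dy, dx)
    return best_shift

prev = np.zeros((16, 16), dtype=np.uint8)
prev[5:8, 5:8] = 255                 # bright 3x3 landmark
nxt = np.zeros((16, 16), dtype=np.uint8)
nxt[6:9, 7:10] = 255                 # same landmark shifted by (1, 2)
shift = track_landmark(prev, nxt, top_left=(5, 5), size=3)
```

The returned shift drives the actuator so the landmark stays centered; tracking two landmarks (e.g. the two pupil positions mentioned above) would additionally recover rotation.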
[0092] A second tracking system tracks any motion within the eye. For example, the system tracks dilation/contraction of an iris and/or pupil of the eye, or movement of protein or crystalline material or other structures within the eye. This is accomplished by focusing one or more cameras on a particular structure or tissue in the eye and then automatically adjusting the position of the cameras as the structures/tissues move within the eye to maintain focus on the moving structures and/or tissues. Again, any suitable tracking mechanism is used for this step. In advantageous embodiments, both the first and second tracking systems are used in combination to track movement of the eye and structures within the eye to obtain a clear and accurate image of the eye anatomy.
[0093] Various suitable actuators may be used with the system (10). In the embodiment shown in
[0094] The actuator further includes a controller (52) that communicates with the coupling members (16) and/or track (24) to enable manipulation of the cameras (20) by a user/physician. The controller (52) may communicate via a cable connection (51) or wirelessly (53). In some advantageous embodiments, the controller (52) actuates each of the coupling members (16) individually or as a unit. In additional advantageous embodiments, each of the cameras (20) has a separate actuator and is actuated separately from the other cameras.
[0095] One exemplary embodiment of the coupling member (16) is shown in
[0096] A distal end of the housing (82) has a ball-like shape and mates with a socket-like portion (84) of the coupling member. This ball and socket configuration of the actuator enables rotary movement of the cameras in all directions, as shown in
[0097] As discussed above and also shown in
[0098] When in use, the system (10) is positioned over the person's eye(s) and the cameras are placed adjacent to the eye(s). In some advantageous embodiments, a diameter of the iris is measured via any suitable measurement device. Data about the measured diameter is transmitted to a processor to determine a target opening. Based on this data, the processor then sends information to a controller for controlling actuation of the cameras to obtain wide angle view images of the eye anatomy.
[0099] As shown in
[0100] In other embodiments, a dynamic image (98) is shown on the screen (90), and a person is instructed to follow the movement of the image on the screen. While the person's eye(s) are following the dynamic object (98), one or more cameras are also actuated to move around and obtain various angles of view of the eye anatomy. Again, the cameras may be actuated separately at different angles or may move together as a unit. In one advantageous embodiment, the system may utilize software that enables the cameras to follow the image from the screen that is reflected from the eye(s).
[0101] Once the imaging data is captured by one or more cameras, the data is transmitted to the processor for processing. Then, the processed image data is transmitted to the display for viewing by the physician. In some advantageous embodiments, the image data is stored on the storage device for later retrieval.
[0102]
[0103] As shown in
[0104] Once the housing (110) is positioned at a desired distance from the eye(s), the camera housing (140) is actuated in directions substantially parallel to the vertical and horizontal axes of the eye to get a wide angle view of the eye anatomy. The actuation is controlled by the internal or external controller, as discussed above. While the camera housing (140) is being actuated, the chin rest (120) and the forehead rest (125) remain stationary to maintain the person's head (130) and eye(s) in the same position. If two camera housings are provided, each of the housings may move separately from the other, or the two housings may move together as a unit.
[0105] In some advantageous embodiments, as discussed in more detail above in connection with
[0106] Yet another exemplary embodiment of the system for visualization of eye anatomy of the present invention is shown in
[0107] The camera housing part (235) has one or more cameras (240) and one or more illumination devices (250) positioned therein, as shown in
[0108] As shown in
[0109] Once the camera housing (235) is positioned at a desired distance from the eye(s), the camera housing (235) may be actuated in a direction substantially parallel to the vertical axis of the eye, as shown in
[0110] It should be noted that, while only certain movements of the cameras (20) are described when discussing the illustrations of particular embodiments, any combination of the camera movements described in
[0111] It should be understood that the foregoing is illustrative and not limiting, and that obvious modifications may be made by those skilled in the art without departing from the spirit of the invention. Accordingly, reference should be made primarily to the accompanying claims, rather than the foregoing specification, to determine the scope of the invention.