IRIS REGISTRATION METHOD AND SYSTEM
20250228708 · 2025-07-17
Abstract
A method that includes illuminating an eye with light at a first time and a second time and generating a first image of the eye based on the light that illuminates the eye at the first time. The method includes generating a second image of the eye based on the light that illuminates the eye at the second time. The method further includes positioning a laser source relative to the eye, wherein the laser source generates a therapeutic laser beam to be directed to the eye, wherein the first time is just prior to the therapeutic laser beam being directed to the eye and the second time is prior to the first time. The method further includes correcting orientation of the laser source relative to the eye based on a correlation function that is defined for the first and second images of the eye.
Claims
1-19. (canceled)
20. A method of aligning a coordinate system for a pre-treatment image of an eye and a treatment image of the eye, in a therapeutic laser system, the laser system comprising: a laser source comprising a laser, laser optics and a coordinate system for directing the delivery of a therapeutic laser beam from the laser source to an eye of a patient; a processor, the processor configured to execute instructions stored in a memory; and a control system, wherein the control system is in communication with the processor and the laser source; wherein the memory comprises a pre-treatment image of an iris of the eye of the patient and a treatment image of the iris of the eye of the patient; the method of aligning the coordinate system comprising the steps of: a) the laser system determining a correlation function for registration of the eye using a global correlation algorithm, wherein the correlation function is based upon the pre-treatment and treatment images; b) wherein the algorithm determines the correlation function without singling out particular points in the pre-treatment and treatment images; and, c) aligning the coordinate system based at least in part upon the correlation function.
21. The method of claim 20, wherein the global correlation algorithm: a) detects a pupil-iris boundary in both the pre-treatment and treatment images; b) filters and unwraps the iris images in both the pre-treatment and treatment images; c) converts both the pre-treatment and treatment unwrapped images from pixel representations to feature representations; d) measures a global correlation strength between the feature representations for a plurality of possible angles of cyclotorsion; and, e) identifies the angle of cyclotorsion that gives the strongest correlation.
22. The method of claim 20, wherein the eye in the pre-treatment image is dilated.
23. The method of claim 21, wherein the eye in the pre-treatment image is dilated.
24. The method of claim 20, wherein the eye in the treatment image is dilated.
25. The method of claim 21, wherein the eye in the treatment image is dilated.
26. The method of claim 20, wherein the eye in the treatment image is dilated and the eye in the pre-treatment image is dilated.
27. The method of claim 21, 23, or 25, wherein the pre-treatment image is of the eye when the patient was in an upright position.
28. The method of claim 21, 22, or 24, wherein the treatment image is of the eye when the patient was in a lying down position.
29. The method of claim 20, wherein the laser is a femtosecond laser.
30. A method of aligning a coordinate system in a therapeutic laser system, wherein the therapeutic laser system comprises an analyzer; the method of aligning the coordinate system comprising: a) unwrapping a pre-treatment image of an eye and a treatment image of the eye, whereby the unwrapping of each image provides a pre-treatment feature representation of the eye and a treatment feature representation of the eye; b) the analyzer determining a change in an orientation of an iris of the eye between the pre-treatment image and the treatment image, based upon the pre-treatment feature representation and the treatment feature representation, without singling out particular points in the pre-treatment and treatment images; and, c) aligning the coordinate system based upon the change in the orientation of the iris of the eye.
31. The method of claim 30, wherein: a) the pre-treatment feature representation is rectangular, having a top row representing a pupil boundary of the eye and a bottom row representing a sclera boundary of the eye; b) the treatment feature representation is rectangular, having a top row representing the pupil boundary of the eye and a bottom row representing the sclera boundary of the eye; and, c) the dimensions of the rectangular images represent an angle and a distance from the pupil boundary.
32. The method of claim 30, wherein aligning the coordinate system comprises rotating the coordinate system.
33. The method of claim 31, wherein aligning the coordinate system comprises rotating the coordinate system.
34. The method of claim 30, 31 or 32, wherein the pre-treatment image of the eye is of a dilated pupil.
35. The method of claim 30, 31 or 33, wherein the treatment image of the eye is of a dilated pupil.
36. The method of claim 30 or 31, wherein the treatment image of the eye is of a dilated pupil and the pre-treatment image of the eye is of a dilated pupil.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DESCRIPTION OF THE EMBODIMENTS
[0020] As schematically shown in
[0021] In communication with the laser source 102 and laser control system 104 is an analyzer 110. The analyzer 110 includes a light source 112 that illuminates the eye 108. One or more detectors or cameras 114 receive light reflected off the eye 108 and generate images of the eye 108. One image of the eye 108 is a pre-treatment image, in that it is taken prior to the patient's eye 108 being subjected to the therapeutic laser beam 106. A second image of the eye 108 is a treatment image and is taken substantially at the time the eye 108 is treated by the therapeutic laser beam 106. The pre-treatment and treatment images are stored in a recording medium, such as a memory 116, and are processed in a processor 118, which is in communication with the controller 104, memory 116 and light source 112. An example of an analyzer 110 that can be used is the Topcon CA-200F Corneal Analyzer manufactured by Topcon of Japan.
[0022] The processor 118 executes instructions stored in the memory 116 so that an algorithm is performed that takes a very different approach from existing algorithms. The algorithm proposed here is a global correlation algorithm, in which the registration is based on a correlation function that is defined for the pre-treatment and treatment images without singling out particular points in the iris. In operation, the eye 108 is imaged by the analyzer 110 prior to drug-induced dilation. Next, the eye 108 undergoes a laser procedure, such as cataract surgery, using the laser source 102 and laser control system 104. The basic steps/processes of the algorithm 200 are schematically shown in
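The global-correlation idea described above can be illustrated with a minimal NumPy sketch. This is a toy stand-in, not the patented implementation: the whole unwrapped iris texture of the pre-treatment image is correlated against every circular column shift of the treatment texture (each shift a candidate cyclotorsion angle), with no landmark points singled out.

```python
import numpy as np

def global_registration(pre_unwrapped, treat_unwrapped):
    """Score every circular shift (candidate cyclotorsion angle) of the
    treatment texture against the pre-treatment texture; return the best
    angle in degrees plus the full score curve."""
    n = pre_unwrapped.shape[1]                  # columns = angular samples
    a = pre_unwrapped - pre_unwrapped.mean()
    scores = []
    for s in range(n):                          # every candidate shift
        b = np.roll(treat_unwrapped, s, axis=1)
        b = b - b.mean()
        scores.append(float((a * b).sum() /
                      (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)))
    best = int(np.argmax(scores))
    return best * 360.0 / n, scores             # degrees per column shift

# usage: recover a known 40-degree cyclotorsion on a synthetic texture
rng = np.random.default_rng(0)
pre = rng.random((16, 90))                      # 90 columns -> 4 deg/column
treat = np.roll(pre, -10, axis=1)               # simulate cyclotorsion
angle, scores = global_registration(pre, treat)
```

Because every column shift is scored, the result is a full correlation curve rather than a single point match, which is what later processes use for confidence estimation.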
[0029] In operation, the algorithms for processes 202-212 listed above are stored in the memory 116 as computer-executable instructions, and the processor 118 executes the instructions to process the pre-treatment and treatment images and generate a signal that corrects the orientation of the therapeutic laser beam. This signal is sent to the controller 104, which controls the optics 102 and laser source 103 so as to generate a correctly oriented laser beam 106.
Boundary Detection-Process 202
[0030] The easiest boundary to find is the pupil-iris boundary, as this boundary is extremely strong and the pupil itself is uniformly dark. An elliptical fit to the boundary is first found by approximating the center with a histogram method, performing a radial edge filter from this center on edges extracted with the standard Canny algorithm, extracting up to 4 circles with a RANSAC algorithm, and combining matching circles together into an elliptical fit. An additional algorithm, essentially a simplified implementation of Active Contours, or Snakes, is used to fine-tune the result further. This algorithm takes as input the image and the previously found elliptical fit to the pupil boundary, and explores the image in the neighborhood of the boundary at several values of theta, finding the location that maximizes the radial component of the gradient of intensity values in the image for each theta. This builds a list of points that describes the boundary point by point in polar coordinates (with the origin remaining the center of the previously found ellipse). A simple Gaussian smoothing is then performed on this list of points to enforce continuity. The smoothed list of points is then taken to be the pupil boundary.
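A NumPy sketch of the simplified snakes refinement just described. The search width, angular sampling, and smoothing sigma are illustrative assumptions, and a circular initial fit stands in for the ellipse:

```python
import numpy as np

def refine_boundary(image, center, r0, n_theta=180, search=6, sigma=2.0):
    """Simplified active-contour ('snake') pass: for each angle, pick the
    radius near the initial fit that maximizes the radial intensity
    gradient, then circularly Gaussian-smooth the radii for continuity."""
    cy, cx = center
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    radii = np.empty(n_theta)
    for i, t in enumerate(thetas):
        def sample(r):  # image intensity at polar point (r, t), clamped
            y = min(max(int(round(cy + r * np.sin(t))), 0), image.shape[0] - 1)
            x = min(max(int(round(cx + r * np.cos(t))), 0), image.shape[1] - 1)
            return image[y, x]
        candidates = r0 + np.arange(-search, search + 1)
        # radial gradient: intensity just outside minus just inside
        grads = [sample(r + 1) - sample(r - 1) for r in candidates]
        radii[i] = candidates[int(np.argmax(grads))]
    # circular Gaussian smoothing of the radii enforces continuity
    k = np.exp(-0.5 * (np.arange(-8, 9) / sigma) ** 2)
    k /= k.sum()
    padded = np.concatenate([radii[-8:], radii, radii[:8]])
    smooth = np.convolve(padded, k, mode="valid")
    return thetas, smooth

# usage: refine an initial radius-10 guess toward a true radius-12 dark pupil
img = np.ones((64, 64))
yy, xx = np.ogrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 <= 12 ** 2] = 0.0
thetas, radii = refine_boundary(img, (32.0, 32.0), 10)
```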
[0031] To find the iris-sclera boundary in the diagnostic image of
[0032] To find the iris-sclera boundary in the treatment image, the ellipse describing the limbus in the diagnostic image is transferred to the treatment image by scaling the two radii of the ellipse according to the differing resolutions of the two cameras, assuming no cyclotorsion in placing the axis of the ellipse, and assuming that in the treatment image the limbus will be approximately concentric with the dilated pupil. This constitutes a good initial approximation, which is then improved upon by first using the same snakes algorithm that is used for the pupil boundary and then fitting an ellipse to the resulting set of points.
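The transfer of the diagnostic-image limbus ellipse into the treatment image reduces to a scale and re-center. A hypothetical sketch (names and the dict layout are illustrative, not from the patent):

```python
def transfer_limbus(ellipse_diag, scale, pupil_center_treat):
    """Scale the two ellipse radii by the camera-resolution ratio, keep the
    axis orientation (no cyclotorsion assumed at this stage), and center
    the ellipse on the dilated pupil found in the treatment image."""
    (a, b), axis_angle = ellipse_diag
    return {"center": pupil_center_treat,
            "radii": (a * scale, b * scale),
            "angle": axis_angle}

# usage: diagnostic radii 100/90 px, treatment camera at half resolution
limbus = transfer_limbus(((100.0, 90.0), 15.0),
                         scale=0.5, pupil_center_treat=(320, 240))
```

This only produces the initial approximation; per the text, the snakes pass and a fresh ellipse fit then improve on it.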
[0033] Often, images taken at a diagnostic device, such as analyzer 110, have some degree of eyelid or eyelash interference concealing a portion of the iris. To mask out these regions from consideration in the registration algorithm, eyelid/iris boundaries must be segmented in an image obtained from analyzer 110, such as shown in
[0034] If the threshold for minimum average intensity is met in any one of these three tests, the pixel remains white; otherwise the pixel is made black. Then, a circular mask is applied to mask out areas that are too close to the top and bottom borders of the image to be considered, and the classical erode algorithm is applied to thin out the eyelid/eyelash interference region as well as to get rid of any lingering undesired edges, resulting in the image of
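The erode step is classical binary morphology. A pure-NumPy stand-in for a 3x3 erosion (as provided by, e.g., OpenCV's erode; this sketch is not the patent's implementation):

```python
import numpy as np

def erode(mask, it=1):
    """Binary erosion with a 3x3 structuring element: a pixel survives
    only if it and all 8 neighbors are set. Repeated passes thin the
    eyelid/eyelash interference mask."""
    m = mask.astype(bool)
    for _ in range(it):
        p = np.pad(m, 1, constant_values=False)
        m = (p[1:-1, 1:-1]
             & p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
             & p[:-2, :-2] & p[:-2, 2:] & p[2:, :-2] & p[2:, 2:])
    return m

# usage: a 5x5 white region thins to its 3x3 interior after one pass
mask = np.zeros((9, 9), dtype=bool)
mask[2:7, 2:7] = True
thinned = erode(mask)
```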
[0035] Finally, a bottom-up filter is applied to the upper eyelid region resulting in the image of
Filtering and Unwrapping the Iris-Process 204
[0036] The iris during dilation is approximated by a rubber sheet model, such that the iris in the non-dilated eye is assumed to be essentially a stretched-out version of the iris in the dilated eye. In this approximation, a pseudopolar mapping is carried out to unwrap the iris into a rectangular image in which the dimensions represent angle and distance from the inner (pupil) boundary. If the boundary detection is perfect, then the top row of this image will perfectly represent the pupil boundary and the bottom row will perfectly represent the sclera boundary. The size of the averaging area used to fill each pixel in the unwrapped image increases linearly as a function of distance from the pupil center. There is technically information loss associated with this approach, with the amount of loss increasing with distance from the pupil center. However, this loss does not have any noticeable impact, and in fact running registration algorithms on these images rather than the original images results in both a cleaner implementation and a faster run time.
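A minimal NumPy sketch of the rubber-sheet unwrap, assuming per-angle boundary radii are already known from process 202 (nearest-neighbor sampling stands in for the area averaging described above):

```python
import numpy as np

def unwrap(img, center, pupil_r, limbus_r, n_theta=360, n_r=32):
    """Pseudopolar 'rubber sheet' unwrap: row 0 samples the pupil boundary,
    the last row samples the limbus, and intermediate rows interpolate
    linearly between the two boundaries at each angle."""
    cy, cx = center
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    fracs = np.linspace(0.0, 1.0, n_r)
    out = np.empty((n_r, n_theta))
    for j, t in enumerate(thetas):
        radii = pupil_r[j] + fracs * (limbus_r[j] - pupil_r[j])
        ys = np.clip(np.round(cy + radii * np.sin(t)).astype(int), 0, img.shape[0] - 1)
        xs = np.clip(np.round(cx + radii * np.cos(t)).astype(int), 0, img.shape[1] - 1)
        out[:, j] = img[ys, xs]
    return out

# usage: unwrap a synthetic radial image between constant boundaries
yy, xx = np.ogrid[:101, :101]
img = np.hypot(yy - 50, xx - 50)           # pixel value = distance from center
p = np.full(360, 10.0)                     # pupil boundary radius per angle
l = np.full(360, 20.0)                     # limbus radius per angle
uw = unwrap(img, (50, 50), p, l)
```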
[0037] After unwrapping, the images are filtered with a Difference-Of-Gaussians (DOG) technique. This technique simply involves subtracting a severely blurred version of the image from a slightly blurred version of the image, which is in effect a band pass filter in the frequency domain. The result is increased signal strength of the iris fibers. An example of an unwrapped, DOG filtered iris is shown in
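The DOG filter is simple to express directly. A NumPy sketch (the two sigma values are illustrative assumptions):

```python
import numpy as np

def gaussian_blur_1d(img, sigma, axis):
    """Separable Gaussian blur along one axis with edge padding."""
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    f = lambda v: np.convolve(np.pad(v, r, mode="edge"), k, mode="valid")
    return np.apply_along_axis(f, axis, img)

def dog_filter(img, sigma_small=1.0, sigma_large=4.0):
    """Difference-of-Gaussians band pass: slightly blurred minus severely
    blurred, boosting mid-frequency iris-fiber detail."""
    blur = lambda s: gaussian_blur_1d(gaussian_blur_1d(img, s, 0), s, 1)
    return blur(sigma_small) - blur(sigma_large)

# usage: a mid-frequency stripe pattern survives; flat regions go to zero
x = np.arange(64)
img = np.tile(np.sin(2 * np.pi * x / 8), (16, 1))
filtered = dog_filter(img)
```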
Feature Extraction-Process 206
[0038] A feature vector is built for each unwrapped iris image, with the content of the feature vector being derived from gradient information from the structure tensor and from Gabor filters. Thus, the components of the image feature vector are themselves local feature vectors with one Gabor filter component and one structure tensor component, and each of these two components is itself a vector. Gabor filters are used by J. Daugman in his iris recognition algorithms (see http://www.cl.cam.ac.uk/jgd1000/csvt.pdf and U.S. Pat. No. 5,291,560, the entire contents of which are incorporated herein by reference). The information extracted from the Gabor filter is a point in the complex plane which is computed by convolving a 2D Gabor wavelet with an area of the iris, according to the formula below:
h = ∫∫ I(ρ, φ) e^(−iω(θ₀−φ)) e^(−(r₀−ρ)²/α²) e^(−(θ₀−φ)²/β²) ρ dρ dφ

where α, β, and ω are wavelet size and frequency parameters, (r₀, θ₀) is the point about which the area of the iris being considered is centered, and I is the intensity value of the unwrapped iris image at a given point. In discrete form, this equation is applied as follows:
h ≈ Σ (θ = θ₋ to θ₊) Σ (r = r₋ to r₊) I(r, θ) e^(−iω(θ₀−θ)) e^(−(r₀−r)²/α²) e^(−(θ₀−θ)²/β²)

where θ₋, θ₊, r₋, and r₊ denote the boundaries of the shell-like region over which the computation is done. For unwrapped images, θ becomes x, r becomes y, and the region is rectangular rather than shell-like. This allows for a simple and computationally fast implementation, which is to set r₀ and θ₀ to zero and fill a 2D array with values according to the above equations with the image intensity values removed, for each of the real part and the imaginary part, and then convolve these 2D arrays with the images. This yields, at every pixel of each image, a 2D vector with components for the real and imaginary parts of the result of centering a Gabor filter on that pixel.
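The "fill two 2D arrays and convolve" implementation can be sketched in NumPy as follows. The α, β, ω values and kernel extent are illustrative assumptions, and a direct sliding-window correlation stands in for the convolution (the kernel flip is omitted in this sketch):

```python
import numpy as np

def gabor_kernels(alpha=3.0, beta=3.0, omega=0.5, half=8):
    """Real and imaginary 2D Gabor kernels on the rectangular (unwrapped)
    grid, with r0 = theta0 = 0 and the image intensities removed, per the
    simplification described above."""
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    env = np.exp(-(y / alpha) ** 2 - (x / beta) ** 2)   # Gaussian envelope
    return env * np.cos(omega * x), env * np.sin(omega * x)

def convolve2d_same(img, k):
    """Direct 'same'-size 2D sliding-window correlation, pure NumPy."""
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

# usage: per-pixel (real, imaginary) response on a matched-frequency texture
x = np.arange(64)
img = np.tile(np.cos(0.5 * x), (16, 1))
kr, ki = gabor_kernels()
mag = np.hypot(convolve2d_same(img, kr), convolve2d_same(img, ki))
```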
[0039] Similarly, the structure tensor is used to extract gradient information in the local neighborhood of each pixel. The entries in the 2×2 matrix representing the structure tensor are filled by averaging the derivative-based quantity over the entire neighborhood. Then, the eigenvalues and eigenvectors are extracted from the resulting matrix. The eigenvectors and eigenvalues give the dominant gradient direction and a measure of the strength of the gradient, respectively.
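A NumPy sketch of the structure-tensor computation for a single neighborhood (here the whole patch; in practice this would be repeated per pixel over a local window):

```python
import numpy as np

def structure_tensor(patch):
    """2x2 structure tensor of a patch: average the outer products of the
    intensity gradient over the neighborhood. The eigenvector of the
    largest eigenvalue gives the dominant gradient direction; the
    eigenvalue measures its strength."""
    gy, gx = np.gradient(patch.astype(float))
    J = np.array([[np.mean(gx * gx), np.mean(gx * gy)],
                  [np.mean(gx * gy), np.mean(gy * gy)]])
    w, v = np.linalg.eigh(J)        # eigenvalues in ascending order
    return w[1], v[:, 1]            # dominant strength, (x, y) direction

# usage: a horizontal ramp has its dominant gradient along x
patch = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
strength, direction = structure_tensor(patch)
```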
Measuring Correlation Strength-Process 208
[0040] Consider the filtered, unwrapped image taken at the time of surgery I₁ and the filtered, unwrapped image taken prior to surgery I₂. Define an inner product for the structure part of the feature vectors of the two images given a particular hypothesized angle of cyclotorsion and a radial shear function δ(x) (to allow room for errors in boundary detection and the rubber sheet model approximation) as follows:
Similarly, we define an inner product for the Gabor wavelet part of the feature vectors as follows:
With
[0041] When doing this computation, w needs to be large enough to prevent δ(x) from being completely chaotic, but not so large as to defeat the purpose of allowing a varying radial offset. For example, a value of 10 has been observed to work well. Once the function δ(x) is computed for each angle, the inner products as defined in the equations at the beginning of this section can readily be computed.
[0042] A strong correlation corresponds to large values of both inner products. The values of both inner products lie in [−1, +1]; thus, the net correlation is taken as the average of the two inner products, computed over a range of ±18 degrees, a reasonable biological limit for cyclotorsion.
An example of correlation measurements as a function of proposed cyclotorsion angle is shown in
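The correlation-versus-angle measurement can be sketched as follows. This is a toy: a normalized dot product stands in for the structure-tensor and Gabor inner products, a greedy per-window vertical offset stands in for the radial shear function δ(x), and the window and shear limits are illustrative:

```python
import numpy as np

def correlation_curve(f1, f2, max_shift, shear=2, window=10):
    """Net correlation versus hypothesized cyclotorsion, expressed as a
    column shift of the unwrapped feature images. For each angle, a small
    per-window radial (vertical) offset is also searched, absorbing
    boundary-detection and rubber-sheet errors."""
    def inner(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() /
                     (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    n = f1.shape[1]
    scores = []
    for s in range(-max_shift, max_shift + 1):     # candidate cyclotorsion
        rolled = np.roll(f2, s, axis=1)
        best_cols = []
        for c0 in range(0, n, window):             # windowed columns
            seg1 = f1[:, c0:c0 + window]
            seg2 = rolled[:, c0:c0 + window]
            # pick the radial offset that best aligns this window
            cands = [inner(seg1, np.roll(seg2, d, axis=0))
                     for d in range(-shear, shear + 1)]
            best_cols.append(max(cands))
        scores.append(np.mean(best_cols))
    return np.arange(-max_shift, max_shift + 1), np.array(scores)

# usage: recover a 4-column cyclotorsion plus a 1-row radial shear
rng = np.random.default_rng(1)
f1 = rng.random((16, 72))
f2 = np.roll(np.roll(f1, 4, axis=1), 1, axis=0)
angles, scores = correlation_curve(f1, f2, max_shift=8)
```

The global maximum of the returned curve plays the role of curve C discussed below.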
Extracting and Applying the Angle of Cyclotorsion-Processes 210 and 212
[0043] The angle of cyclotorsion is the angle that produces the maximum correlation strength between the extracted features, which corresponds to the global maximum of the curve C in
[0044] Where r is the ratio between the global maximum and the next largest local maximum present in the correlation function after Gaussian smoothing. For example, in the smoothed correlation function for the plot in
[0045] In the left-handed natural coordinate system of the images, the computed cyclotorsion angle indicates which value of angle in the topographer image was lined up with the zero angle in the treatment image. In the right-handed coordinate system (where counterclockwise corresponds to positive values of theta), this is equivalent to how the topographer image would be rotated to line up with the treatment image. This is the number needed, because treatment was planned in the frame of reference of the topographer image. Thus, for a cyclotorsion angle of 9.5 degrees, the compass defining the angular coordinate system on the laser system should be rotated by +9.5 degrees. This angle of rotation is calculated by the processor 118 and conveyed to the controller 104, which rotates the laser beam 106.
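Applying the correction in the right-handed frame is a standard plane rotation of the planned treatment coordinates. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def rotate_point(x, y, deg):
    """Rotate a treatment-plan point counterclockwise by the correction
    angle in degrees (right-handed frame, CCW positive)."""
    t = np.deg2rad(deg)
    return (x * np.cos(t) - y * np.sin(t),
            x * np.sin(t) + y * np.cos(t))

# usage: rotate a plan point by the +9.5 degree compass correction
x_corr, y_corr = rotate_point(1.0, 0.0, 9.5)
```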
[0046] As shown in
[0047] Some significant statistical validation is possible as well. One example method is to extend the allowable range of cyclotorsion angles all the way out to ±180 degrees, verify that the global maximum remains at the same location, and examine how strong a maximum it is relative to the correlation strengths of all other cyclotorsion angles. A couple of examples are shown in
[0048] Accounting for cyclotorsion is of paramount importance in any refractive surgery focused on correcting and/or compensating for astigmatism. An algorithm is proposed that should be able to solve this problem in cataract surgery either in the context of astigmatism-correcting incisions or assistance in toric IOL positioning. One exciting aspect of applying iris registration to cataract surgery is the prospect of releasing the first iris registration algorithm that can accommodate drug induced pupil dilation.
[0049] From the foregoing description, one skilled in the art can readily ascertain the essential characteristics of this invention, and without departing from the spirit and scope thereof, can make various changes and/or modifications of the invention to adapt it to various usages and conditions.