Systems and methods for ocular anterior segment tracking, alignment, and dewarping using optical coherence tomography
09833136 · 2017-12-05
Assignee
Inventors
CPC classification
A61B3/107
HUMAN NECESSITIES
A61B3/0025
HUMAN NECESSITIES
A61B3/117
HUMAN NECESSITIES
G06T2207/10101
PHYSICS
International classification
A61B3/14
HUMAN NECESSITIES
A61B3/00
HUMAN NECESSITIES
A61B3/107
HUMAN NECESSITIES
A61B3/10
HUMAN NECESSITIES
A61B3/117
HUMAN NECESSITIES
Abstract
The present application discloses methods and systems to track the anterior segment while establishing a position of the delay which will permit good control of the placement of anterior segment structures. This allows accurate dewarping by maximizing the amount of corneal surface that is imaged as well as reducing or eliminating overlap between real and complex conjugate images present in frequency-domain optical coherence tomography. A method to dewarp surfaces given partial corneal surface information is also disclosed.
Claims
1. A method to dewarp image data of the anterior segment of an eye collected using an optical coherence tomographic instrument (OCT), said image data containing one or more observed fragments of one or more corneal surfaces, said method comprising: processing said image data to locate one or more observed fragments of a first corneal surface; determining an extended profile of said first corneal surface using the observed fragments of the first corneal surface and information related to the first corneal surface not contained in the image data; dewarping the image data based upon the determined extended profile of the first corneal surface; and, displaying to a user or storing the dewarped image data.
2. The method as recited in claim 1, further comprising: repeating the processing and determining steps for a second corneal surface, in which an extended profile for the second corneal surface is determined; and, using the extended profiles of both said first and second corneal surfaces to dewarp the image.
3. The method as recited in claim 1, in which the first corneal surface is either the anterior corneal surface or the posterior corneal surface.
4. The method as recited in claim 1, in which the processing step includes segmentation to locate said observed fragments; and, using the results of the segmentation to determine the extended profile of said first corneal surface.
5. The method as recited in claim 1, in which the extended profile for the first corneal surface is chosen from the group consisting of: quadratics, quadrics, higher-order polynomials, sigmoid functions, Zernike polynomials, conics, Jacobi polynomials, and aspherics.
6. The method as recited in claim 1, wherein the information related to the first corneal surface not contained in the image data that is used to create the extended profile is based on additional images of the cornea.
7. The method as recited in claim 1, in which the extended profile is determined by extrapolating information of the first corneal surface contained in said image data.
8. The method as recited in claim 1, wherein the information related to the first corneal surface not contained in the image data that is used to create the extended profile is based on a model of the first corneal surface.
9. The method as recited in claim 1, in which the method is automatically executed.
10. The method as recited in claim 1, further comprising: determining one or more geometric metrics from the dewarped image; and, reporting and/or storing said metrics.
11. The method as recited in claim 10, in which geometric metrics can be the length or diameter, angles, area, volume, thickness, or curvature of structures found within the anterior segment.
12. An optical coherence tomographic (OCT) system for imaging of an eye of a patient comprising: a light source for generating a beam of radiation; a beam divider for separating the beam of radiation into a sample arm and a reference arm; optics for scanning the beam in the sample arm transversely over the eye; a detector for measuring light radiation returning from the eye and reference arm and for generating output signals in response thereto; and, a processor for generating image data based on the output signals, said image data containing one or more observed portions of a corneal surface, said processor functions also to dewarp a portion of the image data, in which dewarping is performed based upon an extended profile of the corneal surface determined by using one or more observed portions of said corneal surface and information related to the corneal surface not contained in the image data.
13. The OCT system as recited in claim 12, wherein said processor functions also to determine one or more geometric metrics; and, stores and/or reports to a user said one or more geometric metrics.
14. The OCT system as recited in claim 12 in which the processor for generating an image and the processor for dewarping are distinct.
15. The OCT system as recited in claim 14, in which at least one of the processors is a parallel processor.
16. The OCT system as recited in claim 12 wherein the information related to the corneal surface not contained in the image data used to derive the extended profile is based on additional images of the cornea.
17. The OCT system as recited in claim 12 wherein the information related to the corneal surface not contained in the image data used to derive the extended profile is based on a model of the corneal surface.
18. A method to adjust automatically an optical coherence tomographic (OCT) system to optimize the locations of structures found in the anterior segment of an eye of a patient, said OCT system having a sample arm and a reference arm, the relative positions thereof defining a delay position, comprising: obtaining OCT image data at an initial delay position; processing said OCT image data to identify a set of structures detected therein; ascertaining the location of one or more corneal surfaces using one or more structures in the set; identifying an amount of overlap of the one or more corneal surfaces with other structures in the set; and, adjusting the delay position to reduce the amount of overlap in said OCT image data.
19. The method as recited in claim 18, in which the amount of overlap is determined by statistical metrics.
20. The method as recited in claim 18, in which the amount of overlap is determined by statistical analyses of a watershed line by one or more connectivity metrics.
21. The method as recited in claim 18, in which the B-scan contains data on both sides of the zero-delay position.
22. The method as recited in claim 18, in which the structures include corneal anterior and posterior surfaces and mirror images of the iris and/or the crystalline lens.
23. The method as recited in claim 18, in which the adjustment of the OCT system to reduce or eliminate said overlap occurs in real-time.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(35) A generalized Fourier Domain optical coherence tomography (FD-OCT) system used to collect an OCT dataset suitable for use with the present set of embodiments is illustrated in
(36) Light from source (201) is routed, typically by optical fiber (205), to illuminate the sample (210), which could be any of the tissues or structures within an eye. The light is scanned, typically with a scanner (207) between the output of the fiber and the sample, so that the beam of light (dashed line 208) is scanned over the area or volume to be imaged. Light scattered from the sample is collected, typically into the same fiber (205) used to route the light for illumination. Reference light derived from the same source (201) travels a separate path, in this case involving fiber (203) and retro-reflector (204). Those skilled in the art recognize that a transmissive reference path or arm can also be used. The delay difference between the reference and sample arms determines which axial locations of the sample are imaged. The delay difference can be controlled by adjusting a delay line in either the reference or sample arm of the system or by changing the location of the patient relative to the instrument along the direction of the light path. Delay adjustment as used herein refers to any adjustment that alters the optical path length difference between the sample and reference arms. Collected sample light is combined with reference light, typically in a fiber coupler (202), to form light interference at a detector (220), which generates signals in response to the interfering light. The output from the detector is supplied to one or more processors (221). The results can be stored or further processed in one or more processors and/or displayed on display (222).
(37) The processing and storing functions may be localized within the OCT instrument or functions may be performed on an external processing unit to which the collected data is transferred. This unit could be dedicated to data processing or perform other tasks which are quite general and not dedicated to the OCT device. The display can also provide a user interface for the instrument operator to control the collection and analysis of the data. The interface could contain knobs, buttons, sliders, touch screen elements or other data input devices as would be well known to someone skilled in the art. One or more of the processors can be of the parallel processing type such as GPUs, FPGAs, or multi-core processors. As
(38) The interference between the light returning from the sample and from the reference arm causes the intensity of the interfered light to vary across the spectrum. The Fourier transform of the interference light reveals the profile of scattering intensities at different path lengths, and therefore scattering as a function of depth (z-direction) in the sample. The scattering profile as a function of depth is called an axial scan (A-scan). A set of A-scans measured at neighboring locations in the sample produces a cross-sectional image (tomogram or B-scan) of the sample. A collection of B-scans makes up a data cube or volume. It should be noted, however, that the application of these methods need not be limited to data acquired via FD-OCT; they could also be applied to data acquired via other OCT variants including TD-OCT and could be applied to parallel OCT techniques such as line field, partial field and full field as well as traditional point scanning systems.
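The relationship described above — the Fourier transform of the spectral interference revealing scattering as a function of depth — can be sketched with a simulated single-reflector interferogram. This sketch is illustrative only and not part of the original disclosure; all signal parameters are hypothetical.

```python
import numpy as np

# Illustrative sketch: a single reflector at depth index z0 produces a
# cosine fringe across the spectrum; the inverse Fourier transform of
# that spectrum recovers the scattering profile vs depth (the A-scan).
n_samples = 1024
k = np.linspace(0.0, 2 * np.pi, n_samples, endpoint=False)  # wavenumber axis
z0 = 100  # hypothetical depth index of the reflector

spectrum = 1.0 + 0.5 * np.cos(k * z0)  # DC background + interference fringe

a_scan = np.abs(np.fft.ifft(spectrum))
a_scan[0] = 0.0  # suppress the DC peak from the constant background

# The reflector appears as a peak at depth index z0; its complex
# conjugate appears mirrored at index n_samples - z0.
peak_depth = int(np.argmax(a_scan[: n_samples // 2]))
```

A B-scan is then simply a stack of such A-scans acquired at neighboring transverse locations; the mirrored conjugate peak is the source of the complex-conjugate overlap discussed below.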
(39) The sample and reference arms in the interferometer could consist of bulk-optics, fiber-optics or hybrid bulk-optic systems and could have different architectures such as Michelson, Mach-Zehnder, or common-path based designs as would be known by those skilled in the art. Light beam as used herein should be interpreted as any carefully directed light path. In time-domain systems, the reference arm needs to have a tunable optical delay to generate interference. Balanced detection systems are typically used in TD-OCT and SS-OCT systems, while spectrometers are typically used at the detection port for SD-OCT systems. The embodiments described herein could be applied to any type of OCT system that uses an inverse Fourier transform.
(40) Optimum Placement of Structures within OCT Anterior Segment Images
(41) Automated alignment and acquisition for OCT anterior segment imaging would allow fast alignment of the OCT system even in the presence of unwanted artifacts caused by overlap of complex-conjugate images with real images.
(42) A generalized embodiment of this technique for achieving optimum placement of structures within OCT anterior segment images is summarized in
(43) A preferred embodiment is presented in
(44) To enable the identification of structures within the image data, edge detection (302) is then performed on the B-scan image or its reduced-size version. This procedure ultimately results in an edge image in which a pixel with a value of “1” represents an edge (see, e.g., Canny 1986). A gradient image is converted into an edge image by applying a threshold: any pixel value greater than or equal to a pre-defined value is given the value of one, and any pixel value not satisfying this requirement is given the value of zero. The skilled person in the art will readily recognize that determining threshold values is standard practice (see, e.g., Parker 1997).
(45) While Canny edge detection is the preferred algorithm, other approaches, with subsequent pixel thresholding/binarization, would be tractable as well. The Canny algorithm converts the initial intensity image into a gradient image by the use of a derivative function, such as a Gaussian derivative. Canny edge detection produces an edge image that most likely contains all of the desired anterior surface edges, as well as undesired edges from other surfaces. Besides this functional characterization, alternative functional forms that could be convolved with the intensity data in the axial dimension to create a gradient image include the Prewitt and Sobel operators, the Laplacian, Kirsch compass kernels, Marr-Hildreth, difference of Gaussians, Laplacian of Gaussians, higher-order Gaussian derivatives, the Roberts cross, the Scharr operator, the Ricker wavelet, Frei-Chen, or any discrete differentiation operator well known to the ordinary skilled person in the art.
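The gradient-then-threshold step can be sketched with a Gaussian-derivative kernel applied along the axial dimension, followed by binarization. This is a hedged, simplified stand-in for full Canny edge detection (no non-maximum suppression or hysteresis); the kernel width, threshold, and synthetic B-scan are assumptions.

```python
import numpy as np

def gradient_edge_image(img, sigma=1.5, threshold=0.2):
    """Convolve each A-scan (column) with a derivative-of-Gaussian
    kernel along the axial dimension, take the magnitude, and binarize
    at a fraction of the maximum response. Simplified illustration."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    # Derivative of a Gaussian: responds strongly to intensity steps.
    kernel = -x * np.exp(-x**2 / (2 * sigma**2)) / sigma**2
    grad = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, img)
    grad = np.abs(grad)
    # Pixels at or above the relative threshold become edge pixels (1).
    return (grad >= threshold * grad.max()).astype(np.uint8)

# Synthetic B-scan: a bright band starting at row 20 in every column.
bscan = np.zeros((64, 32))
bscan[20:28, :] = 1.0
edges = gradient_edge_image(bscan)
```

Swapping the kernel for a Sobel, Scharr, or difference-of-Gaussians operator, as listed above, changes only the `kernel` line.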
(46) Additional approaches can use multi-scale techniques such as Log-Gabor wavelets and phase congruency (Kovesi 2003) to generate gradient images and extract image features to aid in the identification of structures within the image. Phase congruency is highly localized and invariant to image contrast, which leads to reliable image feature detection under varying contrast and scale. The method of phase congruency also applies to features with small derivatives or smooth steps, where other methods fail.
(47) Within the edge image, there are gaps between neighboring segments (each consisting of several adjacent points). These gaps can be connected (303) by searching, within a specified radius of the end of such a segment, for any points or other segment endpoints that lie within said radius. A problem exists in that the origins of these edges are not readily identifiable, and thus the edges cannot be used without further processing. The edge information is used to estimate the initial positions of the anterior surface (for example in the application of step 305 in
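The gap-connection step (303) — searching within a specified radius of a segment endpoint for other nearby endpoints — might be sketched as follows; the endpoint coordinates and radius are hypothetical values for illustration.

```python
import numpy as np

def link_segment_gaps(endpoints, radius=5.0):
    """Return index pairs of edge-segment endpoints lying within a
    given search radius of one another; such pairs would be joined to
    close gaps in the edge image. Simplified brute-force sketch."""
    endpoints = np.asarray(endpoints, dtype=float)
    links = []
    for i in range(len(endpoints)):
        for j in range(i + 1, len(endpoints)):
            # Euclidean distance between the two endpoints.
            if np.hypot(*(endpoints[i] - endpoints[j])) <= radius:
                links.append((i, j))
    return links

# Two endpoints 3 px apart are linked; a distant endpoint is not.
pairs = link_segment_gaps([(0, 0), (3, 0), (50, 50)], radius=5.0)
```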
(48) Once the gaps have been filled, quadratic functions (or any other even function that fits a 2D slice through the corneal structure well, the cornea being naturally very close in shape to a conic curve) are then robustly fitted to the identified connected edges (304). The number of quadratic functions that are fitted depends on the number of connected edges found in the image, sub-image, or region-of-interest (ROI). This number may be significantly greater than the number of anatomical edges found in the sample, because many of the “edges” identified by Canny edge detection (or other edge detection algorithms) may be due to noise or to mirror images.
(49) The fitting function with the simplest form is a parabola (y = ax² + bx + c). This quadratic function's parameters can be determined using Random Sample Consensus (RANSAC) fitting (see Fischler & Bolles 1981). From the quadratic functions fitted to the connected edges (see 601 of
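A minimal sketch of RANSAC parabola fitting as just described — fit an exact quadratic through random minimal samples, count inliers, then refit on the consensus set. The iteration count, inlier tolerance, and synthetic data are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def ransac_parabola(x, y, n_iter=200, tol=1.0, seed=0):
    """Fit y = a*x^2 + b*x + c by random sample consensus: repeatedly
    fit an exact quadratic through three random points, keep the model
    with the most inliers, then refit by least squares on the inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(x), size=3, replace=False)
        coeffs = np.polyfit(x[idx], y[idx], 2)  # exact fit to 3 points
        resid = np.abs(np.polyval(coeffs, x) - y)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return np.polyfit(x[best_inliers], y[best_inliers], 2), best_inliers

# Synthetic 'connected edge': a parabola with gross outliers mixed in,
# standing in for noise edges or mirror-image edges.
x = np.arange(-50.0, 51.0)
y = 0.01 * x**2
y[::17] += 40.0  # outliers
coeffs, inliers = ransac_parabola(x, y)
```

The inlier mask returned here corresponds to the RANSAC-detected surface points referred to in the following paragraphs.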
(50) In
(51) The coordinates of the connected edges (in the edge image) associated with the quadratic function represent the anterior (or posterior) surface (inliers detected by RANSAC). Any discontinuities in the anterior surface can be removed by using a fast edge-linking algorithm (for example, in the application of step 303 in
(52) Statistical metrics of a small region above the anterior surface can be derived and compared to background statistical metrics of a similar nature (307), and this comparison (308) indicates whether the central cornea intersects with the mirror image of the iris or lens surface. Statistical metrics could include measures of statistical dispersion and/or moments, such as the mode, mean, median, skewness, and kurtosis of the distributions of pixel values in the desired sub-images. Other statistical analyses to identify a region or sub-image with an excess of intensities above the background would be readily apparent to the skilled person in the art.
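One hedged way to realize the comparison of statistics in a band above the anterior surface against background statistics (steps 307-308) is sketched below; the band size, the corner background patch, and the synthetic B-scan are all assumptions for illustration.

```python
import numpy as np

def region_overlap_score(image, surface_rows, band=10):
    """Mean intensity in a band just above the detected anterior
    surface, divided by the mean of an assumed structure-free
    background patch. A ratio well above 1 suggests that mirror-image
    structure (e.g. the iris) overlaps the region above the cornea."""
    samples = []
    for col, r in enumerate(np.asarray(surface_rows)):
        samples.extend(image[max(0, r - band):r, col])
    background = image[:5, :5]  # assumed structure-free corner patch
    return float(np.mean(samples)) / (float(np.mean(background)) + 1e-9)

# Synthetic B-scan: anterior surface at row 30 in every column.
img = np.full((64, 32), 0.05)   # background speckle level
img[30, :] = 1.0                # anterior corneal surface
clean = region_overlap_score(img, [30] * 32)

img[22:26, :] = 0.8             # simulated overlapping mirror iris
overlapped = region_overlap_score(img, [30] * 32)
```

Higher moments (skewness, kurtosis) of the same two pixel-value samples could be compared in exactly the same way.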
(53) A decision (308) is then possible as to whether the initial estimated value of the position of the delay is sufficiently accurate for the purposes at hand or whether a more refined value is required. If overlap persists, then the delay-line position is readjusted (309), and the method repeated.
(54) For greater precision in determining the optimum position of the delay there exists an extension to the method of
(55) The Hausdorff distance (h) measures how far two subsets of a metric space are from one another. Two sets are close in the Hausdorff distance if every point of either set is close to some point of the other set. Note that zero-delay points are considered to be those points within the lateral endpoints of the anterior surface. The objective function based on the Hausdorff distance is defined as follows:
max[h(A,B)], where h(A,B)>0, and,
h(A,B)=max{min[abs(a−b)]}, where a∈A, b∈B, (Eq. 1)
and set A represents the set of points of the anterior surface, while set B represents the set of points of the zero-delay positions in the image.
(56) While this embodiment has used the Hausdorff distance, any objective function appropriate to defining non-overlapping real and complex-conjugate images could be used. Such functions are standard in optimization techniques. The determination of the extrema (minimum or maximum) of such functions leads to the optimum solution. An objective function may have a variety of components (rather than just one), allowing it to find the optimum position while balancing potential conflicts between the components. One such function (or metric) might be a weighted sum or a product of the cumulative values within certain pre-determined areas. In this case, the dependent variable used in steps 350 and 351 of
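The directed Hausdorff distance of Eq. 1 between the anterior-surface points (set A) and the zero-delay positions (set B) can be sketched for scalar axial coordinates as follows; the point sets are illustrative assumptions.

```python
import numpy as np

def directed_hausdorff(A, B):
    """h(A, B) = max over a in A of min over b in B of |a - b|, for
    1-D point sets, matching Eq. 1. Illustrative sketch only."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    # Pairwise absolute differences; min over B, then max over A.
    return float(np.max(np.min(np.abs(A[:, None] - B[None, :]), axis=1)))

# Axial row positions of anterior-surface points (set A) versus the
# zero-delay line (set B, here a constant row). A larger h means the
# surface is better separated from the zero delay.
anterior_rows = [40, 38, 36, 35, 36, 38, 40]
zero_delay_rows = [0] * len(anterior_rows)
h = directed_hausdorff(anterior_rows, zero_delay_rows)
```

Maximizing `h` over candidate delay positions, subject to `h > 0`, is the optimization stated in Eq. 1.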
(57) In an alternative aspect of the present application, if the shortest distance (d_sh) between the vertex position of the anterior surface and the zero-delay is smaller than a threshold, then there will be unwanted image overlap. The delay position will then need to be adjusted accordingly and the process repeated until the threshold distance has been met (meaning a clean separation between mirror and real images). Techniques for determining a threshold value are well-known to the skilled person in the art (see, e.g., Parker 1997). Alternatively, the threshold may be known in advance for a particular instrument during calibration.
(58) Another method of characterizing the overlap of real and complex conjugate images lies in deriving a watershed line or function. A grey-level image (an OCT B-scan, for example) may be seen as a topographic relief, where the grey level of a pixel is interpreted as its altitude in the relief. A drop of water falling on a topographic relief flows along a path to finally reach a local minimum. Intuitively, the watershed of a relief corresponds to the limits of the adjacent catchment basins of the drops of water. In image processing, different types of watershed lines may be computed. Watersheds may also be defined in the continuous domain, which is the context used in the present application. There are many different algorithms to compute watersheds, and they are often used in image processing primarily for segmentation purposes. A watershed line is obtained by performing a distance transform on a binarized image. A common distance transform is the Euclidean one, although other transforms are also applicable to this problem, as would readily be recognized by the ordinary skilled person in the art.
(59) In the current application, a watershed line will possess continuous connectivity if there is no intersection or overlap of a mirror image with a real image. These two images, real and mirror, are the catchment basins referred to above. If there is overlap between the real and mirror images, then the watershed line will not be continuous, and will not exist in the regions of overlap. (Additional information on the watershed function or line may be found in Meyer 1994, or in any standard text on morphological image processing, such as Russ 2011.) The watershed is derived from a distance-transform image, and the length of the watershed (or the connectivity of local maxima) in the region above the anterior corneal surface can identify an overlap.
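A full distance-transform watershed is beyond a short sketch, but the underlying idea — that the line separating the mirror and real "catchment basins" loses connectivity wherever the two images overlap — can be illustrated with a simplified per-column gap test. This stand-in is an assumption for illustration, not the watershed algorithm of the disclosure.

```python
import numpy as np

def watershed_line_connectivity(binary_bscan):
    """For each column, look for a background gap separating structure
    in the upper part of the image (mirror) from the real surface
    below; the midpoints of these gaps approximate a watershed line.
    The line exists only in columns with such a gap, so the fraction
    of columns with a gap serves as a simple connectivity metric."""
    n_cols = binary_bscan.shape[1]
    line_rows = np.full(n_cols, -1)
    for col in range(n_cols):
        fg = np.flatnonzero(binary_bscan[:, col])
        if len(fg) < 2:
            continue
        gaps = np.diff(fg)  # run lengths between structure pixels
        if gaps.max() > 1:
            i = int(np.argmax(gaps))
            line_rows[col] = (fg[i] + fg[i + 1]) // 2
    connectivity = float(np.mean(line_rows >= 0))
    return line_rows, connectivity

# Mirror iris image (rows 5-8) well separated from the cornea (row 30).
img = np.zeros((64, 32), dtype=np.uint8)
img[5:9, :] = 1
img[30, :] = 1
_, separated = watershed_line_connectivity(img)

img[9:31, 10:20] = 1  # simulated overlap in the central columns
_, overlapped = watershed_line_connectivity(img)
```

With no overlap the line exists in every column (connectivity 1.0); where the mirror intersects the real surface, the line vanishes and the metric drops.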
(60)
(61) Also displayed on this
(62) A region of interest (ROI) is then extracted (1003) from the distance-transform image, where the ROI can be defined as the upper half of the original image, shown as the box (804a) in
(63) In contrast, a single long path indicates that no intersection occurred between the laterally central or upper corneal surfaces and the mirror of the iris or lens surface. Iterating on this procedure (1005) can then determine an optimal value for the delay position (Δ) by re-adjusting Δ (1006) and repeating the steps.
(64) A connectivity metric can be defined, which could be any of the following or any weighted combination thereof: the sum, mean, or distribution of the values in the distance-transform image along the watershed line. The shape of this distribution, quantifiable by standard analyses of distributions (kurtosis, skewness, mean, median, mode, and various moments), would identify the ‘connectivity’ of the watershed line. Also, a plot of the positions of the local maxima as a function of the lateral coordinate (within a range of lateral coordinates) would reveal gaps in the distribution in the case of a lack of connectivity, whereas with full connectivity the distribution would be continuous within the given range. The range would be derived in advance during a calibration phase of the instrument.
(65) Thus in the aforementioned, two image quality metrics have been defined: statistical metrics and connectivity metrics. These can be used to determine optimal positioning of the delay. The calculated adjustment in the delay position can be performed automatically or reported to the operator who then can implement the change manually.
(66) Optimal corneal positioning can be accomplished when there is no overlap between the cornea and the mirror images of the iris (or crystalline lens).
(67) Another improvement in the amount of corneal surface imaged can be accomplished by decreasing the distance between the anterior corneal surface and the nearest surface of the mirror iris image, to a value that is not below a certain minimum or threshold (the second distance metric). This distance determination can be based upon the afore-disclosed embodiments. For example, converting
(68) Dewarping
(69) OCT has become an important tool for assessment of pathologies or irregularities of structures found within the anterior segment or chamber. However, measuring an angle or any geometric structure in anterior segment OCT images before corrections for beam geometry or refraction leads to inaccurate values. To correct for refraction error, one needs to detect the anterior and/or posterior corneal surfaces in the image. Often these surfaces are partially captured in a scan's field-of-view as shown in the
(70)
(71) In
(72) If it is desired to measure the iridocorneal angle accurately then, at a minimum, the measurement points need to be correctly dewarped. The segmentation of the anterior and/or posterior surfaces of the cornea is used to provide the refractive surfaces needed to dewarp the acquired image, as disclosed in the present application. If the angle were placed near the middle or upper portion of the image, the extent of either or both of the anterior and posterior corneal surfaces would likely be insufficient for dewarping purposes.
(73)
(74) The image in
(75) The measured angle (1401) in the undewarped (
(76) In order to overcome the aforementioned problems, such as the artifacts associated with standard dewarping, two embodiments are outlined in
(77) A basic embodiment of this dewarping approach is presented in
(78) In an exemplary embodiment shown in
(79) The next step (1653) of the method is to locate the initial anterior corneal surface points in the OCT image in
(80) The segmented surface points determined in step 1655 along with a priori information about the corneal surface can be used to accurately determine an extended profile of the corneal surface. For instance, the corneal surface profile or model and curvature information can be extracted from existing central corneal scans of the same subject. This information can help to constrain and construct an extrapolating function.
(81) In the preferred embodiment, a flexible sigmoid function (1656) is fitted to the segmentation data available in the image. This function can be considered an extrapolation profile, or model, of the corneal surface, fitted to the anterior segmentation data in the physical domain according to:
(82)
(83) The parameters of this function are constrained using prior knowledge about the extended corneal surface profile or model, if either one is available.
(84) After the fit, the anterior segmentation data and the fit are combined or stitched together (1657) to create a surface that represents the anterior corneal surface. There is the option to smooth the combined profile using B-spline interpolation. The resultant surface can then be used to dewarp (1658) the angle image or other nearby structures. After this step, reliable biometric measurements (1659) can be performed. An exemplary geometric metric is the irido-corneal angle. While the above-discussed embodiments emphasized the anterior corneal surface, the method is equally applicable to the posterior surface. The most preferred corneal surface to fit is the anterior surface, as it provides the most accurate dewarping. The next most preferred surface is the posterior one. A third alternative is to use both corneal surfaces in the dewarping procedure, as will be discussed in further detail below.
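The specific flexible sigmoid of the disclosure is not reproduced here; purely as an illustration of fitting a sigmoid profile to a partial surface fragment and then extrapolating it (steps 1656-1657), the sketch below uses a generic three-parameter logistic and a coarse grid search. The functional form, parameter grids, and synthetic data are all assumptions.

```python
import numpy as np

def fit_sigmoid(x, z):
    """Fit z(x) = A / (1 + exp(-k*(x - x0))) by a coarse grid search
    over (k, x0), solving the amplitude A in closed form by least
    squares at each grid point. Illustrative stand-in only."""
    best = None
    for k in np.arange(0.01, 0.51, 0.01):
        for x0 in np.linspace(-50.0, 90.0, 141):
            basis = 1.0 / (1.0 + np.exp(-k * (x - x0)))
            A = float(basis @ z) / float(basis @ basis)
            err = float(np.sum((A * basis - z) ** 2))
            if best is None or err < best[0]:
                best = (err, A, k, x0)
    return best[1], best[2], best[3]

# Observed fragment of an anterior surface on one side of the image.
x_obs = np.arange(0.0, 40.0)
z_obs = 120.0 / (1.0 + np.exp(-0.1 * (x_obs - 30.0)))  # synthetic data

A, k, x0 = fit_sigmoid(x_obs, z_obs)

# Extended profile: evaluate the fitted sigmoid beyond the fragment.
x_full = np.arange(0.0, 120.0)
z_ext = A / (1.0 + np.exp(-k * (x_full - x0)))
```

In the method of the disclosure, the observed segmentation points would then be stitched to this extended profile (and optionally smoothed with a B-spline) before dewarping.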
(85) There are other functions that can be used in extrapolations, for example, a quadratic function:
ax² + bxz + cz² + dx + ez + ƒ = 0.
(86) The parameters can be constrained to construct the corneal surface for dewarping. Other functional forms can be quadratics, quadrics, conics, sigmoid functions, aspherics (Forbes 2007), higher-order polynomials, and Zernike or Jacobi polynomials.
(87) The image depicted in
(88) In the case of wide-angle anterior segment imaging, where the image possesses diagonally opposite irido-corneal angles, a similar processing technique is also applicable.
(89) Dewarping using both corneal surfaces can proceed by two different methods. In the first method, a sequential approach, the image is first dewarped using the anterior surface and everything below that surface is dewarped, including the posterior surface. The next step is then to use the dewarped posterior surface, to perform a second dewarping of the image found below that posterior surface. The same algorithm can be used for each of these two dewarpings, but only the input (i.e., which corneal surface) to the algorithm changes.
(90) The second method is to dewarp, point-by-point, using a ray-trace approach, correcting each ray for the refractive properties of the cornea and its surfaces. This second method is rather time consuming and is one that is usually not adopted.
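A minimal per-point sketch in the spirit of the ray-trace method above: refract a vertical ray at the local surface normal via Snell's law and convert optical path length to geometric length below the surface. The corneal group index value (1.3774) and the flat-surface test case are assumptions for illustration, not parameters from the disclosure.

```python
import numpy as np

def dewarp_column_points(x, surface_z, surface_slope, depths,
                         n_cornea=1.3774):
    """Reposition points measured below the anterior surface in one
    vertical A-scan: refract the ray at the surface (Snell's law) and
    scale the optical path length by the assumed refractive index."""
    theta_i = np.arctan(surface_slope)               # angle of incidence
    theta_t = np.arcsin(np.sin(theta_i) / n_cornea)  # angle of refraction
    bend = theta_i - theta_t                         # deviation from vertical
    out = []
    for d in np.asarray(depths, dtype=float):
        geom = d / n_cornea                          # optical -> geometric
        out.append((x + geom * np.sin(bend),
                    surface_z + geom * np.cos(bend)))
    return out

# Sanity case: at normal incidence (flat surface), a point 1.3774
# optical units deep maps straight down by 1.0 geometric unit.
pts = dewarp_column_points(x=0.0, surface_z=10.0, surface_slope=0.0,
                           depths=[1.3774])
```

The sequential two-surface method would apply this correction first with the anterior surface, then repeat it below the (now dewarped) posterior surface.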
(91) It should be noted that in order to achieve proper dewarping, segmentation of the layer interfaces is imperative. Additional information regarding segmentation and dewarping techniques may be found in US2013208240.
(92) Accurate dewarping can permit additional geometric measurements or metrics to be performed reliably. These include geometric metrics associated with the anterior chamber, such as width, volume, area, and diameter (Radhakrishnan & Yarovoy 2014; Nongpiur et al. 2010; Narayanaswamy et al. 2011; Wu et al. 2011). Geometric metrics associated with the iris include: area, thickness, curvature, and volume (Quigley 2009). Also measurable is the lens vault, which is defined as the perpendicular distance between the anterior pole of the lens and the horizontal line connecting the two scleral spurs (Nongpiur et al. 2011).
(93) Tracking Applications Using Anterior Segment Structures
(94) Tracking of eye movements is often accomplished by using a secondary (often non-OCT) imaging modality based upon a reference feature, structure, or mark seen within the eye. When imaging the retina, reference marks can be anatomical features such as the optic nerve head, blood vessels, or the fovea. When imaging the anterior segment, and in particular when there is a limited field-of-view of the structures within the anterior segment, the iridocorneal angle provides a well-defined geometric structure that can be used as a reference mark. Another geometric structure that could be used as a reference mark is the rear surface of the iris. In this embodiment, OCT images can be acquired and aligned in such a way that the iridocorneal angle can be captured with the desired coordinates in the image domain. This is important for the following reasons: visibility of other geometric structures in the anterior segment; an efficient workflow; and accurate measurements. In the case of the enhanced depth imaging mode (Spaide et al. 2008; U.S. Pat. No. 8,605,287), it is desired to place the angle close to the zero-delay, which is at the bottom of the image. This produces better signal and visibility of structures such as the corneal surfaces, which are required for dewarping.
(95) Historically, manual alignment of the delay position has been time consuming and may not be successful with a limited field-of-view. The smallest eye motion in the axial/longitudinal or lateral directions can easily move the angle from the desired location. Thus a real-time or fast automatic method to track and align is desirable. Moreover, with the implementation of tracking and accurate alignment in the anterior segment, precise positioning control of the placement of ROIs within an image becomes viable.
(96)
(97)
(98)
min ƒ(x,y), x∈[1,w], y∈[1,h],
where w and h are the image width and height, respectively.
(99) In
(100) The execution time of the algorithm can be reduced by isolating and processing an ROI (see 3301 in
(101) An alternative approach to that disclosed hereinabove is to use a template for the iridocorneal angle within a matching or cross-correlation method. Once the angle position for a given image has been detected, an ROI centered at that angle position can be extracted from the input image as a template. This template is then input to a template-matching step for subsequently obtained images. The template can be updated during alignment and/or tracking using the current (or present) image. The template-matching function that is maximized could be a 2-D normalized cross-correlation function or similar, as would be readily recognized by the ordinary skilled person in the art. Although various applications and embodiments that incorporate the teachings of the present application have been shown and described in detail herein, those skilled in the art can readily devise other varied embodiments that still incorporate these teachings.
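The template-matching step described above can be sketched with a brute-force 2-D normalized cross-correlation; the synthetic frame, template size, and simulated eye shift are hypothetical.

```python
import numpy as np

def match_template_ncc(image, template):
    """Slide a template over the image and return the offset with the
    highest normalized cross-correlation. Minimal brute-force sketch
    of the angle-tracking alternative."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum()) + 1e-12
    best, best_score = (0, 0), -2.0
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            score = (w * t).sum() / (np.sqrt((w * w).sum()) * t_norm
                                     + 1e-12)
            if score > best_score:
                best, best_score = (r, c), score
    return best, best_score

# Track a synthetic 'angle' ROI after a small simulated eye movement.
frame = np.random.default_rng(1).random((40, 40))
template = frame[20:28, 12:20].copy()   # ROI centered on the feature
shifted = np.roll(np.roll(frame, 2, axis=0), 3, axis=1)  # eye moved
loc, score = match_template_ncc(shifted, template)
```

Updating `template` from the current frame after each match, as the text describes, keeps the tracker robust to slow appearance changes.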
REFERENCES
Patent Literature
(102) US20130208240 US20130188140 US20120249956 US20120271288 U.S. Pat. No. 8,414,564 U.S. Pat. No. 7,884,945 U.S. Pat. No. 8,605,287 WO2014021782
Non-Patent Literature
(103) Canny 1986, IEEE Trans Patt Anal Mach Intell, PAMI-8(6), 679-98. Fischler & Bolles 1981, Comm ACM 24(6), 381-95. Hofer et al. 2002, Opt Expr 18(5), 4898-919. Westphal et al. 2002, Opt Expr 20(9), 397-404. Wojtkowski et al. 2002, Opt Lett 27(16), 1415-17. Zhang et al. 2005, Opt Lett 30(2), 147-49. Kovesi 2003, Proc. DICTA, 309-318. Zhang et al. 2004, Opt Exp 12, 6033-6039. Van Herick et al. 1969, Am J Ophthal 68, 626-629. Smith 1979, Br J Ophthal 63, 215-220. Friedman & He 2008, Surv Ophthal 53, 250-273. Westphal et al. 2002, Opt Exp 10, 397-404. Ishikawa & Schuman 2004, Ophthal Clin North Am 7, 7-20. Kai-shun et al. 2008, Invest Ophthal Vis Sci 49, 3469-3474. Wirbelauer et al. 2005, Arch Ophthal 123, 179-185. Ortiz et al. 2010, Opt Exp 18, 2782-2796. Ortiz et al. 2009, App Opt 48, 6708-6715. Radhakrishnan et al. 2007, Invest Ophthal Vis Sci 48, 3683-3688. Wu et al. 2011, Arch Ophthalmol 129(5), 569-574. Narayanaswamy et al. 2011, Arch Ophthalmol 129(4), 429-434. Nongpiur et al. 2010, Ophthalmol 117, 1967-1973. Radhakrishnan & Yarovoy 2014, Curr Opin in Ophthal 25, 98-103. Quigley 2009, Am J Ophthalmol 148, 657-669. Quigley et al. 2009, J Glaucoma 18, 173-179. Spaide et al. 2008, Am J Ophthalmol 146, 496-500. Forbes 2007, Opt Exp 15, 5218-5226. Wang 2007, App Phys Lett 90, 054103. Baumann et al. 2007, Opt Expr 15, 13375-13387. Maurer et al. 2003, IEEE Trans Patt Anal Mach Intell 25, 265-270. Rosenfeld & Pfaltz 1966, J Ass Comp Mach 13, 471-494. Paglieroni 1992, Comp Vis Graph Image Proc 54, 57-58. Meyer 1994, Sig Proc 38, 113-125.
Books
(104) Parker 1997, Algorithms for Image Processing and Computer Vision, Wiley: New York, ISBN: 047114056-2. Umbaugh 2010, Digital Image Processing and Analysis, 2nd Edition, ISBN: 978-1439802052, CRC Press. Rucklidge, W. 1996, Ph.D. thesis, in Lecture Notes in Computer Science (Book 1173), ISBN 9783540619932, Springer Verlag. Russ 2011, The Image Processing Handbook, 6th edition, ISBN: 978-1439840450.