REDUCING SPECKLE NOISE IN OPTICAL COHERENCE TOMOGRAPHY IMAGES

20170131082 · 2017-05-11


    Abstract

    A method and system are proposed to obtain a reduced speckle noise image of a subject from optical coherence tomography (OCT) image data of the subject, the image data comprising a plurality of cross sectional images. The cross sectional images each comprise a plurality of scan lines obtained by measuring the time delay of light reflected, in a depth direction, from optical interfaces within the subject. The method comprises two aligning steps. First the cross sectional images are aligned, then image patches of the aligned cross sectional images are aligned to form a set of aligned patches. An image matrix is then formed from the aligned patches; and matrix completion is applied to the image matrix to obtain a reduced speckle noise image of the subject.

    Claims

    1. A computer implemented method of processing optical coherence tomography (OCT) image data, the OCT image data comprising a plurality of cross sectional images of a subject, the cross sectional images each comprising a plurality of scan lines obtained by measuring the time delay of light reflected, in a depth direction, from optical interfaces within the subject, the method comprising aligning the cross sectional images by determining relative translations for each cross sectional image in the depth direction and in a lateral direction, perpendicular to the depth direction, to form a set of aligned cross sectional images; aligning image patches of the aligned cross sectional images, each image patch comprising at least one scan line, by determining relative translations for each patch in the depth direction to form a set of aligned patches; forming an image matrix from the aligned patches; and applying matrix completion to the image matrix to obtain a reduced speckle noise image of the subject.

    2. The method according to claim 1, wherein applying matrix completion to the image matrix comprises decomposing the image matrix into a matrix representing a clean image part and a matrix representing a noise part.

    3. The method according to claim 1, wherein applying matrix completion to the image matrix comprises decomposing the image matrix into a matrix representing a clean image part, a matrix representing a sparse part and a matrix representing a noise part.

    4. The method according to claim 2, wherein the matrix representing the clean image part is a low rank matrix.

    5. The method according to claim 1, further comprising applying a preliminary de-noising process to the cross sectional images prior to aligning the cross sectional images.

    6. The method according to claim 5, wherein the preliminary de-noising process is a speckle reduction anisotropic diffusion algorithm.

    7. The method according to claim 1 further comprising applying a filter to the reduced speckle noise image of the subject.

    8. The method according to claim 7, wherein the filter is an adaptive Wiener filter.

    9. The method according to claim 1 wherein aligning the cross sectional images comprises applying a block matching algorithm.

    10. The method according to claim 1 wherein aligning image patches of the aligned cross sectional images comprises applying a block matching algorithm between image patches of different cross sectional images.

    11. The method according to claim 9 wherein the block matching algorithm is the diamond search algorithm.

    12. The method according to claim 1 further comprising applying a smoothing process to the relative translations in the depth direction for each slice in the same cross sectional image to form the set of aligned patches.

    13. A computer system for processing optical coherence tomography (OCT) image data, the OCT image data comprising a plurality of cross sectional images of a subject, the cross sectional images each comprising a plurality of scan lines obtained by measuring the time delay of light reflected, in a depth direction, from optical interfaces within the subject, the computer system having at least one processor and a data storage device storing program instructions, the program instructions being operative upon being run by the processor to cause the processor to: align the cross sectional images by determining relative translations for each cross sectional image in the depth direction and in a lateral direction, perpendicular to the depth direction, to form a set of aligned cross sectional images; align image patches of the aligned cross sectional images, each image patch comprising at least one scan line, by determining relative translations for each patch in the depth direction to form a set of aligned patches; form an image matrix from the aligned patches; and apply matrix completion to the image matrix to obtain a reduced speckle noise image of the subject.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0020] In the following an embodiment of the present invention is described by way of example only with reference to the drawings, in which:

    [0021] FIG. 1 is a flow chart illustrating a method of processing optical coherence tomography (OCT) data according to an embodiment of the present invention.

    [0022] FIG. 2 shows an example of the preliminary de-noising process in an embodiment;

    [0023] FIG. 3, which is composed of FIGS. 3a and 3b, illustrates the image registration steps; FIG. 3a illustrates a global registration and FIG. 3b illustrates a local registration;

    [0024] FIG. 4 shows an example of image recovery according to a method in which an original image X is decomposed into the recovered image L and the noise N;

    [0025] FIG. 5 shows an example of image recovery according to a method in which an original image X is decomposed into the recovered image L, a sparse part S and the noise N;

    [0026] FIG. 6, which is composed of FIGS. 6a and 6b, shows a plot of the contrast to noise ratio (CNR) for the methods illustrated in FIGS. 4 and 5 with different numbers of slices; FIG. 6a shows results obtained using the Image J noise model; and FIG. 6b shows results obtained using the Motion noise model.

    DETAILED DESCRIPTION OF THE EMBODIMENT

    [0027] FIG. 1 is a flow chart illustrating a method of processing optical coherence tomography (OCT) data according to an embodiment of the present invention. The method may be performed by a computer system such as a standard general-purpose computer, having a data storage device storing program instructions to implement the method steps.

    [0028] The input to the method is OCT data comprising a plurality of OCT slices. A requirement to use matrix completion is to obtain multiple slices for each scan. This is easy for line scan mode as each slice is scanned 96 times. For other modes such as 5 line-cross mode, 3D vertical and 3D horizontal mode, the scan from the neighbouring slice can be used as the neighbouring slices have strong similarity.

    [0029] For example, in 5 line-cross mode, we use all the horizontal/vertical scans (5×32=160) for a horizontal/vertical slice. For 3D scans, we can use all 256 scans. Although many of these scans are obtained from different locations, many of the A-scans from the slices are similar. Therefore, we might make use of this similarity for noise reduction.

    [0030] For simplicity, we assume that the input has K slices of OCT scans, including both the repeated scans from the same location and other scans from nearby locations. Each slice has p×q pixels.

    [0031] In step 1 of the method, preliminary noise reduction is performed. Because of the inevitable movement when capturing the image slices, the original image slices may not match well. Therefore, it is necessary to align them to minimize the error. The original OCT slices are likely to be corrupted by large speckle noise.

    [0032] Applying an image matching algorithm directly on these OCT slices may result in an unpredictable outcome due to the speckle noise. A pre-processing step is used to reduce the noise. Many algorithms can be used. In this embodiment we use the speckle reduction anisotropic diffusion algorithm.

    [0033] The anisotropic diffusion algorithm is implemented as follows. Given an intensity image, the output image is evolved according to the partial differential equation (PDE):

    [00001] ∂I/∂t = div[c(q)∇I],  I(t=0) = I₀,

    [0034] where c(q) denotes the diffusion coefficient computed with:

    c(q) = 1/(1 + [q²(x,y;t) − q₀²(t)]/[q₀²(t)(1 + q₀²(t))]) or

    c(q) = exp{−[q²(x,y;t) − q₀²(t)]/[q₀²(t)(1 + q₀²(t))]}.
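    The diffusion step above can be sketched in a few lines of numpy. This is a minimal illustration only, not the patented implementation: it uses the exponential form of c(q), approximates q² by the intensity-normalised squared gradient magnitude, and assumes periodic boundaries for simplicity.

```python
import numpy as np

def diffusion_step(img, dt=0.1, q0=1.0):
    """One explicit Euler step of dI/dt = div[c(q) grad I] (sketch).

    Assumptions: exponential diffusion coefficient; q^2 approximated by
    the intensity-normalised squared gradient; periodic boundaries.
    """
    I = img.astype(float)
    # forward differences for the gradient (periodic via np.roll)
    Ix = np.roll(I, -1, axis=1) - I
    Iy = np.roll(I, -1, axis=0) - I
    q2 = (Ix ** 2 + Iy ** 2) / (I ** 2 + 1e-12)
    c = np.exp(-(q2 - q0 ** 2) / (q0 ** 2 * (1.0 + q0 ** 2)))
    # divergence of c * grad I via backward differences
    div = (c * Ix - np.roll(c * Ix, 1, axis=1)) + (c * Iy - np.roll(c * Iy, 1, axis=0))
    return I + dt * div

def denoise(img, n_iter=20):
    """Iterate the diffusion step to smooth speckle-like noise."""
    out = img
    for _ in range(n_iter):
        out = diffusion_step(out)
    return out
```

    Each iteration diffuses intensity where the local coefficient of variation is small (homogeneous speckle) and preserves strong edges, which is the qualitative behaviour the preliminary de-noising step relies on.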

    [0035] FIG. 2 shows an example of the preliminary de-noising process in an embodiment. Three input slices shown on the left hand side of FIG. 2 are processed using the speckle reduction anisotropic diffusion algorithm to provide the de-noised slices shown on the right hand side of FIG. 2.

    [0036] Following the preliminary noise reduction, image registration between slices is carried out. The image registration between two slices (B-scans) is done by a global registration in step 2 followed by a local registration in step 3. Many methods can be used in the global registration.

    [0037] FIG. 3 illustrates the image registration steps. FIG. 3a illustrates the global registration which is carried out in step 2 and FIG. 3b illustrates the local registration which is carried out in step 3.

    [0038] In our implementation, the global registration is done by a global alignment, i.e., the entire slice (B-scan) is used in the matching by a translation in both the horizontal (the y-direction) and the vertical direction (the x-direction). The x-axis represents the direction in which the A-scans are carried out, that is, a depth direction into the eye or subject being imaged. The y-axis represents a lateral direction, in which the A-scans are combined to form B-scans, perpendicular to the depth direction.

    [0039] As shown in FIG. 3a, each slice B.sub.j is first translated by (x.sub.j,y.sub.j) in the global registration. This is carried out according to the following process. Take an image I.sub.i as a reference; for each j, j=1, 2, . . . K and j≠i, find the translation (x.sub.j,y.sub.j) between I.sub.i and I.sub.j such that their error is minimized. Therefore, we obtain a set of globally registered B-scan slices B.sub.i, i=1, 2, . . . K.

    [0040] In the local registration, which is shown in FIG. 3b, an A-scan line or a group of neighbouring A-scan lines from one B-scan slice is translated vertically for best matching with the corresponding A-scans in another B-scan slice.

    [0041] This is carried out according to the following process. Divide B.sub.i into non-overlapping patches A.sub.i,k, k=1, 2, . . . P, where P is the number of patches obtained. Each patch has l columns or A-scan lines. For two corresponding patches A.sub.i,k and A.sub.j,k from two B-scans B.sub.i and B.sub.j, find the vertical translation x.sub.j,k between them such that their error is minimized.

    [0042] In order to avoid large error, patches/lines with poor matching can be discarded. Lines with poor matching may be identified and discarded by comparing the error with a threshold and discarding patches/lines with an error larger than the threshold. We apply vertical matching because each vertical line is an outcome of an A-scan and the movement within one A-scan is ignored.

    [0043] Patient eye movements may become more obvious between different A-scans as time goes on. Since the eye movement is expected to be smooth, a smoothing process is applied to the vertical translations x.sub.j,k, k=1, 2, . . . for lines in the same slice. It should be noted that the above patch is a line or an A-scan if we set l=1. Including multiple scan lines in the same patch helps to obtain a robust vertical translation. In other words, if we align one scan line with another single line, the result may be more sensitive to the noise.
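    The smoothing of the per-patch vertical translations can be sketched as follows. The source does not specify which smoothing process is used, so a sliding-window median filter is assumed here purely for illustration; it suppresses isolated mismatches while following smooth eye motion.

```python
import numpy as np

def smooth_translations(x, window=5):
    """Median-smooth the vertical translations x_{j,k} of one slice.

    `window` is an assumed odd window length; edge values are replicated
    so the output has the same length as the input.
    """
    x = np.asarray(x, dtype=float)
    half = window // 2
    padded = np.pad(x, half, mode='edge')
    return np.array([np.median(padded[i:i + window]) for i in range(len(x))])
```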

    [0044] Many block matching algorithms can be used to find the alignment. In our implementation, we use the diamond search strategy for both global and local alignments because of its efficiency and easy implementation.
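    A minimal sketch of diamond-search block matching is given below. The cost function (sum of squared differences) and the use of circular shifts are assumptions made for this illustration; the source states only that the diamond search strategy is used.

```python
import numpy as np

# large and small diamond search patterns (offsets in x, y)
LDSP = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]
SDSP = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]

def ssd(ref, img, dx, dy):
    """Sum of squared differences between ref and img circularly shifted by (dx, dy)."""
    shifted = np.roll(np.roll(img, dx, axis=0), dy, axis=1)
    return float(np.sum((ref - shifted) ** 2))

def diamond_search(ref, img, max_iter=50):
    """Estimate the translation aligning img to ref by diamond search.

    Repeats the large diamond pattern until the best candidate is the
    centre, then applies one small diamond refinement step.
    """
    cx, cy = 0, 0
    for _ in range(max_iter):
        costs = [(ssd(ref, img, cx + dx, cy + dy), (cx + dx, cy + dy)) for dx, dy in LDSP]
        best_cost, (bx, by) = min(costs)
        if (bx, by) == (cx, cy):
            break
        cx, cy = bx, by
    costs = [(ssd(ref, img, cx + dx, cy + dy), (cx + dx, cy + dy)) for dx, dy in SDSP]
    _, (cx, cy) = min(costs)
    return cx, cy
```

    The same routine can serve both registration steps: for the global alignment the search runs over the full 2D translation, while for the local alignment the search would be restricted to the vertical (depth) direction.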

    [0045] Once a set of aligned patches has been obtained in step 3, image recovery is carried out in step 4. After the image alignment, a set of aligned patches is obtained based on the global translation and the smoothed local translation. In order to recover an image, each aligned patch is vectorised to a row and stacked to form a matrix X with m rows. Each row of X has p·l elements from one vectorised patch, and X has one row per B-scan slice.
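    The construction of the image matrix can be illustrated as follows, assuming each aligned patch is a p×l array (p depth samples by l A-scan lines):

```python
import numpy as np

def build_image_matrix(patches):
    """Stack aligned patches into the image matrix X.

    Each patch (one per slice) is vectorised to a row of p*l elements,
    so X has one row per B-scan slice.
    """
    return np.vstack([p.reshape(1, -1) for p in patches])
```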

    [0046] Once the image matrix has been constructed, two methods of determining a reduced speckle noise image are possible.

    [0047] Method 1: Decompose X as the sum of a low rank part and a noise part, i.e.,


    X = L + N,  (1)

    [0048] where L is the low rank part with rank(L) ≤ r, and N is the noise. In an implementation r is set to 2.

    [0049] The above decomposition is solved by minimizing the decomposition error:

    [00002] min_L ‖X − L‖_F², s.t. rank(L) ≤ r  (2)

    [0050] The optimization problem above is solved by the bilateral random projections (BRP) algorithm.

    [0051] The algorithm is summarized as follows:

    TABLE-US-00001
    Algorithm 1 for solving L:
      Generate a random matrix A₁ ∈ R^(n×r);
      Compute L̃ = (XXᵀ)^q X;
      Y₁ = L̃A₁, A₂ = Y₁;
      Y₂ = L̃ᵀY₁, Y₁ = L̃Y₂;
      Calculate QR decompositions Y₁ = Q₁R₁, Y₂ = Q₂R₂;
      L = Q₁[R₁(A₂ᵀY₁)⁻¹R₂ᵀ]^(1/(2q+1))Q₂ᵀ.
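    The core of the bilateral random projections idea can be sketched as below. For clarity this simplified version omits the power scheme (XXᵀ)^q X and the QR factorisations of Algorithm 1, which improve numerical accuracy, and illustrates only the basic closed-form rank-r approximation L = Y₁(A₂ᵀY₁)⁻¹Y₂ᵀ.

```python
import numpy as np

def brp_low_rank(X, r, rng=None):
    """Rank-r approximation of X via basic bilateral random projections.

    A sketch under stated assumptions: no power scheme, no QR step.
    For a matrix of exact rank r this recovers X itself.
    """
    rng = np.random.default_rng(rng)
    n = X.shape[1]
    A1 = rng.standard_normal((n, r))   # right random projection
    Y1 = X @ A1                        # left sketch
    A2 = Y1
    Y2 = X.T @ A2                      # right sketch
    # closed-form combination: L = Y1 (A2^T Y1)^-1 Y2^T
    return Y1 @ np.linalg.solve(A2.T @ Y1, Y2.T)
```

    Only two matrix multiplications with thin random matrices are needed per approximation, which is why BRP is much faster than an SVD-based robust PCA on large image matrices.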

    [0052] Method 2: In the second method, we decompose X as the sum of a low rank part, a sparse part and a noise part, i.e.,


    X = L + S + N,  (3)

    [0053] where L is the low rank part with rank(L) ≤ r, S is the sparse part with card(S) ≤ k, k is a threshold which in one implementation is set as 30% of the total elements in the matrix S, and N is the noise.

    [0054] The above decomposition is solved by minimizing the decomposition error:

    [00003] min_{L,S} ‖X − L − S‖_F², s.t. rank(L) ≤ r, card(S) ≤ k.  (4)

    [0055] The optimization problem above is solved by alternately solving the following two sub-problems until convergence.

    [00004]
      L_t = argmin_{rank(L)≤r} ‖X − L − S_{t−1}‖_F²
      S_t = argmin_{card(S)≤k} ‖X − L_t − S‖_F²

    [0056] The algorithm is summarized as follows:

    TABLE-US-00002
    Algorithm 2 for solving L:
      Initialize t := 0; L₀ = X; S₀ = 0;
      While ‖X − L_t − S_t‖_F²/‖X‖_F² > ε, do
        t := t + 1;
        L̃ = [(X − S_{t−1})(X − S_{t−1})ᵀ]^q(X − S_{t−1});
        Generate a random matrix A₁ ∈ R^(n×r);
        Compute Y₁ = L̃A₁, A₂ = Y₁;
        Y₂ = L̃ᵀY₁, Y₁ = L̃Y₂;
        Calculate QR decompositions Y₁ = Q₁R₁, Y₂ = Q₂R₂;
        If rank(A₂ᵀY₁) < r then r = rank(A₂ᵀY₁), go to the first step; end if;
        L_t = Q₁[R₁(A₂ᵀY₁)⁻¹R₂ᵀ]^(1/(2q+1))Q₂ᵀ;
        S_t = P_Ω(X − L_t), where Ω is the nonzero subset of the first k largest entries of |X − L_t|, and P_Ω is the projection of a matrix onto Ω.
      End while
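    The alternation of Method 2 can be sketched as follows. For clarity this illustration substitutes a truncated SVD for the BRP-based rank-r projection of Algorithm 2 (an assumption made here, not the patented implementation); the sparsity step keeps the k largest-magnitude residual entries, as in the algorithm above.

```python
import numpy as np

def godec(X, r, k, tol=1e-7, max_iter=100):
    """Decompose X ~ L + S + N by alternating projections (sketch).

    L-step: rank-r projection of X - S (truncated SVD stands in for BRP).
    S-step: keep the k largest-magnitude entries of X - L, zero elsewhere.
    """
    S = np.zeros_like(X)
    norm_X = np.linalg.norm(X) ** 2
    for _ in range(max_iter):
        # rank-r projection of X - S
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r]
        # cardinality-k projection of the residual
        resid = X - L
        S = np.zeros_like(X)
        idx = np.unravel_index(np.argsort(np.abs(resid), axis=None)[-k:], X.shape)
        S[idx] = resid[idx]
        if np.linalg.norm(X - L - S) ** 2 / norm_X < tol:
            break
    return L, S
```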

    [0057] It can be seen that the first method is a special case of the second method when S=0.

    [0058] Once the speckle noise reduced image has been determined as described above, a post-process via filtering can be applied for de-noising in step 5. Step 5 may involve, for example, an enhanced Lee filter, an adaptive Wiener filter, etc. In this embodiment, an adaptive Wiener filter is used as an example. In addition, we also apply a threshold to assist the de-noising. Mathematically, we apply the following process:

    [00005]
      L(x,y) = T_low,   if L(x,y) < T_low,
      L(x,y) = L(x,y),  if T_low ≤ L(x,y) ≤ T_high,
      L(x,y) = T_high,  if L(x,y) > T_high,

    [0059] where T.sub.low and T.sub.high are two adaptively determined thresholds.
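    The post-filtering step can be sketched as below. The local adaptive Wiener filter shown is a generic numpy-only formulation, and since the source does not specify how T_low and T_high are adaptively determined, the clamp simply takes them as given parameters.

```python
import numpy as np

def local_mean(I, size=3):
    """Mean over a size x size window (periodic boundaries via np.roll)."""
    half = size // 2
    acc = np.zeros_like(I, dtype=float)
    for dx in range(-half, half + 1):
        for dy in range(-half, half + 1):
            acc += np.roll(np.roll(I, dx, axis=0), dy, axis=1)
    return acc / (size * size)

def adaptive_wiener(img, size=3, noise_var=None):
    """Local adaptive Wiener filter (sketch).

    out = mu + max(var - noise_var, 0) / var * (img - mu),
    with mu, var the local mean and variance; the noise variance is
    estimated as the mean local variance when not supplied.
    """
    I = img.astype(float)
    mu = local_mean(I, size)
    var = local_mean(I * I, size) - mu ** 2
    if noise_var is None:
        noise_var = float(np.mean(var))  # crude noise-variance estimate
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mu + gain * (I - mu)

def clamp(L, t_low, t_high):
    """Clip the recovered image to the thresholds T_low and T_high."""
    return np.clip(L, t_low, t_high)
```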

    [0060] FIG. 4 shows an example of image recovery according to method 1 described above. The original image X is decomposed into the recovered image L and the noise N.

    [0061] FIG. 5 shows an example of image recovery according to method 2 described above. The original image X is decomposed into the recovered image L a sparse part S and the noise N.

    [0062] To evaluate the image quality, the contrast to noise ratio (CNR) is computed, which measures the contrast between image features and noise.

    [00006] CNR = (1/R) Σ_{r=1..R} (μ_r − μ_b)/√(σ_r² + σ_b²),

    where μ_r and σ_r² are the mean and variance of the r-th region of interest, and μ_b and σ_b² are the mean and variance of a background region.
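    The CNR computation can be illustrated as follows; the region-of-interest masks and the background mask are assumed inputs, as the source does not specify how the regions are chosen.

```python
import numpy as np

def cnr(image, rois, background):
    """Contrast-to-noise ratio averaged over R regions of interest.

    CNR = (1/R) * sum_r (mu_r - mu_b) / sqrt(sigma_r^2 + sigma_b^2),
    where `rois` is a list of boolean masks and `background` is a
    boolean mask of a background region.
    """
    b = image[background]
    mu_b, var_b = b.mean(), b.var()
    vals = []
    for mask in rois:
        reg = image[mask]
        vals.append((reg.mean() - mu_b) / np.sqrt(reg.var() + var_b))
    return float(np.mean(vals))
```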

    [0063] A total of 20 subjects were tested. Table 1 shows the results of a baseline approach and of the two proposed methods using different noise models. In the first noise model we assume X=L+N, and the second model refers to line 24, page 10, where we assume X=L+S+N. Although the second model gives a higher CNR as shown in Table 1, we notice that the visual effects of the two models are similar.

    TABLE-US-00003
    TABLE 1: Objective measurement (CNR, image registration, 96 slices)
      Method              Image J    Motion
      Baseline (Topcon)   13.78      14.07
      X = L + N           14.42      14.90
      X = L + S + N       14.43      15.65

    [0064] The baseline approach relies on an estimation of the noise, while averaging relies on the noise cancelling itself out across slices.

    [0065] FIG. 6 shows the plot of the CNR with different numbers of slices. FIG. 6a shows results obtained using the Image J noise model. FIG. 6b shows results obtained using the Motion noise model. As we can observe, the proposed methods using matrix completion outperform the baseline method.

    [0066] In the embodiment described above, we solve the low rank recovery using bilateral random projection. It is faster than traditional algorithms such as robust principal component analysis (PCA). It takes about 5 s (algorithm 1) to 45 s (algorithm 2) to recover an image of 992×1024 pixels from 96 slices of 992×1024 pixels on a dual core 3.0 GHz PC with 3.25 GB RAM. Robust PCA requires 100 s for the same task for algorithm 1 and 15 minutes for algorithm 2.