PROCESSING OPTICAL COHERENCE TOMOGRAPHY SCANS

20220304620 · 2022-09-29

Abstract

A method of processing optical coherence tomography (OCT) scans through a subject's skin, the method comprising: receiving at least a plurality of OCT scans through the subject's skin (7), each scan representing an OCT signal in a slice through the subject's skin (7), with the plurality of OCT scans being spaced apart such that the OCT scans represent the OCT signal through a volume through the subject's skin; processing each OCT scan using a processor so as to detect at least one three dimensional structure in the skin; classifying, using a processor, the structure as belonging to a class of a plurality of classes of structures; and outputting at least one of the structure and the class.

Claims

1. A method of processing optical coherence tomography (OCT) scans through a subject's skin, the method comprising: receiving at least a plurality of OCT scans through the subject's skin, each scan representing an OCT signal in a slice through the subject's skin, with the plurality of OCT scans being spaced apart such that the OCT scans represent the OCT signal through a volume through the subject's skin; processing each OCT scan using a processor so as to detect at least one three dimensional structure in the skin; classifying, using a processor, the structure as belonging to a class of a plurality of classes of structures; and outputting at least one of the structure and the class.

2. The method of claim 1, comprising generating a confidence with which the structure can be assigned to the class of structures, which may be output in the step of outputting.

3. The method of claim 2, comprising generating a confidence with which the structure could be assigned to a plurality of the classes in the plurality of classes.

4. The method of claim 1, comprising segmenting the subject's skin within the OCT scan into different components, in which the segmentation is of the volume and is carried out in three dimensions.

5. (canceled)

6. The method of claim 4, in which the segmentation determines the position of the structure relative to the components of the subject's skin and the step of classifying uses the position of the structure relative to the components of the subject's skin to classify the structure as belonging to a class.

7. The method of claim 1, in which the step of classifying the structure, and optionally the step of detecting the structure, comprises using a machine learning algorithm.

8. The method of claim 1, in which the OCT scans comprise dynamic OCT scans, in that they comprise time-varying OCT scans through the subject's skin, and the method comprises determining the position of blood vessels in the dynamic OCT scans and using the position of the blood vessels relative to the structure in the step of classifying the structure.

9. (canceled)

10. The method of claim 1, comprising capturing at least one image of the surface of the subject's skin, typically using a camera, and using each image to classify the structure.

11. The method of claim 10, in which the images are captured using at least one of visible light and infrared light.

12. (canceled)

13. The method of claim 10, comprising determining a pigment of the skin, and using the pigment to classify the structure.

14. The method of claim 1, comprising the determination of at least one numerical parameter of the skin and using each numerical parameter in classifying the structure.

15. The method of claim 14, in which the at least one numerical parameter comprises at least one parameter selected from the group comprising: optical attenuation coefficient, surface topography/roughness, depth of blood vessel plexus, and blood vessel density.

16. The method of claim 1, comprising: processing the OCT data for each scan with depth to produce an indicative depth scan representative of the OCT signal at each depth through all of the scans; fitting a curve to the indicative depth scan, the curve comprising a first term which exponentially decays with respect to the depth and a second term which depends on the noise in the OCT signal; and calculating a compensated intensity for the uncompensated OCT signal at each point through each scan, the compensated intensity comprising a ratio of a term comprising a logarithm of the OCT signal to a term comprising the fitted curve.

17. The method of claim 16, comprising using the compensated intensity to classify the structure.

18. The method of claim 17, in which the compensated intensity is used together with the uncompensated intensity in classifying the structure.

19. The method of claim 1, in which the plurality of classes of structures are selected from the group comprising: basal cell carcinoma (BCC); squamous cell carcinoma (SCC); actinic keratosis (AK); seborrheic keratosis (SebK); benign nevus; dysplastic nevus; hair follicle; cysts; sebaceous gland; and melanoma.

20. The method of claim 1, comprising allowing a user to select which structures are in the plurality of classes.

21. The method of claim 1, in which the step of outputting comprises outputting any features of the OCT scans which contributed as part of the classification of the structure as belonging to the class.

22. The method of claim 21, comprising allowing a user to reject some of the features which contributed, and repeating the step of classifying the structure without using the rejected features.

23. An optical coherence tomography (OCT) image processing apparatus, comprising a processor, a display coupled to the processor and storage coupled to the processor, the storage carrying program instructions which, when executed on the processor, cause it to carry out the method of claim 1.

24-26. (canceled)

Description

[0041] There now follows, by way of example only, a description of an embodiment of the invention, described with reference to the accompanying drawings, in which:

[0042] FIG. 1a shows a selection of OCT scans through a stack captured with an OCT apparatus in accordance with an embodiment of the invention;

[0043] FIG. 1b shows a 3D projection of the stack of images from which the images of FIG. 1a were selected;

[0044] FIGS. 2a and 2b show the same selection of scans and 3D projection as FIGS. 1a and 1b, in which depth compensation has been applied by the OCT apparatus;

[0045] FIGS. 3a and 3b show the same selection of scans and 3D projection as FIGS. 1a and 1b, in which a binary morphology operation has been applied; and

[0046] FIG. 4 shows schematically the optical coherence tomography (OCT) apparatus used in FIGS. 1a to 3b.

[0047] An optical coherence tomography (OCT) apparatus in accordance with an embodiment of the invention is shown in FIG. 4 of the accompanying drawings. This comprises a computer 1, having a processor 2 and storage 3 (such as a mass storage device or random access memory) coupled to the processor 2. The storage 3 contains data and processor instructions which cause the processor 2 to act as is described below. The computer 1 can be any suitable model; typically a personal computer running an operating system such as Microsoft® Windows® or Apple® Mac OS X® can be used. The computer 1 is also provided with a display 4 controlled by the processor 2 on which any desired graphics can be displayed.

[0048] The apparatus further comprises an OCT interferometer 5 and associated probe 6. The interferometer 5 interferes light reflected from sample 7 (here, a subject's skin) through probe 6 with light passed along a reference path to generate interferograms. These are detected in the interferometer 5; the measured signal is then passed to the computer 1 for processing. Example embodiments of suitable OCT apparatus can be found in the PCT patent application published as WO2006/054116 or in the VivoSight® apparatus available from Michelson Diagnostics of Maidstone, Kent, United Kingdom.

[0049] Such OCT apparatus typically generate multiple B-scans: that is, scans taken perpendicularly through the skin 7. The result of analysis of each interferogram is a bitmap in which the width of the image corresponds to a direction generally parallel to the skin surface and the height corresponds to the depth from the sensor into the skin. By taking many parallel scans, a three-dimensional stack of bitmaps can be built up.

[0050] The processor can then be used to process the OCT scans taken. The probe is used to capture scans of the subject's skin and the data is transmitted to the image processor unit in the form of a ‘stack’ of B-scans. Each B-scan is an image of the skin at a small perpendicular displacement from the adjacent B-scan. Typically and preferably, the stack comprises 120 B-scan images, each of which is 6 mm wide by 2 mm deep, with a displacement between B-scans of 50 microns.
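The stack geometry described above can be sketched as a three-dimensional array. The pixel pitches below are illustrative assumptions only (the text specifies the scan extent and spacing, not the pixel resolution):

```python
import numpy as np

# Stack geometry from the example: 120 B-scans, each 6 mm wide by
# 2 mm deep, with 50 microns between adjacent B-scans.
N_SCANS = 120
WIDTH_MM, DEPTH_MM = 6.0, 2.0
SPACING_UM = 50.0

# Assumed (illustrative) pixel pitches: 7.5 microns lateral, 4 microns axial.
width_px = int(WIDTH_MM * 1000 / 7.5)   # pixels across each B-scan
depth_px = int(DEPTH_MM * 1000 / 4.0)   # pixels into the skin
stack = np.zeros((N_SCANS, depth_px, width_px), dtype=np.float32)

# Physical extent of the volume along the slow (stack) axis, in mm:
volume_length_mm = N_SCANS * SPACING_UM / 1000.0
```

Each B-scan is then `stack[i]`, a depth-by-width image, and the full array represents the scanned volume.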

[0051] FIGS. 1a and 1b show representations of such an image stack. For clarity, only 6 of the 120 images are shown with spacing of 400 microns between them. In the sequence of images shown in FIG. 1a can be seen a tumour (the large ellipsoidal dark grey object) and also a large dilated blood vessel (smaller ellipsoidal dark grey object). In the sequence of images it is notable that the blood vessel appears in successive images at different x-positions, whereas the tumour does not. Those skilled in the art will be aware that in any individual image, it is not easy to distinguish the vessel ellipsoid from a tumour ellipsoid, based on shape alone; however in the sequence of displaced images the three-dimensional shape of the vessel can be discerned as a linear structure that is quite different from the irregular tumour shape.

[0052] In FIG. 1b, there is shown a 3D projection of the stack, with the vessel and tumour indicated by arrows.

[0053] These raw images are first processed to compensate for attenuation of the OCT signal with depth and then filtered to reduce the amount of noise and image clutter. The optical attenuation compensation method is disclosed in the PCT application published as WO2017/182816. The filtering uses a median filter, which is well known to those skilled in the art. The resulting processed images are shown in FIGS. 2a and 2b, which respectively show a selection of images from the stack and a 3D projection of the stack.
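A minimal sketch of this pre-processing step follows, combining the exponential-plus-noise-floor fit of claim 16 with a median filter. The exact formulation of the compensation is in WO2017/182816; forming the compensated intensity as the log of the signal over the log of the fitted curve is an assumption about how the claimed ratio is constructed:

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.optimize import curve_fit

def depth_compensate(stack):
    """Compensate OCT attenuation with depth (sketch, assumptions noted above).

    stack: (n_scans, depth, width) array of linear OCT intensities.
    """
    # Indicative depth scan: mean OCT signal at each depth over all scans.
    indicative = stack.mean(axis=(0, 2))
    z = np.arange(indicative.size, dtype=float)

    # First term decays exponentially with depth; second is a noise floor.
    def model(z, a, mu, noise):
        return a * np.exp(-mu * z) + noise

    p0 = (indicative.max(), 1.0 / indicative.size, indicative.min())
    params, _ = curve_fit(model, z, indicative, p0=p0, maxfev=10000)
    fitted = model(z, *params)

    # Compensated intensity: ratio of log signal to a term from the fit.
    eps = 1e-9
    return np.log(stack + eps) / np.log(fitted + eps)[None, :, None]

def denoise(stack, size=3):
    """Median filter, as named in the text, applied across the stack."""
    return median_filter(stack, size=size)
```

Applying `depth_compensate` and then `denoise` to the raw stack yields images of the kind shown in FIGS. 2a and 2b.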

[0054] The next step carried out is a binary threshold operation and further denoising of the image by performing an ‘open’ and ‘close’ binary morphology operation on the image stack. The results are shown in FIGS. 3a and 3b, which again respectively show a selection of images from the stack and a 3D projection of the stack.
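The threshold and 'open'/'close' morphology step can be sketched as follows; the threshold value and the 3x3x3 structuring element are illustrative assumptions, as the text does not specify them:

```python
import numpy as np
from scipy import ndimage

def threshold_and_clean(stack, threshold):
    """Binary threshold, then 'open' and 'close' morphology in 3-D.

    A 3x3x3 structuring element treats the stack as a volume, so the
    operations act across adjacent B-scans as well as within them.
    """
    binary = stack > threshold
    structure = np.ones((3, 3, 3), dtype=bool)
    opened = ndimage.binary_opening(binary, structure=structure)   # removes speckle
    closed = ndimage.binary_closing(opened, structure=structure)   # fills small holes
    return closed
```

The opening removes isolated bright voxels (noise) while the closing fills small gaps inside genuine structures, producing the cleaned stacks of FIGS. 3a and 3b.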

[0055] This image stack is then processed to segment and extract all of the individual 3-dimensional objects present in the stack that have contiguous interconnected regions. The features of each such object are extracted, namely the horizontal length and width, vertical size, total volume, surface area, depth of the object centroid, ellipticity and density. This is not an exhaustive list and others may also be found to be useful.
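The segmentation and feature-extraction step can be sketched with connected-component labelling. Only a subset of the listed features is computed here (volume, bounding-box extents, centroid depth), and the voxel dimensions are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def extract_object_features(binary_stack):
    """Label contiguous 3-D objects and compute a few example features.

    binary_stack: (n_scans, depth, width) boolean array from the
    thresholding/morphology step.
    """
    labels, n = ndimage.label(binary_stack)   # 6-connected components in 3-D
    features = []
    for idx, obj_slice in enumerate(ndimage.find_objects(labels), start=1):
        mask = labels[obj_slice] == idx
        dy = obj_slice[0].stop - obj_slice[0].start   # length along stack axis (px)
        dz = obj_slice[1].stop - obj_slice[1].start   # vertical size (px)
        dx = obj_slice[2].stop - obj_slice[2].start   # width within B-scan (px)
        centroid = ndimage.center_of_mass(mask)
        features.append({
            "volume_vox": int(mask.sum()),
            "extent_px": (dy, dz, dx),
            "centroid_depth_px": centroid[1] + obj_slice[1].start,
        })
    return features
```

Each returned dictionary is one candidate object; surface area, ellipticity and density would be further entries computed from the same mask.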

[0056] These object features are fed into a Random Forest machine learning algorithm, or alternatively a neural network, which has been taught with many such examples of scans of skin with benign or malignant growths. It will be appreciated that the vessel in this example has completely different feature results from the tumour and so it is easy to unambiguously distinguish the vessel from the tumour. The vessel object is elongated in 3-dimensional space and located deep in the dermis, whereas the tumour is irregularly shaped in three dimensions and located at a shallow depth in the dermis. By placing the vessel object into a class of deep, highly elongated objects, and the tumour object into a class of shallow, irregular objects, the classifier is able to correctly identify the former as a benign vessel and the latter as a malignant basal cell carcinoma.
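The classification step described above can be sketched with a Random Forest. The training data below is synthetic and purely illustrative, standing in for the "many such examples" of benign and malignant growths mentioned in the text; the two features used (elongation and centroid depth) are a simplified stand-in for the full feature list:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical training set. Class 0: deep, highly elongated objects
# (vessels). Class 1: shallow, irregular objects (tumours).
# Feature vector per object: [elongation ratio, centroid depth in mm].
vessels = np.column_stack([rng.uniform(4, 10, 50), rng.uniform(0.8, 1.8, 50)])
tumours = np.column_stack([rng.uniform(1, 2, 50), rng.uniform(0.1, 0.5, 50)])
X = np.vstack([vessels, tumours])
y = np.array([0] * 50 + [1] * 50)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A deep, elongated object and a shallow, compact one:
pred = clf.predict([[7.0, 1.2], [1.5, 0.3]])
```

`clf.predict_proba` would additionally yield the per-class confidences referred to in claims 2 and 3.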