SYSTEM, METHOD AND COMPUTER-ACCESSIBLE MEDIUM FOR CLASSIFYING BREAST TISSUE USING A CONVOLUTIONAL NEURAL NETWORK
20200364855 · 2020-11-19
Inventors
Cpc classification
G16H50/20
PHYSICS
A61B5/7264
HUMAN NECESSITIES
G16H50/30
PHYSICS
G06F18/21
PHYSICS
G06T3/40
PHYSICS
G06T2207/10101
PHYSICS
A61B2576/02
HUMAN NECESSITIES
G06T2207/10096
PHYSICS
International classification
A61B5/00
HUMAN NECESSITIES
G06T3/40
PHYSICS
Abstract
An exemplary system, method and computer-accessible medium for classifying a breast tissue(s) a patient(s) can include, for example, receiving an image(s) of an internal portion(s) of a breast of the patient(s), and automatically classifying the breast tissue(s) of the breast by applying a neural network(s) to the image(s). The automatic classification can include a classification as to whether the breast tissue(s) is atypical ductal hyperplasia or ductal carcinoma. The automatic classification can include a classification as to whether the breast tissue(s) is a cancerous tissue or a non-cancerous tissue. The image(s) can be a mammographic image or an optical coherence tomography image.
Claims
1. A non-transitory computer-accessible medium having stored thereon computer-executable instructions for classifying at least one breast tissue of at least one patient, wherein, when a computer arrangement executes the instructions, the computer arrangement is configured to perform procedures comprising: receiving at least one image of at least one internal portion of a breast of the at least one patient; and automatically classifying the at least one breast tissue of the breast by applying at least one neural network to the at least one image.
2. The computer-accessible medium of claim 1, wherein the automatic classification includes a classification as to whether the at least one breast tissue is at least one of atypical ductal hyperplasia or ductal carcinoma.
3. The computer-accessible medium of claim 1, wherein the automatic classification includes a classification as to whether the at least one breast tissue is a cancerous tissue or a non-cancerous tissue.
4. The computer-accessible medium of claim 1, wherein the at least one image is a mammographic image.
5. The computer-accessible medium of claim 1, wherein the at least one image is an optical coherence tomography image.
6. The computer-accessible medium of claim 1, wherein the neural network is a convolutional neural network (CNN).
7. The computer-accessible medium of claim 6, wherein the CNN includes a plurality of layers.
8. The computer-accessible medium of claim 7, wherein the layers include (i) a plurality of residual layers, (ii) a plurality of inception layers, (iii) at least one fully connected layer, and (iv) at least one linear layer.
9. The computer-accessible medium of claim 8, wherein (i) the residual layers include at least four residual layers, (ii) the inception layers include at least four inception layers, (iii) the at least one fully connected layer includes at least sixteen neurons, and (iv) the at least one linear layer includes at least eight neurons.
10. The computer-accessible medium of claim 7, wherein the layers include (i) a plurality of combined convolutional and rectified linear unit (ReLu) layers, (ii) a plurality of partially strided convolutional layers, (iii) a plurality of ReLu layers, and (iv) a plurality of fully connected layers.
11. The computer-accessible medium of claim 10, wherein (i) the combined convolutional and ReLu layers include at least three combined convolutional and ReLu layers, (ii) the partially strided convolutional layers include at least three partially strided convolutional layers, (iii) the ReLu layers include at least three ReLu layers, and (iv) the fully connected layers include at least 15 fully connected layers.
12. The computer-accessible medium of claim 1, wherein the computer arrangement is further configured to determine at least one score based on the at least one image using the at least one neural network.
13. The computer-accessible medium of claim 12, wherein the computer arrangement is configured to automatically classify the breast tissue based on the score.
14. The computer-accessible medium of claim 13, wherein the computer arrangement is configured to automatically classify the breast tissue based on the score being above 0.5.
15. The computer-accessible medium of claim 1, wherein the at least one image illustrates at least one excised breast tissue.
16. The computer-accessible medium of claim 1, wherein the computer arrangement is further configured to segment and resize the at least one image prior to classifying the breast tissue.
17. The computer-accessible medium of claim 1, wherein the computer arrangement is further configured to perform a batch normalization on the at least one image.
18. The computer-accessible medium of claim 17, wherein the computer arrangement is configured to perform the batch normalization so as to limit a drift of layer activations.
19. A method for classifying at least one breast tissue of at least one patient, comprising: receiving at least one image of at least one internal portion of a breast of the at least one patient; and using a computer arrangement, classifying the at least one breast tissue of the breast by applying at least one neural network to the at least one image.
20-36. (canceled)
37. A system for classifying at least one breast tissue of at least one patient, comprising: a computer hardware arrangement configured to: receive at least one image of at least one internal portion of a breast of the at least one patient; and classify the at least one breast tissue of the breast by applying at least one neural network to the at least one image.
38-54. (canceled)
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] Further objects, features and advantages of the present disclosure will become apparent from the following detailed description taken in conjunction with the accompanying Figures showing illustrative embodiments of the present disclosure.
[0044] Throughout the drawings, the same reference numerals and characters, unless otherwise stated, are used to denote like features, elements, components or portions of the illustrated embodiments. Moreover, while the present disclosure will now be described in detail with reference to the figures, it is done so in connection with the illustrative embodiments and is not limited by the particular embodiments illustrated in the figures and the appended claims.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
[0045] The exemplary system, method, and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can include the classification of breast tissue (e.g., as a tissue type) using various exemplary imaging modalities. For example, the exemplary system, method, and computer-accessible medium is described below using mammographic images and/or OCT images. However, the exemplary system, method, and computer-accessible medium can also be utilized on other suitable imaging modalities, including, but not limited to, magnetic resonance imaging, positron emission tomography, ultrasound, and computed tomography.
Exemplary Distinguishing Atypical Ductal Hyperplasia from Ductal Carcinoma In Situ
[0046] In order to distinguish atypical ductal hyperplasia from ductal carcinoma in situ, two groups were defined. A pure ADH group included 67 patients who presented with suspicious calcifications without an associated mass on mammogram; had two craniocaudal (CC) and mediolateral/lateromedial (ML/LM) magnification views available; and underwent stereotactic guided core biopsy yielding ADH and subsequent surgical excision yielding ADH without upgrade to DCIS. A DCIS group included 82 patients who presented with suspicious calcifications without an associated mass on mammogram; had two magnification views available; and underwent either (i) stereotactic guided core biopsy yielding ADH with subsequent surgical excision yielding upgrade to DCIS (34 patients), (ii) stereotactic guided core biopsy yielding ADH and DCIS (21 patients), or (iii) stereotactic guided core biopsy yielding DCIS with subsequent surgical excision yielding DCIS without invasion (27 patients).
[0047] Clinical pathologic data were collected including age, size and pathology result. Statistical analysis was performed using the IBM SPSS software. Descriptive statistics were used to summarize clinical, imaging, and pathologic parameters. Mammograms were performed on dedicated mammography units (Senographe Essential, GE Healthcare). The views obtained consisted of the standard mediolateral oblique (MLO) and CC views. Additional magnification views were obtained of the calcifications in CC and ML/LM projections.
Exemplary Data Preparation
[0048] The ground truth label was extracted from the original pathology report and the data was split into ADH and DCIS groups. Then, the cases were randomly separated into training/validation set, which included 80% of the data, and a test set, which included 20% of the data. The training/validation set was used to develop the exemplary network. The test set, which was set aside prior to training, was used for testing the diagnostic performance of the exemplary procedure.
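The 80%/20% separation of cases into a training/validation set and a held-out test set described above can be sketched as follows. This is an illustrative sketch only; the function name `split_cases`, the fixed seed, and the rounding behavior are assumptions, not part of the disclosure.

```python
import random

def split_cases(case_ids, train_val_frac=0.8, seed=42):
    """Randomly split patient cases into a training/validation set
    (80% of the data) and a held-out test set (20% of the data)."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = case_ids[:]
    rng.shuffle(shuffled)
    n_train_val = int(round(train_val_frac * len(shuffled)))
    return shuffled[:n_train_val], shuffled[n_train_val:]

# 149 patients in the study (67 ADH + 82 DCIS)
train_val, test = split_cases(list(range(149)))
```

The test set would be set aside before any training, exactly as in the paragraph above, and used only for the final diagnostic-performance evaluation.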
Exemplary Data Augmentation and Segregation
[0049] The magnification views of each patient's mammogram were loaded into a 3D segmentation program. Segmentations encompassing the regions of the magnification view which contained calcifications were manually extracted by a fellowship trained breast radiologist with 8 years of experience. Each image was scaled in size based on the radius of the segmentations and resized to fit a 128×128 pixel bounding box. Exemplary atypical ductal hyperplasia input images are shown in
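The resize-to-bounding-box step can be illustrated with a minimal nearest-neighbour sketch. The helper `fit_to_bounding_box` is hypothetical; a production pipeline would likely use an interpolating resampler rather than this sampling-grid shortcut.

```python
import numpy as np

def fit_to_bounding_box(patch, size=128):
    """Nearest-neighbour resize of a 2-D image patch so that it fits a
    size-by-size pixel bounding box (a stand-in for the scaling step)."""
    h, w = patch.shape
    # Map each output pixel back to its nearest source pixel.
    rows = (np.arange(size) * h / size).astype(int)
    cols = (np.arange(size) * w / size).astype(int)
    return patch[np.ix_(rows, cols)]

# A hypothetical segmented calcification patch of arbitrary size
patch = np.random.rand(300, 220)
resized = fit_to_bounding_box(patch)  # shape (128, 128)
```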
Exemplary Network Architecture
[0050]
[0051] For example, a fully connected layer 320 with 16 neurons can be used after the 13th hidden layer followed by a linear layer 325 with 8 neurons. A final Softmax output layer 330 with two classes was inserted as the last layer. Training was implemented using the Adam optimizer (see, e.g., Reference 15), combined with the Nesterov accelerated gradient. (See, e.g., References 16 and 17). Parameters were initialized using a suitable heuristic. (See, e.g., Reference 18). L2 regularization was implemented to prevent over-fitting of data by limiting the squared magnitude of the kernel weights. Dropout (e.g., 25% randomly) was also employed to prevent over-fitting by limiting unit co-adaptation. (See, e.g., Reference 19). Batch normalization was utilized to improve network training speed and regularization performance by reducing internal covariate shift. (See, e.g., Reference 20).
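The classification head described above (a 16-neuron fully connected layer 320, an 8-neuron linear layer 325, and a two-class softmax output layer 330) can be sketched as a forward pass. The random weights and the 64-dimensional input feature vector are placeholders for trained parameters, and the training machinery (Adam with Nesterov accelerated gradient, dropout, L2 regularization, batch normalization) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical weights standing in for trained parameters.
W1 = rng.normal(scale=0.1, size=(64, 16))  # features -> 16-neuron FC layer
W2 = rng.normal(scale=0.1, size=(16, 8))   # -> 8-neuron linear layer
W3 = rng.normal(scale=0.1, size=(8, 2))    # -> 2-class softmax output

def head_forward(features):
    h = relu(features @ W1)  # fully connected layer with 16 neurons
    h = h @ W2               # linear layer with 8 neurons (no activation)
    return softmax(h @ W3)   # final two-class softmax output

probs = head_forward(rng.normal(size=(4, 64)))  # 4 feature vectors in
```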
[0052] Softmax with cross entropy hinge loss was utilized as an exemplary objective function of the network to provide a more intuitive output of normalized class probabilities. A class sensitive cost function penalizing incorrect classification of the underrepresented class was utilized. A final softmax score threshold of 0.5, applied to the average of the raw logits from the ML and CC views, was used for two-class classification. Area under curve (AUC) was employed as the performance metric. Sensitivity, specificity and accuracy were also calculated as secondary performance metrics.
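The decision rule above (average the raw logits from the ML and CC views, apply softmax, and threshold the score at 0.5) can be sketched as follows. The class ordering [ADH, DCIS] and the function name are assumptions.

```python
import numpy as np

def classify_from_views(logits_ml, logits_cc, threshold=0.5):
    """Average the raw two-class logits from the ML and CC magnification
    views, apply softmax, and call DCIS when the DCIS-class probability
    exceeds the threshold (class order [ADH, DCIS] assumed)."""
    z = (np.asarray(logits_ml) + np.asarray(logits_cc)) / 2.0
    e = np.exp(z - z.max())          # numerically stable softmax
    p = e / e.sum()
    return ("DCIS" if p[1] > threshold else "ADH"), p

label, p = classify_from_views([0.2, 1.4], [0.0, 1.0])
```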
[0053] Visualization of network predictions was performed using gradient-weighted class activation mapping (e.g., Grad-CAM). (See, e.g., Reference 21). Each Grad-CAM map was generated by the exemplary prediction model along with every input image. Thus, the salient region of the averaged Grad-CAM map can provide information as to where these features come from when the prediction model makes classification decisions.
Exemplary Results
[0054] The average age of patients in the ADH group was 55.7 years (SD, 12.9 years), and the average age of patients in the DCIS group was 62.1 years (SD, 11.3 years); the difference in age between the two groups was significant (p=0.006). The average mammographic extent of calcifications was 1.02 cm (SD, 1.19 cm) for ADH and 1.27 cm (SD, 0.9 cm) for DCIS; the difference in size between the two groups was not significant (p=0.13).
[0055] All of the patients underwent stereotactic guided core needle biopsy with a 9-gauge needle. ADH group patients had an average of 9.8 core samples obtained per biopsy (SD, 2.5 cores). DCIS group patients had an average of 8.9 core samples obtained per biopsy (SD, 2.9 cores). The number of cores between the two groups was not significantly different (p=0.14). DCIS grades were as follows: low/intermediate grade (48) and high grade (34).
[0056] In total, 298 unique images representing ML and CC magnification views of calcifications from 149 patients were used for the exemplary CNN procedure (134 images from 67 patients in the ADH group and 164 images from 82 patients in the DCIS group). The network was trained for 300 epochs. For the test set, the area under the receiver operating characteristic curve (AUC) was 0.86 (95% CI ±0.03). Aggregate sensitivity and specificity were 84.6% (95% CI ±4%) and 88.2% (95% CI ±3%), respectively. Diagnostic accuracy was measured at 86.7% (95% CI ±2.9%).
[0057]
Exemplary Discussion
[0058] The exemplary results indicate that the exemplary system, method, and computer-accessible medium can distinguish ADH from DCIS using an exemplary CNN, which yielded 86.7% diagnostic accuracy using a mammographic image data set.
[0059] Prior groups have identified various clinical, mammographic and/or histologic features to predict occult malignancy. (See, e.g., References 1, 4, and 7). In a cohort of 140 patients, it was found that removal of less than 95% of calcifications in the absence of an associated mass, involvement of 2 or more terminal ductal lobular units, and the presence of necrosis or significant cytologic atypia all predicted malignancy. (See, e.g., Reference 1). Using suitable criteria, a cohort of 125 patients with low-risk ADH was selected and observed. (See, e.g., References 1 and 5). At a median follow-up of 3 years, breast cancer events were identified in only 5.6% of the observed group, compared, for example, to 12% in a separate intervention group.
[0060] The largest retrospective study, conducted over a nine-year period at a single institution on 13,488 consecutive biopsies yielding 422 biopsies with ADH in 415 patients, found that ipsilateral breast symptoms, mammographic lesions other than microcalcifications alone, the use of 14 G core-needle biopsy, the presence of severe ADH, co-diagnosis of papilloma, and diagnosis of ADH by a pathologist with lower volume independently predicted malignancy upgrade. They found that even after selection for a low-risk cohort of women, the malignant upgrade frequency at the time of surgery was unacceptably high (17.2% versus 31.3% in all-comers). (See, e.g., Reference 7). Despite the large number of studies on this topic, the results are variable, and to date there is no consensus on the selection of low-risk women who can safely undergo observation after a biopsy diagnosis of ADH.
[0061] The diagnosis of ADH remains a diagnostic challenge among pathologists, and significant inter-observer variability has been reported. (See, e.g., Reference 11). CNNs have been used in the histopathologic classification of breast biopsy lesions to increase the accuracy and efficiency of diagnosis, and have reported accuracy rates of >80% using relatively small data sets. (See, e.g., References 22 and 23). However, pathology specimens can be limited by the amount of tissue obtained either by core biopsy or surgery.
[0062] Other breast imaging modalities, such as MRI, can have a potential role in distinguishing ADH from malignancy. A 2017 study showed that patients without suspicious enhancement on breast MRI can be followed rather than undergo surgical excision, given the high negative predictive value. (See, e.g., Reference 24). Despite the potential of breast MRI for this assessment, patients diagnosed with atypia typically do not undergo routine breast MRI; as such, the study by Tsuchiya included only 17 patients. In addition, MRI is generally performed after the biopsy, which can limit its interpretive value due to post-biopsy changes as well as significant removal of the targeted lesion.
[0063] In contrast to prior methods, the exemplary system, method, and computer-accessible medium can be used on patients who have mammographic images, which can be obtained prior to the biopsy, to facilitate comprehensive analysis. The exemplary system, method, and computer-accessible medium can utilize a CNN to classify breast cancer lesions based on a mammographic image data set, which further demonstrates the significant potential of radiomics, with the utilization of CNNs, to change clinical practice. The exemplary system, method, and computer-accessible medium can distinguish ADH from DCIS with 86.7% accuracy using a mammographic dataset. Given the widespread use of screening mammograms, the exemplary system, method, and computer-accessible medium can be used to determine patient management such that patients predicted to have pure ADH lesions can undergo imaging surveillance rather than surgery.
Exemplary Breast Tissue Classification in Optical Coherence Tomography
Exemplary Tissue Collection
[0064] De-identified human breast tissues from mastectomy and breast reduction specimens were excised from patients. The specimens included both normal and non-neoplastic tissues, and were not needed for diagnosis as defined by the Department of Pathology. The specimens were imaged within 24 hours of surgical excision. Average specimen size was 1.2 cm².
Exemplary Imaging Protocol
[0065] A custom in-house ultrahigh-resolution OCT (UHR-OCT) system centered at 840 nm, with an axial resolution of 2.7 μm and a lateral resolution of 5.5 μm measured in air, was utilized. (See, e.g., Reference 39). The OCT volume included 800 by 800 pixels in the lateral directions, covering a 3 mm by 3 mm area, and 1024 pixels in the axial direction, covering 1.78 mm in depth. All specimens were imaged fresh at room temperature.
Exemplary Histology
[0066] After imaging, tissue specimens were placed in 10% formalin for 24 hours, and then transferred to 70% ethanol for histology processing. Specimen blocks were embedded and sliced along the OCT imaging direction. Multiple 5 μm-thick slices were taken from a single specimen block, with 100 μm discarded between levels, and each slide was stained with Hematoxylin and eosin (H&E). The processed slides were digitized at 40× magnification. ImageScope software was used to view and annotate histology images. Histology findings were evaluated by a pathologist with more than 20 years of experience. The dataset of specimens is listed in Table 1 below.
TABLE 1. Distribution of tissue types in the dataset. 46 specimens from 23 patients were imaged with a custom UHR-OCT system; 17 specimens were normal tissue, and 29 specimens were cancer specimens.

  Characteristic                        Value (n)
  Number of patients                    23
  Number of specimens                   46
  Specimen histological confirmations
    Normal                              17
    Cancer                              29
      IDC                               24
      DCIS                              3
Exemplary Image Labeling
[0067]
Exemplary Deep Learning Procedure
[0068] The exemplary deep learning procedure utilized a customized hybrid 2D/1D CNN to map each 2D B-scan to a 1D label vector, which was derived from manual annotation, with a single tissue label class assigned to each A-line in the B-scan.
Exemplary Training
[0069] Before training, a two-step pre-processing procedure was used. First, the original 3-dimensional image volumes were resampled such that each single slice was 256×200 pixels, and a simple z-score transformation ((x − mean)/S.D. of volume) was used to normalize each volume. Second, parameters were initialized using a suitable heuristic. (See, e.g., Reference 57).
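The z-score normalization step above can be sketched directly; the function name `zscore_volume` is an assumption, but the transform itself is as stated (subtract the volume mean, divide by the volume standard deviation).

```python
import numpy as np

def zscore_volume(vol):
    """Normalize an OCT volume with the simple z-score transform:
    (x - mean of volume) / S.D. of volume."""
    vol = np.asarray(vol, dtype=float)
    return (vol - vol.mean()) / vol.std()

# A hypothetical resampled volume: 8 slices of 256x200 pixels
vol = np.random.default_rng(1).normal(loc=5.0, scale=3.0, size=(8, 256, 200))
norm = zscore_volume(vol)  # zero mean, unit standard deviation
```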
[0070] Training datasets were generated from the exemplary OCT images with the corresponding labeling. Exemplary training was implemented using an Adam optimizer, a procedure for first-order gradient-based optimization of stochastic objective functions (see, e.g., Reference 58), and a standard stochastic gradient descent procedure with Nesterov momentum. (See, e.g., Reference 59). L2 regularization was implemented, e.g., to prevent over-fitting of data by limiting the squared magnitude of the kernel weights. To account for training dynamics, the learning rate was annealed and the mini-batch size was increased whenever the training loss plateaued. A normalized gradient procedure was utilized to facilitate locally adaptive learning rates that can adjust according to changes in the input signal. (See, e.g., Reference 60).
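The plateau-triggered schedule described above (anneal the learning rate and grow the mini-batch when the training loss stops improving) can be sketched as a small scheduler. The specific factors (halve the learning rate, double the batch size) and the patience value are assumptions for illustration; the disclosure does not specify them.

```python
class PlateauSchedule:
    """Toy schedule: when the loss fails to improve for `patience`
    consecutive checks, anneal the learning rate and increase the
    mini-batch size (factors here are illustrative assumptions)."""

    def __init__(self, lr=1e-3, batch_size=32, patience=3,
                 lr_factor=0.5, batch_factor=2, max_batch=512):
        self.lr, self.batch_size = lr, batch_size
        self.patience, self.lr_factor = patience, lr_factor
        self.batch_factor, self.max_batch = batch_factor, max_batch
        self.best = float("inf")
        self.bad_checks = 0

    def step(self, loss):
        if loss < self.best - 1e-6:      # loss improved: reset counter
            self.best, self.bad_checks = loss, 0
        else:                            # loss plateaued
            self.bad_checks += 1
            if self.bad_checks >= self.patience:
                self.lr *= self.lr_factor
                self.batch_size = min(self.batch_size * self.batch_factor,
                                      self.max_batch)
                self.bad_checks = 0
        return self.lr, self.batch_size

sched = PlateauSchedule()
for loss in [1.0, 0.9, 0.9, 0.9, 0.9]:  # loss plateaus after the 2nd step
    lr, bs = sched.step(loss)
```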
Exemplary Validation and Visualization
[0071] The exemplary classification procedure was executed on the validation set and evaluated for accuracy for each tissue type. Given the relatively small number of image volumes, but the relatively large number of B-scans per volume, each volume was divided into multiple 200-slice blocks for training and validation. Five-fold cross-validation was used to estimate accuracy over the entire dataset. Correlation with manual annotations was calculated using a Dice similarity coefficient.
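The Dice similarity coefficient used above to correlate the CNN output with the manual annotations is the standard 2|A∩B|/(|A|+|B|) measure; a minimal sketch for binary per-A-line masks follows (the empty-mask convention is an assumption).

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between a predicted binary label mask
    and the manual annotation: 2|A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

d = dice([1, 1, 0, 0], [1, 0, 0, 0])  # overlap of 1 out of 3 labeled
```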
Exemplary Results
[0072] Manual segmentations of 29,440,000 A-lines from 36,800 OCT B-scans in 46 volumetric datasets were used for training and validation. The annotated images were randomly divided into a training set, which included 80% of the images, and a validation set, which included 20% of the images, and then five-fold cross-validation was performed to ensure that all data was tested in the validation dataset. In each exemplary experiment, 23,552,000 A-lines were used as the training set, and the remaining 5,888,000 A-lines were used for cross-validation. Each B-scan was divided into chunks of 200 A-lines, and each chunk was then randomly divided into training and validation. The procedure was trained over 25,000 iterations, which took about 60 minutes.
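The chunking of each B-scan into 200-A-line blocks can be sketched as a simple array split; the helper name and the assumption that the B-scan width divides evenly are illustrative.

```python
import numpy as np

def chunk_bscan(bscan, chunk=200):
    """Split one B-scan (depth x A-lines) into consecutive chunks of
    `chunk` A-lines each along the lateral axis."""
    depth, n_alines = bscan.shape
    assert n_alines % chunk == 0, "width must be a multiple of the chunk"
    return np.split(bscan, n_alines // chunk, axis=1)

# One UHR-OCT B-scan: 1024 axial pixels by 800 A-lines
bscan = np.zeros((1024, 800))
chunks = chunk_bscan(bscan)  # four 200-A-line chunks
```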
[0073] Four different breast tissue structures were classified using the exemplary CNN. Examples of these features of breast tissue in OCT images and corresponding H&E histology are illustrated in the exemplary images shown in
[0074] The performance of the exemplary procedure was evaluated using Dice coefficients, and the convergence of the procedure was plotted over multiple iterations. The Dice coefficient is a measure of similarity between two samples, and is commonly used to assess the performance of image segmentation procedures. The exemplary images were manually annotated by the OCT readers and considered to be the ground truth; the exemplary CNN was then used to classify the images, and the similarity between the annotations was calculated for the entire dataset. The mean five-fold validation Dice coefficient was highest for IDC (e.g., mean ± standard deviation, about 0.89±0.09) and adipose (about 0.79±0.17), followed by stroma (about 0.74±0.18) and DCIS (about 0.65±0.15). (See e.g., Table 2 below). IDC and DCIS were combined as a single class (e.g., cancer), and adipose and stroma were combined as the non-cancer class, for the case where deep learning can be used to identify images with suspicious areas that need to be investigated further. Using this binary classification, the mean five-fold validation Dice coefficient was about 0.88±0.04 for cancer and about 0.84±0.06 for non-cancer. The convergence of the binary classification is shown in the graphs shown in
TABLE 2. Distribution of tissue types in the dataset and corresponding five-fold cross-validation Dice scores. Binary-classification scores are for the combined classes (IDC and DCIS as cancer; adipose and stroma as non-cancer).

  Tissue Type   Five-fold cross-validation   Binary classification five-fold
                Dice scores                  cross-validation Dice scores
  IDC           0.82-0.95                    0.84-0.94
  DCIS          0.54-0.75
  Adipose       0.67-0.91                    0.81-0.93
  Stroma        0.61-0.86
Exemplary Discussion
[0075] The exemplary system, method, and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can utilize an exemplary CNN that achieved Dice coefficients of 0.89-0.93 in a binary classification of detecting cancerous versus non-cancerous tissue in OCT images of breast specimens. Thus, the exemplary system, method, and computer-accessible medium according to an exemplary embodiment of the present disclosure can be used for deep learning for intraoperative margin assessment of breast cancer and to reduce re-excision rates. IDC and adipose were the easiest to classify, followed by DCIS and stroma. IDC attenuates the OCT strongly, and has an easily recognizable characteristic appearance, and adipose has a distinct honeycomb structure which can also be easier to identify than stroma and DCIS, which have more subtle features.
[0076] The exemplary system, method, and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can increase accuracy when compared to other image classification frameworks developed for detecting breast cancer in OCT images. A prior approach using a relevance vector machine to classify IDC and surrounding stroma achieved an overall accuracy of 84% using data from the same UHR-OCT system. (See, e.g., Reference 39). The binary classification using the exemplary deep learning procedure performed better than such traditional image processing procedures. Additionally, classifying OCT images of breast tissue has been investigated in a multi-reader study. (See, e.g., Reference 42). The exemplary CNN had results comparable to the 0.88 accuracy of 7 clinician readers combined, including radiologists, pathologists, and surgeons.
[0077] The exemplary system, method, and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can be used as a procedure for improving clinical decision making in the intraoperative setting. Exemplary OCT techniques can differentiate normal breast parenchyma, such as lactiferous ducts, glands, adipose, and lobules, as well as pathologic conditions such as DCIS, IDC, and microcalcifications. (See, e.g., Reference 41). In a multi-reader study, clinicians (e.g., radiologists, surgeons, and pathologists) were trained to distinguish suspicious from non-suspicious areas of post-lumpectomy specimens using OCT images, and the results showed that readers from different specialties could accurately read OCT images with a relatively short training time (e.g., 3.4 hours). Radiologists achieved the highest accuracy (94%), followed by pathologists and surgeons. All clinical readers had an average accuracy of 88%. These results further validate the feasibility of using the exemplary CNN with OCT as a real-time intraoperative margin assessment tool in breast-conserving surgery (BCS). Although clinicians can be trained to read OCT images, there remain practical concerns of high interobserver variability and slow speed, which make manual interpretation impractical for the intraoperative setting.
[0078] Thus, the exemplary system, method, and computer-accessible medium can utilize an exemplary CNN to classify cancer in OCT images of breast based on A-line based classification procedures that can be used in real-time applications, and can be extended beyond breast imaging to other applications. Automated processing using the exemplary CNN can overcome challenges of interobserver variability and improve speed in OCT image interpretation. The exemplary CNN facilitates the use of OCT in an intraoperative setting for margin assessment.
Exemplary OCT-Based Post-Surgical Breast Tumor Specimen Margin Evaluation
[0079] As shown in the exemplary images of
Exemplary Tissue Collection
[0080] As indicated in Table 3 below, de-identified normal and non-neoplastic human breast tissues from mastectomy and breast reduction specimens were excised from patients.
TABLE 3. Characteristics of specimens.

  Characteristic                               Value
  Number of patients                           49
  Number of specimens                          82
  Specimens imaged by UHR-OCT, n (%)
    IDC                                        20 (38.5)
    DCIS                                       3 (5.8)
    Phyllodes                                  2 (3.8)
    Fibrotic focus carcinoma                   1 (1.9)
    Mucinous carcinoma                         3 (5.8)
    Normal                                     23 (44.2)
  Specimens imaged by Thorlabs Telesto, n (%)
    IDC                                        7 (23.3)
    ILC                                        3 (10.0)
    DCIS                                       3 (10.0)
    Normal                                     17 (56.6)
Exemplary Imaging Protocol and Histology
[0081] Two spectral-domain OCT systems were used for imaging: (i) a Thorlabs Telesto I centered at 1300 nm (e.g., axial resolution: 6.5 μm; lateral resolution: 15 μm in air) and (ii) a custom UHR-OCT system centered at 800 nm (e.g., axial resolution: 2.7 μm; lateral resolution: 5.5 μm in air). Specimens were imaged fresh and submitted for histology. Exemplary histology was evaluated by a pathologist, and OCT images were evaluated by authors using corresponding histology.
Exemplary OCT Image Labeling
[0082] Each A-line was labeled for six tissue types: (i) IDC, (ii) DCIS, (iii) mucinous carcinoma, (iv) Phyllodes sarcoma, (v) stroma, and (vi) adipose. Labeling procedures used a custom graphical user interface (GUI). Two volumes per patient were labeled for 23 patients, resulting in approximately 37,000 B-scans and 29.5 million A-lines.
Exemplary CNN Architecture
[0083] The exemplary CNN utilized a hybrid 2D/1D architecture to map each B-scan to a 1D label vector derived from manual annotation. The exemplary CNN was implemented using an exemplary 11-layer architecture consisting of a series of 3×3 convolutional kernels. Non-linear functions were modeled by rectified linear units (ReLU). Batch normalization was used between the convolutional and ReLU layers to limit the drift of layer activations during training. Feature channel sizes increased from 4 to 64 with increasing convolutional depth, reflecting increasing complexity.
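The convolution, batch normalization, ReLU ordering described above can be sketched as a single forward step. Per-channel statistics over the batch and spatial axes are computed as in standard batch normalization; the fixed gamma and beta values are placeholders for what would be learnable parameters in practice.

```python
import numpy as np

def batchnorm_relu(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch-normalize a batch of feature maps (N, C, H, W) over the
    batch and spatial axes, then apply ReLU: the conv -> BN -> ReLU
    ordering used to limit drift of layer activations during training."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)  # per-channel mean
    var = x.var(axis=(0, 2, 3), keepdims=True)    # per-channel variance
    x_hat = (x - mean) / np.sqrt(var + eps)       # normalized activations
    return np.maximum(gamma * x_hat + beta, 0.0)  # ReLU non-linearity

# Hypothetical conv-layer output: batch of 4, 8 feature channels
x = np.random.default_rng(2).normal(size=(4, 8, 16, 16))
y = batchnorm_relu(x)
```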
[0084]
Exemplary Training
[0085] Annotated exemplary images were randomly divided into a training set, which included 80% of the images, and a validation set, which included 20% of the images. Training was implemented using the Adam optimizer. L2 regularization was implemented to prevent over-fitting of data by limiting the squared magnitude of the kernel weights. To account for training dynamics, the learning rate was annealed and the mini-batch size was increased whenever the training loss plateaued. An exemplary normalized gradient procedure was utilized to facilitate locally adaptive learning rates that adjust with changes in input.
Exemplary Validation and Visualization
[0086] The exemplary CNN was applied to the validation set and was evaluated for accuracy for each tissue type. Each volume was divided into 200-slice blocks for training and validation. Five-fold cross-validation was used to estimate accuracy over the entire dataset. Correlation with manual annotations was calculated using a Dice score coefficient: Dice(X, Y) = 2|X∩Y|/(|X|+|Y|).
Exemplary Results
[0087] A total of 30 optical imaging volumes resulting in 26,172 slices were used for preliminary training. For each slice, a total of four tissue types were annotated on a column-by-column basis. The distribution of tissue types was as follows:
  Tissue Type   Distribution
  IDC           38%
  DCIS          4.4%
  Adipose       13%
  Stroma        30%
  N/A           14%
Five-fold cross validation yielded Dice scores across the tissue types as follows:
  Tissue Type   Five-fold cross-validation Dice score
  IDC           0.82-0.95
  DCIS          0.54-0.75
  Adipose       0.67-0.91
  Stroma        0.61-0.86
[0088] In a second experiment, IDC and DCIS were combined as a single tissue class (e.g., malignancy), while stroma and adipose were combined as a second tissue class (e.g., non-malignancy). In this setup, binary differentiation of malignant from non-malignant tissues yielded five-fold cross-validation Dice scores of 0.85-0.92, as shown in the graphs of
[0089]
[0090]
[0091] As shown in
[0092] Further, the exemplary processing arrangement 1505 can be provided with or include input/output ports 1535, which can include, for example, a wired network, a wireless network, the internet, an intranet, a data collection probe, a sensor, etc. As shown in
[0093] The foregoing merely illustrates the principles of the disclosure. Various modifications and alterations to the described embodiments will be apparent to those skilled in the art in view of the teachings herein. It will thus be appreciated that those skilled in the art will be able to devise numerous systems, arrangements, and procedures which, although not explicitly shown or described herein, embody the principles of the disclosure and can be thus within the spirit and scope of the disclosure. Various different exemplary embodiments can be used together with one another, as well as interchangeably therewith, as should be understood by those having ordinary skill in the art. In addition, certain terms used in the present disclosure, including the specification, drawings and claims thereof, can be used synonymously in certain instances, including, but not limited to, for example, data and information. It should be understood that, while these words, and/or other words that can be synonymous to one another, can be used synonymously herein, that there can be instances when such words can be intended to not be used synonymously. Further, to the extent that the prior art knowledge has not been explicitly incorporated by reference herein above, it is explicitly incorporated herein in its entirety. All publications referenced are incorporated herein by reference in their entireties.
EXEMPLARY REFERENCES
[0094] The following references are hereby incorporated by reference in their entireties: [0095] 1. Nguyen C V, Albarracin C T, Whitman G J, et al. Atypical ductal hyperplasia in directional vacuum-assisted biopsy of breast microcalcifications: considerations for surgical excision. Ann Surg Oncol 18:752-61, 2011. [0096] 2. Sinn H P, Kreipe H. A Brief Overview of the WHO Classification of Breast Tumors, 4th Edition, Focusing on Issues and Updates from the 3rd Edition. Breast Care (Basel) 8:149-54, 2013. [0097] 3. Racz J M, Carter J M, Degnim A C. Lobular Neoplasia and Atypical Ductal Hyperplasia on Core Biopsy: Current Surgical Management Recommendations. Ann Surg Oncol, 2017. [0098] 4. Ko E, Han W, Lee J W, et al. Scoring system for predicting malignancy in patients diagnosed with atypical ductal hyperplasia at ultrasound-guided core needle biopsy. Breast Cancer Res Treat 112:189-95, 2008. [0099] 5. Menen R S, Ganesan N, Bevers T, et al. Long-Term Safety of Observation in Selected Women Following Core Biopsy Diagnosis of Atypical Ductal Hyperplasia. Ann Surg Oncol 24:70-76, 2017. [0100] 6. Pankratz V S, Hartmann L C, Degnim A C, et al. Assessment of the accuracy of the Gail model in women with atypical hyperplasia. J Clin Oncol 26:5374-9, 2008. [0101] 7. Deshaies I, Provencher L, Jacob S, et al. Factors associated with upgrading to malignancy at surgery of atypical ductal hyperplasia diagnosed on core biopsy. Breast 20:50-5, 2011. [0102] 8. Bendifallah S, Defert S, Chabbert-Buffet N, et al. Scoring to predict the possibility of upgrades to malignancy in atypical ductal hyperplasia diagnosed by an 11-gauge vacuum-assisted biopsy device: an external validation study. Eur J Cancer 48:30-6, 2012. [0103] 9. Yu Y H, Liang C, Yuan X Z. Diagnostic value of vacuum-assisted breast biopsy for breast carcinoma: a meta-analysis and systematic review. Breast Cancer Res Treat 120:469-79, 2010. [0104] 10. Song J L, Chen C, Yuan J P, et al.
Progress in the clinical detection of heterogeneity in breast cancer. Cancer Med 5:3475-3488, 2016. [0105] 11. Gomes D S, Porto S S, Balabram D, et al. Inter-observer variability between general pathologists and a specialist in breast pathology in the diagnosis of lobular neoplasia, columnar cell lesions, atypical ductal hyperplasia and ductal carcinoma in situ of the breast. Diagn Pathol 9:121, 2014. [0106] 12. LeCun, Yann, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86.11 (1998): 2278-2324. [0107] 13. He, Kaiming, et al. Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. [0108] 14. Szegedy, Christian, et al. Going deeper with convolutions. Proceedings of the IEEE conference on computer vision and pattern recognition. 2015. [0109] 15. Kingma, D P, and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). [0110] 16. Nesterov, Yurii. Gradient methods for minimizing composite objective function. (2007). [0111] 17. Dozat, Timothy. Incorporating Nesterov momentum into Adam. (2016). [0112] 18. Glorot, Xavier, and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. 2010. [0113] 19. Srivastava N, Hinton G E, Krizhevsky A, et al. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15.1 (2014): 1929-1958. [0114] 20. Ioffe, Sergey, and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. International Conference on Machine Learning. 2015. [0115] 21. Ramprasaath R S, Abhishek D, Ramakrishna V, et al. Grad-CAM: Why did you say that? Visual explanations from deep networks via gradient-based localization. CVPR 2016 (arXiv:1610.02391). [0116] 22.
Araujo T, Aresta G, Castro E, et al. Classification of breast cancer histology images using Convolutional Neural Networks. PLoS One 12:e0177544, 2017 [0117] 23. Bejnordi B E, Zuidhof G, Balkenhol M, et al. Context-aware stacked convolutional neural networks for classification of breast carcinomas in whole-slide histopathology images. J Med Imaging (Bellingham) 4:044504, 2017 [0118] 24. Tsuchiya K, Mori N, Schacht D, et al. Value of breast MRI for patients with a biopsy showing atypical ductal hyperplasia (ADH). J Magn Reson Imaging. 2017 December; 46(6):1738-1747. [0119] 25. K. B. Clough, J. S. Lewis, B. Couturaud, A. Fitoussi, C. Nos, and M. C. Falcou, Oncoplastic techniques allow extensive resections for breast-conserving therapy of breast carcinomas, Annals Surg. 237, 26-34 (2003). [0120] 26. P. I. Tartter, J. Kaplan, I. Bleiweiss, C. Gajdos, A. Kong, S. Ahmed, and D. Zapetti, Lumpectomy margins, reexcision, and local recurrence of breast cancer, The Am. J. Surg. 179, 81-85 (2000). [0121] 27. L. E. McCahill, R. M. Single, E. J. A. Bowles, H. S. Feigelson, T. A. James, T. Barney, J. M. Engel, and A. A. Onitilo, Variability in Reexcision Following Breast Conservation Surgery, JAMA: The J. Am. Med. Assoc. 307, 467-475 (2012). [0122] 28. J. F. Waljee, E. S. Hu, L. A. Newman, and A. K. Alderman, Predictors of re-excision among women undergoing breast-conserving surgery for cancer, Annals Surg. Oncol. 15, 1297-1303 (2008). [0123] 29. M. A. Olsen, K. B. Nickel, J. A. Margenthaler, A. E. Wallace, D. Mines, J. P. Miller, V. J. Fraser, and D. K. Warren, Increased Risk of Surgical Site Infection Among Breast-Conserving Surgery Re-excisions, Annals Surg. Oncol. 22, 2003-2009 (2015). [0124] 30. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, A. Et, and et al., Optical coherence tomography. Science 254, 1178-81 (1991). [0125] 31. M. Adhi and J. S. 
Duker, Optical coherence tomography-current and future applications, Curr. Opin. Ophthalmol. 24, 213-221 (2013). [0126] 32. C. A. Puliafito, M. R. Hee, C. P. Lin, E. Reichel, J. S. Schuman, J. S. Duker, J. A. Izatt, E. A. Swanson, and J. G. Fujimoto, Imaging of Macular Diseases with Optical Coherence Tomography, Ophthalmology 102, 217-229 (1995). [0127] 33. I.-K. Jang, B. E. Bouma, D.-H. Kang, S.-J. Park, S.-W. Park, K.-B. Seung, K.-B. Choi, M. Shishkov, K. Schlendorf, E. Pomerantsev, S. L. Houser, H. Aretz, and G. J. Tearney, Visualization of coronary atherosclerotic plaques in patients using optical coherence tomography: comparison with intravascular ultrasound, J. Am. Coll. Cardiol. 39, 604-609 (2002). [0128] 34. T. Kubo, T. Imanishi, S. Takarada, A. Kuroi, S. Ueno, T. Yamano, T. Tanimoto, Y. Matsuo, T. Masho, H. Kitabata, K. Tsuda, Y. Tomobuchi, and T. Akasaka, Assessment of Culprit Lesion Morphology in Acute Myocardial Infarction, J. Am. Coll. Cardiol. 50, 933-939 (2007). [0129] 35. W. Luo, F. T. Nguyen, A. M. Zysk, T. S. Ralston, J. Brockenbrough, D. L. Marks, A. L. Oldenburg, and S. A. Boppart, Optical Biopsy of Lymph Node Morphology using Optical Coherence Tomography, Technol. Cancer Res. & Treat. 4, 539-547 (2005). [0130] 36. F. T. Nguyen, A. M. Zysk, E. J. Chaney, J. G. Kotynek, J. Uretz, F. J. Bellafiore, K. M. Rowland, P. A. Johnson, and S. A. Boppart, Intraoperative Evaluation of Breast Tumor Margins with Optical Coherence Tomography, Cancer Res. 69, 8790-8796 (2009). [0131] 37. K. M. Kennedy, R. A. McLaughlin, B. F. Kennedy, A. Tien, B. Latham, C. M. Saunders, and D. D. Sampson, Needle optical coherence elastography for the measurement of microscale mechanical contrast deep within human breast tissues, J. Biomed. Opt. 18, 121510 (2013). [0132] 38. L. Scolaro, R. A. McLaughlin, B. F. Kennedy, C. M. Saunders, and D. D. Sampson, A review of optical coherence tomography in breast cancer, Photonics & Lasers Medicine 3 (2014). [0133] 39. X. Yao, Y. Gan, E. 
Chang, H. Hibshoosh, S. Feldman, and C. Hendon, Visualization and tissue classification of human breast cancer images using ultrahigh-resolution OCT, Lasers Surg. Medicine 49, 258-269 (2017). [0134] 40. B. J. Vakoc, D. Fukumura, R. K. Jain, and B. E. Bouma, Cancer imaging by optical coherence tomography: preclinical progress and clinical potential, Nat. Rev. Cancer 12, 363 (2012). [0135] 41. P. Hsiung, D. R. Phatak, Y. Chen, A. D. Aguirre, J. G. Fujimoto, and J. L. Connolly, Benign and malignant lesions in the human breast depicted with ultrahigh resolution and three-dimensional optical coherence tomography. Radiology 244, 865-74 (2007). [0136] 42. R. Ha, L. C. Friedlander, H. Hibshoosh, C. Hendon, S. Feldman, S. Ahn, H. Schmidt, M. K. Akens, M. Fitzmaurice, B. C. Wilson, and V. L. Mango, Optical Coherence Tomography, Acad. Radiol. 25, 279-287 (2018). [0137] 43. A. R. Triki, M. B. Blaschko, Y. M. Jung, S. Song, H. J. Han, S. I. Kim, and C. Joo, Intraoperative margin assessment of human breast tissue in optical coherence tomography images using deep neural networks, (2017). [0138] 44. C. Lee, A. Tyring, N. Deruyter, Y. Wu, A. Rokem, and A. Lee, Deep-learning based, automated segmentation of macular edema in optical coherence tomography, Biomed. Opt. Express 8 (2017). [0139] 45. F. G. Venhuizen, B. van Ginneken, B. Liefers, F. van Asten, V. Schreur, S. Fauser, C. Hoyng, T. Theelen, and C. I. Sanchez, Deep learning approach for the detection and quantification of intraretinal cystoid fluid in multivendor optical coherence tomography, Biomed. Opt. Express 9, 1545-1569 (2018). [0140] 46. J. De Fauw, J. R. Ledsam, B. Romera-Paredes, S. Nikolov, N. Tomasev, S. Blackwell, H. Askham, X. Glorot, B. O'Donoghue, D. Visentin, G. van den Driessche, B. Lakshminarayanan, C. Meyer, F. Mackinder, S. Bouton, K. Ayoub, R. Chopra, D. King, A. Karthikesalingam, C. O. Hughes, R. Raine, J. Hughes, D. A. Sim, C. Egan, A. Tufail, H. Montgomery, D. Hassabis, G. Rees, T. Back, P. T. Khaw, M.
Suleyman, J. Cornebise, P. A. Keane, and O. Ronneberger, Clinically applicable deep learning for diagnosis and referral in retinal disease, Nat. Medicine (2018). [0141] 47. J. Wang, X. Yang, H. Cai, W. Tan, C. Jin, and L. Li, Discrimination of Breast Cancer with Microcalcifications on Mammography by Deep Learning, Sci. Reports 6 (2016). [0142] 48. R. Ha, P. Chang, J. Karcich, S. Mutasa, E. Pascual Van Sant, M. Z. Liu, and S. Jambawalikar, Convolutional Neural Network Based Breast Cancer Risk Stratification Using a Mammographic Dataset, Acad. Radiol. (2018). [0143] 49. R. Ha, P. Chang, S. Mutasa, J. Karcich, S. Goodman, E. Blum, K. Kalinsky, M. Z. Liu, and S. Jambawalikar, Convolutional Neural Network Using a Breast MRI Tumor Dataset Can Predict Oncotype Dx Recurrence Score, J. Magn. Reson. Imaging 0 (2018). [0144] 50. R. Ha, P. Chang, J. Karcich, S. Mutasa, E. P. Van Sant, E. Connolly, C. Chin, B. Taback, M. Z. Liu, and S. Jambawalikar, Predicting Post Neoadjuvant Axillary Response Using a Novel Convolutional Neural Network Algorithm, Annals Surg. Oncol. (2018). [0145] 51. R. Ha, P. Chang, J. Karcich, S. Mutasa, R. Fardanesh, R. T. Wynn, M. Z. Liu, and S. Jambawalikar, Axillary Lymph Node Evaluation Utilizing Convolutional Neural Networks Using MRI Dataset, J. Digit. Imaging (2018). [0146] 52. B. E. Bejnordi, M. Veta, P. J. Van Diest, B. Van Ginneken, N. Karssemeijer, G. Litjens, J. A. Van Der Laak, M. Hermsen, Q. F. Manson, M. Balkenhol, O. Geessink, N. Stathonikos, M. C. Van Dijk, P. Bult, F. Beca, A. H. Beck, D. Wang, A. Khosla, R. Gargeya, H. Irshad, A. Zhong, Q. Dou, Q. Li, H. Chen, H. J. Lin, P. A. Heng, C. Haß, E. Bruni, Q. Wong, U. Halici, M. A. Oner, R. Cetin-Atalay, M. Berseth, V. Khvatkov, A. Vylegzhanin, O. Kraus, M. Shaban, N. Rajpoot, R. Awan, K. Sirinukunwattana, T. Qaiser, Y. W. Tsang, D. Tellez, J. Annuscheit, P. Hufnagl, M. Valkonen, K. Kartasalo, L. Latonen, P. Ruusuvuori, K. Liimatainen, S. Albarqouni, B. Mungal, A. George, S. Demirci, N.
Navab, S. Watanabe, S. Seno, Y. Takenaka, H. Matsuda, H. A. Phoulady, V. Kovalev, A. Kalinovsky, V. Liauchuk, G. Bueno, M. M. Fernandez-Carrobles, I. Serrano, O. Deniz, D. Racoceanu, and R. Venancio, Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer, JAMA J. Am. Med. Assoc. 318, 2199-2210 (2017). [0147] 53. D. C. Ciresan, A. Giusti, L. M. Gambardella, and J. Schmidhuber, Mitosis Detection in Breast Cancer Histology Images with Deep Neural Networks, in Medical Image Computing and Computer-Assisted Intervention - MICCAI 2013, K. Mori, I. Sakuma, Y. Sato, C. Barillot, and N. Navab, eds. (Springer Berlin Heidelberg, Berlin, Heidelberg, 2013), pp. 411-418. [0148] 54. J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, Striving for Simplicity: The All Convolutional Net, arXiv [cs.LG] (2014). [0149] 55. V. Nair and G. E. Hinton, Rectified Linear Units Improve Restricted Boltzmann Machines, Proc. 27th Int. Conf. on Mach. Learn. pp. 807-814 (2010). [0150] 56. S. Ioffe and C. Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, arXiv:1502.03167 pp. 1-11 (2015). [0151] 57. K. He, X. Zhang, S. Ren, and J. Sun, Delving deep into rectifiers: Surpassing human-level performance on imagenet classification, in Proceedings of the IEEE International Conference on Computer Vision, vol. 2015 Inter (2015), pp. 1026-1034. [0152] 58. D. P. Kingma and J. Ba, Adam: A Method for Stochastic Optimization, IEEE Signal Process. Lett. (2014). [0153] 59. Y. Bengio, N. Boulanger-Lewandowski, and R. Pascanu, Advances in optimizing recurrent networks, in ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, (2013), pp. 8624-8628. [0154] 60. D. P. Mandic, A generalized normalized gradient descent algorithm, (2004). [0155] 61. J. Landercasper, E. Whitacre, A. C. Degnim, and M.
Al-Hamadani, Reasons for Re-Excision After Lumpectomy for Breast Cancer: Insight from the American Society of Breast Surgeons Mastery(SM) Database, Annals Surg. Oncol. 21, 3185-3191 (2014). [0156] 62. J. F. Waljee, E. S. Hu, L. A. Newman, and A. K. Alderman, Predictors of Breast Asymmetry after Breast-Conserving Operation for Breast Cancer, J. Am. Coll. Surg. 206, 274-280 (2008). [0157] 63. K. Simiyoshi, T. Nohara, M. Iwamoto, S. Tanaka, K. Kimura, Y. Takahashi, Y. Kurisu, M. Tsuji, and N. Tanigawa, Usefulness of intraoperative touch smear cytology in breast-conserving surgery, Exp. Ther. Medicine 1, 641-645 (2010). [0158] 64. J. C. Cendán, D. Coco, and E. M. Copeland, Accuracy of intraoperative frozen-section analysis of breast cancer lumpectomy-bed margins, J. Am. Coll. Surg. 201, 194-198 (2005). [0159] 65. T. E. Doyle, R. E. Factor, C. L. Ellefson, K. M. Sorensen, B. J. Ambrose, J. B. Goodrich, V. P. Hart, S. C. Jensen, H. Patel, and L. A. Neumayer, High-frequency ultrasound for intraoperative margin assessments in breast conservation surgery: a feasibility study, BMC Cancer 11, 444 (2011). [0160] 66. S. Goldfeder, D. Davis, and J. Cullinan, Breast Specimen Radiography. Can It Predict Margin Status of Excised Breast Carcinoma? Acad. Radiol. 13, 1453-1459 (2006). [0161] 67. F. Schnabel, S. K. Boolbol, M. Gittleman, T. Karni, L. Tafra, S. Feldman, A. Police, N. B. Friedman, S. Karlan, D. Holmes, S. C. Willey, M. Carmon, K. Fernandez, S. Akbari, J. Harness, L. Guerra, T. Frazier, K. Lane, R. M. Simmons, A. Estabrook, and T. Allweis, A randomized prospective study of lumpectomy margin assessment with use of marginprobe in patients with nonpalpable breast malignancies, Annals Surg. Oncol. 21, 1589-1595 (2014). [0162] 68. Z. Burgansky-Eliash, G. Wollstein, T. Chu, J. D. Ramsey, C. Glymour, R. J. Noecker, H. Ishikawa, and J. S. Schuman, Optical coherence tomography machine learning classifiers for glaucoma detection: a preliminary study. Investig.
Ophthalmology & Visual Science 46, 4147-52 (2005). [0163] 69. R. J. Zawadzki, A. R. Fuller, D. F. Wiley, B. Hamann, S. S. Choi, and J. S. Werner, Adaptation of a support vector machine algorithm for segmentation and visualization of retinal structures in volumetric optical coherence tomography data sets, J. Biomed. Opt. 12, 041206 (2007). [0164] 70. A. Abdolmanafi, L. Duong, N. Dandah, and F. Cheriet, Deep feature learning for automatic tissue classification of coronary artery using optical coherence tomography, Biomed. Opt. Express 8, 1203 (2017). [0165] 71. G. Zahnd, A. Karanasos, G. van Soest, E. Regar, W. Niessen, F. Gijsen, and T. van Walsum, Quantification of fibrous cap thickness in intracoronary optical coherence tomography with a contour segmentation method based on dynamic programming, Int J CARS 10, 1383-1394 (2015). [0166] 72. A. Coates, A. Arbor, and A. Y. Ng, An Analysis of Single-Layer Networks in Unsupervised Feature Learning, Aistats 2011 pp. 215-223 (2011). [0167] 73. G. Marcus, Deep Learning: A Critical Appraisal, arXiv preprint arXiv:1801.00631 pp. 1-27 (2018). [0168] 74. Clough, K. B. et al. Oncoplastic techniques allow extensive resections for breast-conserving therapy of breast carcinomas. Ann. Surg. 237, 26-34 (2003). [0169] 75. Tartter, P. I. et al. Lumpectomy margins, reexcision, and local recurrence of breast cancer. Am. J. Surg. 179, 81-85 (2000). [0170] 76. Cendán, J. C., Coco, D. & Copeland, E. M. Accuracy of intraoperative frozen-section analysis of breast cancer lumpectomy-bed margins. J. Am. Coll. Surg. 201, 194-198 (2005). [0171] 77. Goldfeder, S., Davis, D. & Cullinan, J. Breast Specimen Radiography. Can It Predict Margin Status of Excised Breast Carcinoma? Acad. Radiol. 13, 1453-1459 (2006). [0172] 78. Schnabel, F. et al. A randomized prospective study of lumpectomy margin assessment with use of marginprobe in patients with nonpalpable breast malignancies. Ann. Surg. Oncol.
21, 1589-1595 (2014). [0173] 79. Ha, R. et al. Optical Coherence Tomography: A Novel Imaging Method for Post-lumpectomy Breast Margin Assessment: A Multi-reader Study. Acad. Radiol. (2017). doi:10.1016/j.acra.2017.09.018 [0174] 80. Yao, X., Gan, Y., Marboe, C. C. & Hendon, C. P. Myocardial imaging using ultrahigh-resolution spectral domain optical coherence tomography. J. Biomed. Opt. 21, 061006 (2016). [0175] 81. Brady, A. P. Error and discrepancy in radiology: inevitable or avoidable? Insights Imaging 8, 171-182 (2017). [0176] 82. LeCun, Y. A., Bengio, Y. & Hinton, G. E. Deep learning. Nature 521, 436-444 (2015). [0177] 83. Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet Classification with Deep Convolutional Neural Networks. Adv. Neural Inf. Process. Syst. 25, 1-9 (2012). doi:10.1109/5.726791