Title Page
Abstract
Contents
1. Introduction 17
1.1. Digital Pathology Framework 19
1.2. Whole Slide Imaging (WSI) 21
1.3. Thesis Motivation 24
1.4. Thesis Purpose 25
2. Literature Review 30
2.1. WSIs Processing 30
2.1.1. Digitization 30
2.1.2. Data Annotation 32
2.1.3. Stain Normalization 35
2.2. Deep Learning Model for WSIs 37
2.2.1. Supervised Learning Model 38
2.2.2. Unsupervised Learning Model 42
2.2.3. Attention Learning Model 46
2.3. Comparison with Our Research Work 48
3. Histopathological Classification of Breast Cancer Images Using a Multi-Scale Input and Multi-Feature Network 52
3.1. Motivation 52
3.2. Architecture of Multi-Scale Input and Multi-Feature Network 55
3.3. Datasets 60
3.3.1. ICIAR2018 61
3.3.2. BreakHis 62
3.4. Experimental Setup 64
3.4.1. Aspects of Performance Evaluation 64
3.4.2. Best Hyperparameters 65
3.4.3. Image Representation 67
3.5. Experimental Results 69
3.5.1. Classification Results 69
3.5.2. Ablation Studies of MSI-MFNet 77
3.5.3. Confusion Matrix Visualization 86
3.5.4. McNemar's Statistical Analysis 89
4. Unsupervised Learning Based on Multiple Descriptors for WSIs Diagnosis 92
4.1. Motivation 92
4.2. Architecture of Proposed Model 96
4.2.1. Pre-Processing WSIs 96
4.2.2. Dataset Preparation 99
4.2.3. Training and Classification 101
4.2.4. Model Architecture 104
4.3. Datasets 106
4.3.1. ICIAR2018 107
4.3.2. Dartmouth Lung Cancer 109
4.4. Experimental Setup 110
4.4.1. Aspects of Performance Evaluation 110
4.4.2. Best Hyperparameters 111
4.4.3. Performance Evaluation Metrics 112
4.5. Experimental Results 113
4.5.1. Comparison with Deep Learning Models 113
4.5.2. Comparison with Different Deep Encoders 116
4.5.3. Effect of Multiple Descriptors 117
4.5.4. Pathologist's Analysis of the Results 119
4.5.5. ROC Curves' Visualizations 124
5. Refined Attention Module for WSI Cancer Diagnosis 126
5.1. Motivation 126
5.2. Architecture of Proposed Model 128
5.2.1. Attention Module 129
5.3. Dataset 131
5.3.1. Biopsy Needle WSIs (bnWSIs) 131
5.4. Experimental Setup 133
5.4.1. Aspects of Performance Evaluation 133
5.4.2. Best Hyperparameters 134
5.5. Experimental Results 135
5.5.1. Comparison with Deep Learning Models 135
5.5.2. Comparison of Run Time Speeds 137
6. Conclusion & Future Work 141
References 146
Table 1. MSI-MFNet architecture. Note that each "Conv" layer shown in the table corresponds to the sequence BN-ReLU-... 60
Table 2. Structure of the ICIAR2018 dataset with a 200X magnification factor. 62
Table 3. Structure of the BreakHis dataset with four magnifications (40X, 100X, 200X, and 400X). 63
Table 4. Best hyper-parameters of the MSI-MFNet and DNet classification models. 66
Table 5. Patch-wise comparisons of the accuracy, sensitivity, and specificity metrics for the ICIAR2018 dataset. The best results are... 70
Table 6. Patch-wise comparisons of the accuracy, sensitivity, and specificity metrics for the BreakHis dataset. The magnification factor... 71
Table 7. Image-wise comparisons of the sensitivity and specificity metrics with respect to the ICIAR2018 dataset for the maximum... 74
Table 8. Image-wise comparisons of the sensitivity and specificity metrics on the BreakHis dataset for the maximum voting criterion.... 76
Table 9. Statistical significance from the standardized McNemar's test. 90
Table 10. Proposed model architecture. H_RAW, H_HOG, H_LBP ∈ (Feature-X, Feature-X/K, K*Feature-X) with activation map.... 106
Table 11. Structure of the ICIAR2018 dataset with a 20X magnification factor. 108
Table 12. Structure of the Dartmouth Lung Cancer dataset (31 WSIs only) with a 20X magnification factor. 110
Table 13. The best hyperparameters of our model. 112
Table 14. Comparison of four metrics on the ICIAR2018 for the binary-classification. The best results are shown in bold. 114
Table 15. Comparison of four metrics on the ICIAR2018 and Dartmouth datasets for the multi-classification. The best results are... 114
Table 16. Accuracy comparison of our encoder against encoders from the literature. Values in parentheses represent the standard deviation. 117
Table 17. Structure of the bnWSIs dataset with different magnification levels. 133
Table 18. Comparison of metrics for binary-class classification of the bnWSIs dataset. 136
Table 19. Comparison of metrics for multi-class classification of the bnWSIs dataset. 137
Figure 1. Overview of the WSI conceptual pipeline, and processing interpretability. (Left) Following segmentation at a specific magnification... 21
Figure 2. Pyramid structure of WSI. 23
Figure 3. Deep learning pipeline for diagnosis. 30
Figure 4. Block diagram of MSI-MFNet model. The blocks are DB: depth block (1-4); GAP: global average pooling (1-4); BN: batch... 56
Figure 5. Microscopic H&E images of four types of tumors in the ICIAR2018 dataset. The magnification factor of these images is 200X. 62
Figure 6. Four types of benign (first row) and malignant (second row) tumor images from the BreakHis dataset. The magnification... 63
Figure 7. Image-wise comparisons of the accuracy metric on the (a) ICIAR2018 and (b) BreakHis datasets for the three different voting... 73
Figure 8. Results of ablation studies on the BreakHis dataset for MSI-MFNet and DNet with maximum voting criteria. Using the four... 79
Figure 9. Results of ablation studies on the BreakHis dataset for binary and multi-class type classification with maximum voting... 82
Figure 10. Confusion matrices for image-wise classification for the ICIAR2018 dataset, for the maximum voting criteria. 87
Figure 11. Confusion matrices for image-wise classification for the BreakHis dataset, for the maximum voting criteria. 89
Figure 12. Block diagram of the proposed model. (A) Following segmentation and annotation, image patches are extracted from the... 97
Figure 13. Microscopic H&E patched images of four types in the ICIAR2018 dataset. 108
Figure 14. Histological images of five types of lung adenocarcinoma in the Dartmouth Lung Cancer dataset. 109
Figure 15. Accuracy for multi-class classification on the ICIAR2018 and Dartmouth datasets using different combinations of the generated representations. 118
Figure 16. Confusion matrices for the multi-class classification of ICIAR2018 and Dartmouth datasets. Class labels are according to the... 121
Figure 17. Two-dimensional visualization of two different layers using t-SNE for the multi-class classification. Projection of the last... 123
Figure 18. AUC (ROC) curves for the multi-class classification of the ICIAR2018 and Dartmouth datasets. Class labels are according to... 125
Figure 19. Block diagram of the refined attention module. 130
Figure 20. Different types and magnification levels of images in the bnWSIs dataset. 132
Figure 21. Comparison of mean run time speeds without and with our attention module for binary-class (first row) and multi-class... 139