Title Page
Abstract
Contents
1. Introduction 10
2. Background 13
2.1. GAN 14
2.1.1. Generative Adversarial Networks 14
2.1.2. Conditional GAN 15
2.2. Variational Auto-Encoder 16
2.3. Score-based generative model and diffusion model 16
2.3.1. Noise Conditional Score Networks (NCSNs) 17
2.3.2. Denoising Diffusion Probabilistic Models (DDPMs) 18
2.3.3. Score SDE 20
2.3.4. Consistency generation 21
2.3.5. Controllable generation 22
2.3.6. Latent Diffusion Models (LDMs) 25
3. High-bit-depth generation in computed tomography 26
3.1. Dataset and model architecture 26
3.2. Results of GAN 27
3.3. Results of score-based diffusion model 29
4. Data augmentation with isocitrate dehydrogenase type in glioma 32
4.1. Materials and Methods 33
4.1.1. Dataset 33
4.1.2. IDH mutation status and image preprocessing 35
4.1.3. Imaging phenotype 36
4.2. Results 38
4.2.1. Evaluation by human readers 38
4.2.2. Deep learning-based prediction of IDH type using the real data and nonselective GMA 40
4.2.3. Deep learning-based prediction of IDH type using imaging phenotype-based GMA according to tumor size 41
4.2.4. Deep learning-based prediction of IDH type using imaging phenotype-based GMA according to CE 43
5. Image-to-image translation with H&E staining normalization of whole slide imaging 46
5.1. Dataset 47
5.2. Stain normalization without stain separation 47
5.3. Stain normalization with stain separation 48
5.4. Performance evaluation criterion 49
5.5. Results 50
5.5.1. Quantitative and Qualitative Results 50
5.5.2. Results of overlapping 53
6. 3D generation in brain CT 54
6.1. Materials and Methods 55
6.1.1. Adjacent Slice-based Conditional Iterative Inpainting (ASCII) 55
6.1.2. Results of ASCII 57
6.1.3. Results of ASCII in the 12-bit whole range 57
6.1.4. Intensity Calibration Network 58
6.1.5. Dataset and model architecture 61
6.2. Experiments 62
6.2.1. Results of intensity calibration network 62
6.2.2. Results of ASCII with IC-Net in whole range 64
6.2.3. Quantitative Evaluation 65
6.2.4. Qualitative Evaluation 67
7. Post-surgery imaging generation in cephalogram 68
7.1. Materials and Methods 69
7.1.1. Datasets 69
7.1.2. Model Architecture 70
7.2. Surgical Movement Prediction 72
7.3. Post-surgery imaging generation 72
8. Discussion 76
9. Conclusion 77
References 77
Abstract in Korean 88
List of Figures
Figure 1. Overview of three types of generative models. 14
Figure 2. Results of StyleGAN2 and StyleGAN3. Both GAN models generate images well in the whole range, but different artifacts are observed when clipped to the windowing range. 28
Figure 3. Feature map extracted from the discriminator using CT images in the whole range. Low-signal regions, such as parenchyma, are not generated properly. 29
Figure 4. Results according to σmin and σmax. CV: coefficient of variation. 30
Figure 5. Results of coefficient of variation according to σmin and σmax. 31
Figure 6. Flow diagram of the training, model development, and internal and external testing. AMC=Asan Medical Center; TCGA=The Cancer Genome Atlas. 36
Figure 7. Histogram of tumor size in the real data. A large tumor was defined as one above the 75th percentile of tumor size and a small tumor as one below the 25th percentile. 38
Figure 8. Representative cases of fully automated IDH-mutation classification with class activation maps (CAM) for paired contrast-enhanced T1-weighted images and FLAIR images.... 45
Figure 9. Results of stain normalization without stain separation for the 09, 12, and 15 year datasets. Images in the left column are the originals, and those in the right column are the normalized images. Red boxes show the regions where it... 48
Figure 10. Flow chart of the method using hematoxylin and eosin processes. Using sparse non-negative matrix factorization (SNMF), histological images are separated into hematoxylin and... 49
Figure 11. Results of stain normalization on whole slide images from the 09, 12, and 15 year datasets and from two external datasets, Camelyon and PAIP. 51
Figure 12. Results for various overlapping ratios. γratio refers to the overlapping ratio. 54
Figure 13. Progression of the score-based diffusion model for K continuous slices. We selected K continuous slices among a total of N slices. To generate the K continuous slices, the score-based... 55
Figure 14. (Top) Results of ASCII trained in the windowing range with sagittal and coronal views. (Bottom) Results of ASCII trained in the windowing range with the axial view. 57
Figure 15. (Left) Results of ASCII with the axial view in the whole and windowing ranges. (Right) Results of ASCII with the sagittal and coronal views in the whole and windowing ranges. 58
Figure 16. Results of post-processing. The first row is generated by ASCII; the second and third rows are post-processed versions of the first row using IC-Net and histogram matching, respectively. 60
Figure 17. Overview of ASCII(2) with IC-Net. We set the seed x⁰ to be filled with the intensity of air (-1024 HU) and the channel mask Λ. x¹ was generated using the seed x⁰. The brain CT volume... 61
Figure 18. Data flow diagram and process of curation. 62
Figure 19. Difference between GT and prediction for each slice number. All slices were normalized from [-150 HU, 150 HU] to [-1, 1], and the mean absolute error is shown as... 63
Figure 20. Results of IC-Net with slices shifted by fixed values. We used fixed values from 0.7 to 1.3. Using a seed and a shifted slice for intensity calibration, the difference map with GT... 64
Figure 21. Results of ASCII(2) and ASCII(3) with and without IC-Net. The last column is the result of bone rendering using 3D Slicer. 65
Figure 22. Overview of DentalNet. (Top) We utilize a surgical movement prediction (SMP) model composed of an image embedding module and a graph-based module to predict the... 71
Figure 23. Results of the predicted post-surgery images. A and B represent the pre-surgery image and post-surgery image, respectively. Additionally, the prompts written at the top of... 73