Title Page
Table of Contents
I. Introduction 13
1. Necessity and Purpose of the Study 13
2. Research Methods 16
II. Related Work 18
1. Research Subjects 18
A. Dementia 18
B. Alzheimer's Diagnosis and Prediction Using Deep Learning 20
2. CNN Architecture Models 24
A. CNN (Convolutional Neural Network) 24
B. LeNet-5 29
C. AlexNet 31
D. VGG 34
E. GoogLeNet (InceptionNet) 36
F. ResNet 39
G. Inception-ResNet V2 40
H. OpenCV (Open Source Computer Vision) 40
I. Summary 42
3. Data Preprocessing Methods 43
A. Grayscale 43
B. Morphological Opening 44
C. CLAHE (Contrast Limited Adaptive Histogram Equalization) 46
D. Invert 47
E. Resize 49
F. Gaussian Smoothing 50
G. Overlap 51
H. Multiply 52
III. Subjects and Methods for Deep Learning-Based Prediction of Degenerative Brain Disease 54
1. Research Subjects and Variables 54
2. CNN Architecture Models Used 56
3. Model Development Environment and Methods 57
4. Data Preprocessing 60
5. Data Analysis 64
A. Grayscale 64
B. Morphological Opening 65
C. CLAHE (Contrast Limited Adaptive Histogram Equalization) 66
D. Invert 68
E. Overlap 69
F. Multiply 70
IV. Deep Learning Training Results for Alzheimer's Prediction 72
1. Prediction Model Performance Evaluation 72
A. VGG-19 72
B. Inception ResNet V2 73
C. Summary 74
2. Deep Learning-Based Alzheimer's MRI Analysis Results 76
A. VGG-19 78
B. Inception ResNet V2 90
V. Conclusion 102
1. Research Findings 102
2. Future Research 104
Summary 106
ABSTRACT 110
REFERENCES 114
〈Table. III-1〉 Composition of Alzheimer's MRI image data 55
〈Table. III-2〉 Hardware Specifications and Operating System 58
〈Table. III-3〉 Preprocessing Combinations 62
〈Table. IV-1〉 Alzheimer's dataset original learning results 75
〈Table. IV-2〉 Classification prediction 76
〈Table. IV-3〉 Accuracy performance according to the preprocessing method of the VGG-19 model 88
〈Table. IV-4〉 Precision, Recall and F1 Score performance according to the preprocessing method of the VGG-19 model 89
〈Table. IV-5〉 Accuracy performance according to preprocessing method of InceptionResNet V2 model 100
〈Table. IV-6〉 Performance of Precision, Recall and F1 Score according to the preprocessing method of InceptionResNet V2 model 101
〈Table. V-1〉 Performance of applying overlap preprocessing method to VGG-19 model 104
(Fig. I-1) CNN Structure 16
(Fig. II-1) Progress of AD from MCI to severe AD 21
(Fig. II-2) Image Analysis Process Using CNN 25
(Fig. II-3) Categories of Deep Learning architectures 28
(Fig. II-4) An example of a Convolutional Neural Network (CNN) 28
(Fig. II-5) Architecture of LeNet 30
(Fig. II-6) 5x5 and 3x3 kernels applied for convolution 31
(Fig. II-7) Architecture of AlexNet 33
(Fig. II-8) Filter maps learned on each GPU using AlexNet 34
(Fig. II-9) VGG Neural Network Architecture 36
(Fig. II-10) GoogLeNet's Inception Module 37
(Fig. II-11) Structure of GoogLeNet 38
(Fig. II-12) Building Blocks for ResNet Residual Learning 40
(Fig. II-13) Architecture of Inception-ResNet V2 41
(Fig. II-14) RGB vs Grayscale image 45
(Fig. II-15) Original Image 46
(Fig. II-16) Image converted by applying Morphological Opening 47
(Fig. II-17) Histogram Equalization Application Examples 48
(Fig. II-18) X-ray image 49
(Fig. II-19) Gaussian distribution 51
(Fig. II-20) Gaussian application method 52
(Fig. II-21) Images before and after applying Overlap 53
(Fig. II-22) Images before and after applying Multiply 53
(Fig. III-1) Non Demented 55
(Fig. III-2) Mild Demented 55
(Fig. III-3) Moderate Demented 56
(Fig. III-4) Very Mild Demented 56
(Fig. III-5) VGG algorithm structure based on CNN 57
(Fig. III-6) Dataset configuration 59
(Fig. III-7) Deep learning progress process 60
(Fig. III-8) Process of preprocessing 63
(Fig. III-9) Code for Grayscale Preprocessing 64
(Fig. III-10) Original image VS Grayscale image 65
(Fig. III-12) Original image VS Morphological Opening image 66
(Fig. III-13) Code for CLAHE Preprocessing 67
(Fig. III-14) Original image VS CLAHE image 68
(Fig. III-15) Code for Invert Preprocessing 68
(Fig. III-16) Original image VS Invert image 69
(Fig. III-17) Code for Overlap Preprocessing 70
(Fig. III-18) Original image VS Overlap image 70
(Fig. III-19) Code for Multiply Preprocessing 70
(Fig. III-20) Original image VS Multiply image 71
(Fig. IV-1) Source for image data learning 72
(Fig. IV-2) Sources for training the VGG-19 model 73
(Fig. IV-3) Source for training Inception ResNet V2 model 74
(Fig. IV-4) Source for training Inception ResNet V2 model 74
(Fig. IV-5) Sources for validating the trained model 75
(Fig. IV-6) Original using VGG19 78
(Fig. IV-7) Grayscale using VGG19 79
(Fig. IV-8) Morphology using VGG19 79
(Fig. IV-9) Grayscale + CLAHE using VGG19 80
(Fig. IV-10) Invert using VGG19 80
(Fig. IV-11) Overlap using VGG19 81
(Fig. IV-12) Multiply using VGG19 81
(Fig. IV-13) Grayscale + Morphology using VGG19 82
(Fig. IV-14) Grayscale + Invert using VGG19 82
(Fig. IV-15) Grayscale + Multiply using VGG19 83
(Fig. IV-16) Grayscale + Morphology + CLAHE using VGG19 83
(Fig. IV-17) Morphology + Invert using VGG19 84
(Fig. IV-18) Morphology + Multiply using VGG19 84
(Fig. IV-19) Grayscale + CLAHE + Invert using VGG19 85
(Fig. IV-20) Grayscale + CLAHE + Multiply using VGG19 85
(Fig. IV-21) Invert + Multiply using VGG19 86
(Fig. IV-22) Grayscale + Morphology + Invert using VGG19 86
(Fig. IV-23) Grayscale + Morphology + CLAHE + Invert using VGG19 87
(Fig. IV-24) Original using Inception ResNet V2 90
(Fig. IV-25) Grayscale using Inception ResNet V2 91
(Fig. IV-26) Morphology using Inception ResNet V2 91
(Fig. IV-27) Grayscale + CLAHE using Inception ResNet V2 92
(Fig. IV-28) Invert using Inception ResNet V2 92
(Fig. IV-29) Overlap using Inception ResNet V2 93
(Fig. IV-30) Multiply using Inception ResNet V2 93
(Fig. IV-31) Grayscale + Morphology using Inception ResNet V2 94
(Fig. IV-32) Grayscale + Invert using Inception ResNet V2 94
(Fig. IV-33) Grayscale + Multiply using Inception ResNet V2 95
(Fig. IV-34) Grayscale + Morphology + CLAHE using Inception ResNet V2 95
(Fig. IV-35) Morphology + Invert using Inception ResNet V2 96
(Fig. IV-36) Morphology + Multiply using Inception ResNet V2 96
(Fig. IV-37) Grayscale + CLAHE + Invert using Inception ResNet V2 97
(Fig. IV-38) Grayscale + CLAHE + Multiply using Inception ResNet V2 97
(Fig. IV-39) Invert + Multiply using Inception ResNet V2 98
(Fig. IV-40) Grayscale + Morphology + Invert using Inception ResNet V2 98
(Fig. IV-41) Grayscale + Morphology + CLAHE + Invert using Inception ResNet V2 99
(Fig. V-1) Comparison of accuracy scores by model 105