Title Page
Table of Contents
Abstract (in Korean) 9
Abstract 11
Ⅰ. Introduction 13
Ⅱ. Machine Learning and Embedded Processors 16
2.1. Gradient Descent 16
2.2. Supervised and Unsupervised Learning 21
2.3. Pretrained Deep Learning Models 25
2.3.1. LeNet5 26
2.3.2. AlexNet 32
2.3.3. ResNet 37
2.3.4. ImageNet Dataset 41
2.4. ARM Cortex Embedded Processors 43
2.4.1. FPU, SRAM, and FLASH 44
2.4.2. MPU 45
2.4.3. Registers 45
2.4.4. CMSIS and HAL Libraries 47
Ⅲ. Genetic Algorithm-Based Selective Convolution Method 49
3.1. Genetic Algorithms 49
3.2. The Traveling Salesman Problem and Genetic Algorithms 54
3.3. Error Backpropagation 59
3.4. Proposed Genetic Algorithm-Based Selective Convolution Method 67
Ⅳ. Experiments and Results on Genetic Algorithm-Based Convolution 80
Ⅴ. Conclusion 95
References 97
Fig. 2-1. Gradient descent 17
Fig. 2-2. Flow chart of gradient descent implementation 18
Fig. 2-3. Graph derived by calculating with gradient descent 20
Fig. 2-4. Machine learning 21
Fig. 2-5. Supervised learning using labeled training sets 22
Fig. 2-6. Clustering and classification in unsupervised learning 22
Fig. 2-7. Architecture of LeNet5 26
Fig. 2-8. Convolution operation process 27
Fig. 2-9. Image size change after 5×5 filter operation 28
Fig. 2-10. Map table_1 of C3 layer 29
Fig. 2-11. Activation functions 29
Fig. 2-12. Architecture of AlexNet 32
Fig. 2-13. Non-overlapping pooling and overlapping pooling methods 35
Fig. 2-14. Standard neural network structure (a) and neural network structure with dropout applied (b) 36
Fig. 2-15. Architectures for ImageNet 38
Fig. 2-16. Training error (left) and test error (right) on CIFAR-10 with 20-layer and 56-layer "plain" networks 38
Fig. 2-17. Residual block proposed in ResNet 39
Fig. 2-18. Test datasets of MNIST and CIFAR-10 41
Fig. 2-19. Three-stage pipeline structure of Cortex 44
Fig. 3-1. Flowchart of genetic algorithm 50
Fig. 3-2. Example of a genetic algorithm 52
Fig. 3-3. Simulation result of binary genetic algorithm 53
Fig. 3-4. Example of PMX 55
Fig. 3-5. Handling duplicate city visits in offspring 56
Fig. 3-6. Example routes for visiting 20 cities 56
Fig. 3-7. Calculation result of PMX algorithm 57
Fig. 3-8. Shortest path calculated by PMX algorithm 58
Fig. 3-9. Examples of ANN 59
Fig. 3-10. Forward propagation process 60
Fig. 3-11. Back propagation step 1 63
Fig. 3-12. Back propagation step 2 65
Fig. 3-13. Example of the MNIST data set 67
Fig. 3-14. Proposed genetic algorithm-based selective convolution method 68
Fig. 3-15. 28×28 size image of number 5 69
Fig. 3-16. 2D matrix value of number 5 69
Fig. 3-17. Gaussian distribution 70
Fig. 3-18. 2D array made by Gaussian distribution for 5×5 size 71
Fig. 3-19. Random mask selected by the Gaussian mean method 72
Fig. 3-20. Convolution operation and its result 72
Fig. 3-21. Convolution operation on an image with padding of 2 73
Fig. 3-22. Kernel maps and parameters 74
Fig. 3-23. Random mask applied to a 28×28 image 75
Fig. 3-24. Flowchart of random pixel disable_1 76
Fig. 3-25. Flowchart of a feature map crossover 77
Fig. 3-26. Flowchart of random pixel disable_2 78
Fig. 4-1. Experimental system configuration diagram 80
Fig. 4-2. STM32CubeIDE development environment 81
Fig. 4-3. Numeric data for evaluation 82
Fig. 4-4. USART monitoring 83
Fig. 4-5. Computation time when applying 7% pixel disable_1 84
Fig. 4-6. Computation time when applying 5% pixel disable_1 85
Fig. 4-7. Computation time when applying 3% pixel disable_1 85
Fig. 4-8. Computation time when applying 7% pixel disable_2 86
Fig. 4-9. Computation time when applying 5% pixel disable_2 86
Fig. 4-10. Computation time when applying 3% pixel disable_2 87
Fig. 4-11. Computation accuracy when applying 7% pixel disable 88
Fig. 4-12. Computation accuracy when applying 5% pixel disable 89
Fig. 4-13. Computation accuracy when applying 3% pixel disable 89
Fig. 4-14. Comparison of computation time for proposed algorithm and traditional algorithm 91
Fig. 4-15. Comparison of power consumption of the traditional algorithm and the proposed algorithm 94