Title Page
Abstract
Contents
Chapter 1. Introduction 18
1.1. Motivation and objectives 18
1.2. Contributions of the dissertation 25
1.3. Overview of the dissertation 27
Chapter 2. High-dimensional input space 28
2.1. Introduction 28
2.2. Methodologies 32
2.2.1. Multi-layer perceptron (MLP) 32
2.2.2. Autoencoder (AE) 33
2.2.3. Variational autoencoder (VAE) 34
2.3. Inverse design optimization framework 37
2.3.1. Two-step deep learning approach 37
2.3.2. Target distribution optimization 37
2.3.3. Active learning and transfer learning 38
2.4. Framework validation: optimization of the airfoil for wind turbine blades 41
2.4.1. Optimization of the airfoil in wind turbine blades 41
2.4.2. Architectures of the two-step deep learning models 46
2.4.3. Single-objective optimization results and discussion 49
2.4.4. Multi-objective optimization results and discussion 54
2.5. Summary 62
2.6. Additional results 64
Chapter 3. High-dimensional output space 65
3.1. Introduction 65
3.2. β-variational autoencoder (β-VAE) 71
3.3. Physics-aware reduced-order modeling 73
3.4. Numerical experiments 77
3.4.1. Data preparation 77
3.4.2. Training details 78
3.5. Results and discussion 82
3.5.1. Training results 82
3.5.2. Independence of LVs 84
3.5.3. Information intensity of LVs 85
3.5.4. Physics-awareness of LVs 90
3.5.5. Physics-aware ROM 96
3.6. Summary 102
3.7. Additional results 104
3.7.1. POD results 104
3.7.2. Scalability of extracting physics-aware LVs in a practical problem 106
Chapter 4. Reliable and efficient uncertainty quantification 109
4.1. Introduction 109
4.2. Implementation and evaluation of DE 115
4.2.1. Deep ensembles (DE) 115
4.2.2. Uncertainty quality evaluation 119
4.2.3. Uncertainty calibration: STD scaling 124
4.3. Application of DE to aerodynamic performance regression task 128
4.3.1. Data preparation and training details 128
4.3.2. Evaluation of regression performance 130
4.3.3. Evaluation of UQ performance 132
4.3.4. Theoretical derivation: underconfidence of DE in regression tasks 134
4.4. DE models with STD calibration 138
4.4.1. STD calibration of DE models 138
4.4.2. Effects of STD calibration on exploratory behavior in Bayesian optimization 142
4.5. Summary 146
4.6. Additional results 149
4.6.1. Controversial issues on MC-dropout 149
4.6.2. Hyperparameter tuning results in Sec. 4.3.1 149
4.6.3. Additional results in Sec. 4.3.3 152
4.6.4. Additional results in Sec. 4.4.1 152
Chapter 5. Concluding remarks 155
5.1. Summary of the dissertation 155
5.2. Limitations of the dissertation 159
5.3. Embarking on a journey towards acceleration of 3D aerodynamic simulations 162
Chapter 6. References 163
Abstract in Korean (국문 초록) 190
Table 2.1. Design space of the six airfoil shape parameters: the baseline airfoil is selected as the median value of each range 44
Table 2.2. Flight conditions, objective functions, and constraints for single-objective and multi-objective optimizations 45
Table 2.3. Summary of the QoIs of the optimum solution 52
Table 2.4. Summary of the QoI of six selected Pareto solutions 57
Table 2.5. Nomenclature of the twelve points extracted to investigate the sharp changes in the QoI heatmaps 59
Table 3.1. Details of the blocks and layers of VAE/β-VAE used in this study. 79
Table 3.2. Network structure of the VAE/β-VAE used in this study. 79
Table 4.1. Optimized scaling factors for STD calibration 139
Table 4.2. Comprehensive comparison between GPR and DE-2 147
Table 4.3. Results of hyperparameter tuning: several structures of probabilistic NN used in the DE model are tested. 150
Table 4.4. Results of hyperparameter tuning: several GPR models are tested. 151
Figure 2.1. Flowchart of the two-step deep learning approach. 37
Figure 2.2. Flowchart of the inverse design optimization framework. 40
Figure 2.3. Comparison of pressure distributions from Xfoil and experimental results in Ref. [1] (adopted airfoil configuration is also visualized). 42
Figure 2.4. Shape parameters for airfoil representation: six PARSEC parameters are used. 43
Figure 2.5. Architecture of the VAE. 49
Figure 2.6. Convergence history of single-objective optimization with active learning. 50
Figure 2.7. Loss history of (a) MLP, and (b) VAE. For both models, the history of the first iteration and last (24th) iteration of active learning is represented. 50
Figure 2.8. Comparison of the baseline and optimum airfoil shape of single-objective optimization. 51
Figure 2.9. Comparison of the generated and calculated pressure distribution of the optimum airfoil (baseline pressure distribution is also included). 52
Figure 2.10. Comparison of 50 randomly selected Cₚ training data (black lines, a) and 50 generated Cₚ distributions by the VAE (red lines, b). The... 53
Figure 2.11. Loss history of (a) MLP, and (b) VAE. For both models, the history of the first iteration and last (59th) iteration of active learning is represented. 55
Figure 2.12. Pareto solutions of multi-objective optimization. The discontinuity in the Pareto solutions is due to Cd constraint violation. 56
Figure 2.13. Airfoil shape comparison of six selected Pareto solutions. 56
Figure 2.14. Comparison of generated and calculated Cₚ distributions of six selected Pareto solutions. 58
Figure 2.15. Heatmaps of two objective functions within the latent space: (a) L/D and (b) area. Twelve points are selected to investigate the rapid change... 59
Figure 2.16. Cₚ distributions of 12 points selected in Fig. 2.15. 61
Figure 2.17. Trends in the leading-edge radius (RL.E.) of 12 selected points in Fig. 2.15. 61
Figure 2.18. Pareto solutions of multi-objective optimization without Cd constraint. For comparison with Fig. 2.12, six designs previously selected from... 64
Figure 3.1. Overall structure of physics-aware reduced-order modeling. 73
Figure 3.2. Illustrative schematic showing the process of extracting physics-aware LVs by β-VAE: the ideal case is to extract the actual physical... 75
Figure 3.3. Computational grid used for the flow analysis; structured O-grid with a size of 512 × 256. 78
Figure 3.4. Structures of the AE and VAE/β-VAE. 78
Figure 3.5. Loss history of the trained AE/VAE/β-VAE models. 83
Figure 3.6. MSE and KL-divergence of the trained VAE/β-VAE models. 83
Figure 3.7. Reconstructed pressure fields of the trained models. 84
Figure 3.8. Absolute values of the components in the Pearson correlation matrix for LVs. 86
Figure 3.9. Determinants of Pearson correlation matrices for combinations of 2 to 7 LVs. 86
Figure 3.10. KL-divergence and Sobol results with respect to LVs from the training dataset. 88
Figure 3.11. Standard deviations of LVs from the training dataset. 89
Figure 3.12. Latent traversal plots of pressure flow fields for two extreme LVs: first (most dominant) and last (most trivial) LVs ranked by KL-divergence. 90
Figure 3.13. Investigation of physical features contained in the top two LVs: (a) distributions of training dataset and boundary data with respect to Ma... 93
Figure 3.14. The results of the single-variable LR: (a) Ma = f(LV_Ma), and (b) AoA = f(LV_AoA). 95
Figure 3.15. Latent traversal plots of airfoil surface pressure distributions in 1000-VAE: (a) traversal of LV_Ma, and (b) traversal of LV_AoA. 96
Figure 3.16. MSE of the regression models in ROM. 97
Figure 3.17. Comparison of the response surface of two LVs in the 1000-VAE: (a) physics-aware LV, (b) physics-unaware LV. 98
Figure 3.18. MSE of ROM prediction with the exclusion of the k-th LV. 99
Figure 3.19. Comparison of prediction MSE between physics-aware ROM and physics-unaware ROM. 100
Figure 3.20. Pressure contour predicted from AE/β-VAE-based ROMs: (a) prediction, (b) absolute error. 101
Figure 3.21. Pressure contour predicted from POD-based ROM: (a) prediction, (b) absolute error. 105
Figure 3.22. Latent traversal plots of airfoil surface pressure distributions in POD: (a) traversal of 1st LV, and (b) traversal of 2nd LV. 105
Figure 3.23. Preprocessed training dataset consisting of (a) 32 surface pressure values, (b) Cl, (c) Cd, and (d) Cₘ. 107
Figure 3.24. Investigation of physical features contained in the top two LVs for sparse and noisy datasets. 108
Figure 4.1. Flowchart of Bayesian optimization. 111
Figure 4.2. Flowchart of DE approach. 119
Figure 4.3. (a) Illustration of well-calibrated/miscalibrated models: 60% CI of the well-calibrated model contains 60% of the test data, whereas that of the... 121
Figure 4.4. Illustration of error-based reliability plot. Underconfident model overestimates RMV relative to RMSE, while overconfident model... 124
Figure 4.5. Loss history of all trained models. NLL calculated on the test dataset is adopted from the hyperparameter tuning results (Table 4.3 in Sec. 4.6.2). 129
Figure 4.6. Comparison of regression accuracy between GPR and DE-2: kernel density estimation (KDE) of test dataset with respect to NLL and RMSE... 131
Figure 4.7. Comparison of regression accuracy between GPR and all DE models: comprehensive results in terms of all aerodynamic QoIs. (a) NLL, (b) RMSE. 132
Figure 4.8. Reliability plots of GPR: (a) CI-based reliability plot, (b) Error-based reliability plot. 133
Figure 4.9. Reliability plots of DE: for simplicity, only the C_SF results of different DE models are shown. (a) CI-based reliability plot, (b) Error-based... 135
Figure 4.10. Reliability plots of DE after STD calibration: (a) CI-based reliability plot, (b) Error-based reliability plot. The noticeable effects of STD... 140
Figure 4.11. AUCE and ENCE of DE models before and after STD calibration. Those of GPR are also shown for comparison. 141
Figure 4.12. CIs of 68% confidence level predicted by DE-16: comparison between before and after STD calibration. 143
Figure 4.13. Effects of STD calibration for DE models on Bayesian optimization results. 145
Figure 4.14. Reliability plots of vanilla DE models: (left) CI-based reliability plots, (right) error-based reliability plots. 153
Figure 4.15. Reliability plots of DE models after STD calibration: (left) CI-based reliability plots, (right) error-based reliability plots. 154