Title Page
Abstract
Contents
Chapter 1. Introduction 14
1.1. Background and Problem Definition 15
1.2. Thesis Outline 22
Chapter 2. Related Work 23
2.1. Face Image Restoration 23
2.2. Image Quality Assessment 28
2.3. Knowledge Distillation 30
Chapter 3. Face Image Restoration using Adversarial Distillation of Facial Region Dictionary 34
3.1. Proposed Method 35
3.1.1. Teacher Model 35
3.1.2. Student Model 37
3.1.3. Objective Functions 38
3.2. Experiments 40
3.2.2. Quantitative Analysis 41
3.2.3. Qualitative Analysis 42
3.2.4. Effectiveness of Knowledge Distillation 45
Chapter 4. Interpretable Face Quality Assessment 48
4.1. Preliminary 48
4.2. Proposed Method 49
4.2.1. Generator for Image Restoration 49
4.2.2. Discriminator for Quality Assessment 51
4.2.3. Objective Functions 52
4.2.4. Assessment Protocol 54
4.3. Experiments 54
4.3.1. Implementation Details 54
4.3.2. Quantitative Analysis 55
4.3.3. Qualitative Analysis 59
4.4. In-depth Analysis 61
4.5. Discussion 63
Chapter 5. Conclusion 69
Bibliography 71
List of Tables
Table 3.1. Quantitative comparison of IR models in VGGFace2 and CelebA datasets 42
Table 3.2. Quantitative comparison between student and student w/o KD network in VGGFace2 and CelebA datasets 46
Table 3.3. Inference time and memory usage comparison between the teacher and student networks 46
Table 4.1. Quantitative comparative analysis with NR-IQA metrics on FFHQ, CelebA-HQ, and IWF datasets. 57
Table 4.2. Quantitative comparative analysis with FR-IQA metrics on FFHQ and CelebA-HQ. Note that our metric does not require reference image for... 58
Table 4.3. Ablation study of IFQA framework on FFHQ, CelebA-HQ, and IWF. We report the average correlation for the entire datasets. The same trainable... 62
Table 4.4. Performance comparison with respect to generator models on FFHQ, CelebA-HQ, and IWF. 62
Table 4.5. Performance comparison with respect to the backbone of discriminators on FFHQ, CelebA-HQ, and IWF. 63
List of Figures
Figure 1.1. Examples of face image restoration (FIR) from real-world face images. FIR aims to restore high-quality (HQ) face images from the degraded... 15
Figure 1.2. Examples of facial geometry/semantic prior information. 16
Figure 1.3. Comparison of landmark estimation results between high-quality (HQ) images (GT) and low-quality (LQ) images. (a) GT image, (b) landmark... 17
Figure 1.4. Which of 'Image A' or 'Image B' is closer to the given reference image or looks high-quality? General full-reference metrics (e.g., PSNR/SSIM)... 20
Figure 1.5. Supervision for IFQA metric. Regions from high-quality images provide 'real' labels (yellow) while regions from low-quality or restored face images... 21
Figure 3.1. Overview of our proposed adversarial feature map distillation method 36
Figure 3.2. Qualitative comparison of IR models in VGGFace2. 43
Figure 3.3. Qualitative comparison of IR models in CelebA. 44
Figure 3.4. Qualitative comparison between student and student without KD network in VGGFace2 and CelebA. 47
Figure 4.1. Comparison of PSNR/SSIM and human assessment on restored face images. PSNR/SSIM provides higher scores to 'Image A' than 'Image B' while... 49
Figure 4.2. IFQA framework consists of a generator that mimics the conventional FIR models and a discriminator that assesses the quality of the given images. We introduce facial primary region swap (FPRS) into discriminator... 50
Figure 4.3. An example of our survey questions. The survey asks participants to rank the given face image samples from most to least realistic. 56
Figure 4.4. Scatter plots of the quality scores of each IQA method versus subjective scores. 65
Figure 4.5. Comparison of the proposed metric and traditional PSNR/SSIM with respect to pixel-level scores. 66
Figure 4.6. IFQA results in more challenging scenarios. 67
Figure 4.7. IFQA in face manipulation scenarios using StarGANv2. 67
Figure 4.8. Interpretable visualization of the proposed metric on various types of LQ images, HQ images (i.e., reference) as well as restored images from general IR or FIR models. The first and second rows show images from... 68