Title Page
Contents
Chapter 1. Introduction 11
Chapter 2. Preliminaries 15
2.1. Generative Adversarial Network 15
2.2. Revisiting the Fully Connected Layer 16
Chapter 3. Proposed Method 18
3.1. Cascading Rejection Module 18
3.2. Understanding and Analysis 21
3.3. Conditional Cascading Rejection Module 24
Chapter 4. Experiments 27
4.1. Implementation Details 27
4.2. Evaluation Models 28
4.3. Evaluation Metric 30
4.4. Image Generation 33
4.5. High-resolution Image Generation 40
Chapter 5. Conclusion 42
References 43
Bibliography 44
Abbreviations 50
Abstract 51
초록 (Abstract in Korean) 53
CURRICULUM VITAE 54
List of Tables
Table 4.1. Detailed architectures of the generator according to the image resolution. BN indicates a batch normalization layer. 29
Table 4.2. Detailed architectures of the discriminator according to the image resolution. 31
Table 4.3. Comparison of the proposed method with the traditional GAN on the CIFAR-10, CelebA, and LSUN datasets in terms of FID. 34
Table 4.4. Comparison of the proposed method with the traditional GAN on the CelebA and LSUN datasets in terms of FID. The bold numbers indicate the best performance. 35
Table 4.5. Comparison of the proposed method with the traditional GAN and Projection cGAN on the tiny-ImageNet dataset in terms of FID. The bold numbers indicate the best performance. 37
Table 4.6. Comparison of the proposed method with the traditional GAN on the CelebA-HQ dataset in terms of FID. The bold numbers indicate the best performance. 40
List of Figures
Figure 2.1. Example of the inner product. In the inner product process, the feature space which is perpendicular to w is ignored. 17
Figure 3.1. The illustration of the CR module. In the CR module, N scalar values are obtained through iterative vector rejection and inner product processes. 20
Figure 3.2. The illustration of the experimental results on eight 2D Gaussian mixture models. I trained the networks twice to reveal the true trend of the CR module. 23
Figure 3.3. The illustration of the conditional projection discriminator. 24
Figure 3.4. The illustration of the cCR module. In this dissertation, I propose the cCR based on the conditional projection discriminator in [24]. 25
Figure 4.1. Detailed architectures of the ResBlock used in my experiments. (a) ResBlock of the discriminator, (b) ResBlock of the generator. 32
Figure 4.2. The FID score over the training iteration on the tiny-ImageNet dataset. The blue and red lines indicate the traditional GAN and the proposed method, respectively. 36
Figure 4.3. The FID score over the training iteration on the tiny-ImageNet dataset. The blue and red lines indicate the projection cGAN and the proposed method, respectively. 38
Figure 4.4. Examples of the generated images on the CelebA, LSUN, and tiny-ImageNet datasets. (a) Generated images on the CelebA dataset, (b) Generated images on the LSUN dataset, (c) Generated images on the tiny-ImageNet dataset. 39
Figure 4.5. Examples of the generated images with 512 x 512 and 256 x 256 resolutions on the CelebA-HQ dataset. 41