Title Page
Abstract
Abstract (in Korean)
Preface
Contents
Chapter 1. Introduction 15
1.1. Time-Series Anomaly Detection 15
1.2. Time-Series Representation Learning 17
1.3. Structure of Dissertation 19
Chapter 2. Time-Series Anomaly Detection Based on Multi-Resolution Time-Series Representations 21
2.1. Background 23
2.2. Methods 25
2.2.1. Model Architecture 25
2.2.2. Objective Function 33
2.2.3. Anomaly Detection 36
2.3. Experiments 37
2.3.1. Experimental Settings 37
2.3.2. Experimental Results 41
2.4. Summary 48
Chapter 3. Learning Time-Series Representations Specific to Time-Series Anomaly Detection 49
3.1. Background 51
3.2. Methods 54
3.2.1. Time-Series Data Augmentation 54
3.2.2. Model Architecture 55
3.2.3. Objective Function 57
3.3. Experiments 58
3.3.1. Experimental Settings 58
3.3.2. Experimental Results 59
3.4. Summary 62
Chapter 4. Learning General Time-Series Representations for Diverse Time-Series Analysis 64
4.1. Background 67
4.2. Methods 70
4.2.1. Model Architecture 70
4.2.2. Self-Supervised Tasks 72
4.2.3. Objective Function 80
4.3. Experiments 81
4.3.1. Experimental Settings 81
4.3.2. Experimental Results 85
4.4. Summary 95
Chapter 5. Conclusion 97
References 101
List of Tables
Table 1.1. The main structure of this dissertation 19
Table 2.1. Dataset description 38
Table 2.2. Overall performance comparison 42
Table 2.3. Effects of model components 45
Table 2.4. Effects of τ 46
Table 3.1. Overall performance comparison 60
Table 3.2. Effects of DeepSVDD loss 60
Table 4.1. Experimental results on time-series classification 84
Table 4.1. Experimental results on time-series classification (continued) 85
Table 4.2. Experimental results on time-series forecasting 89
Table 4.3. Experimental results on time-series anomaly detection 91
Table 4.4. Effects of model components 92
Table 4.5. Experimental results on cross-domain transfer learning 95
List of Figures
Figure 2.1. Overall architecture of the proposed method 27
Figure 2.2. Comparison of input time series and reconstructed output from RAE-MEPC 44
Figure 2.3. Effects of λpred 47
Figure 3.1. Overall architecture of the proposed method 53
Figure 3.2. Visualization of learned representations in 2D-gesture dataset 61
Figure 4.1. Feature space learned under each consistency 65
Figure 4.2. Overall architecture of the proposed method 71
Figure 4.3. Contrastive learning for contextual consistency 73
Figure 4.4. Contrastive learning for temporal consistency 76
Figure 4.5. Contrastive learning for transformation consistency 79
Figure 4.6. Visualization of learned representations in BasicMotions and RacketSports datasets 87
Figure 4.6. Visualization of learned representations in BasicMotions and RacketSports datasets (continued) 88
Figure 4.7. Visualization of learned representations in HAR dataset 93