Title Page
VITA
ABSTRACT
Contents
Nomenclature 20
Chapter 1. Introduction 23
1.1. Background Information 25
1.1.1. Fault types 25
1.1.2. Bearing Fault Frequencies 26
1.1.3. Condition Monitoring Methodologies 28
1.2. Motivation 30
1.3. Thesis Objectives and Contributions 32
1.4. Thesis Outline 33
Part I. Data Acquisition 35
Chapter 2. Dataset Description 36
2.1. Current Signal Data Acquisition 36
Part II. Fault Diagnosis of Rotating Machines Based on Advanced Signal Processing Methods and Feature Extraction 40
Chapter 3. Bearing Fault Diagnosis of Induction Motors Using a Genetic Algorithm and Machine Learning Classifiers 41
3.1. Introduction 41
3.2. Technical Background 45
3.3. Methodology 50
3.4. Experimental Results 52
3.5. Conclusions 56
Chapter 4. Bearing Fault Classification of Induction Motors Using Discrete Wavelet Transform and Ensemble Machine Learning Algorithms 58
4.1. Introduction 58
4.2. Technical Background 62
4.3. Methodology 71
4.4. Experimental Results 72
4.5. Conclusions 75
Part III. Fault Diagnosis of Rotating Machines Based on Data-Driven Artificial Intelligence Techniques 77
Chapter 5. A Deep Autoencoder-Based Convolutional Neural Network Framework for Bearing Fault Classification in Induction Motors 78
5.1. Introduction 78
5.2. Technical Background 83
5.3. Methodology 89
5.4. Experimental Results 90
5.5. Conclusions 95
Chapter 6. A Bearing Fault Classification Framework Based on Image Encoding Techniques and a Convolutional Neural Network Under Different Operating Conditions 97
6.1. Introduction 97
6.2. Technical Background 99
6.3. Methodology 104
6.4. Experimental Results 106
6.5. Conclusions 112
Part IV. Summary and Future Work 114
Chapter 7. Summary of Contributions and Future Work 115
7.1. Summary of Contributions 115
7.2. Future Work 117
Publications 119
References 121
List of Tables
Table 2.1. Operating parameters. 39
Table 2.2. Characterization of considered bearings. 39
Table 3.1. Statistical features extracted from the time domain for the feature matrix (x is the current signal). 45
Table 3.2. Performance evaluation parameters. 50
Table 3.3. The genetic algorithm parameter settings for feature selection. 53
Table 3.4. The results of three classifiers in terms of six evaluation parameters. 55
Table 3.5. Comparing evaluation parameters of one-phase and two-phase current signals. 55
Table 3.6. Accuracy comparison among the different methods. 56
Table 4.1. The formulae used for feature extraction from the current signal (x represents the signal vector). 67
Table 4.2. Optimum parameters. 71
Table 4.3. Five evaluation parameters of the two classifiers for three wavelets on the raw signal. 72
Table 4.4. Five evaluation parameters of the two classifiers for three wavelets on the filtered signal. 73
Table 4.5. Comparison of classification accuracy among the various methods. 75
Table 5.1. The structure of the designed deep autoencoder (DAE). 85
Table 5.2. The sequential model of a 2-layer CNN. 89
Table 5.3. List of statistical features extracted from the residual signal. 93
Table 5.4. The results of the evaluation parameters of five different approaches. 93
Table 5.5. Comparison of accuracy metrics with existing works. 95
Table 6.1. Layer-wise details of the deep CNN. 104
Table 6.2. The splitting ratio of the dataset into training, validation, and test sets. 107
Table 6.3. The performance measurement of the designed CNN architecture. 107
Table 6.4. Accuracy values for different train-test ratios. 109
Table 6.5. The resultant evaluation metrics for three different approaches. 111
Table 6.6. The classification results of some existing works. 112
List of Figures
Figure 1.1. Structure of a rolling element bearing. 27
Figure 1.2. Classification of fault diagnosis methods. 28
Figure 2.1. Schematic of the experimental testbed. 36
Figure 2.2. Artificially induced bearing faults: (a) EDM trench, (b) drilled hole, (c) electrically engraved pitting. 38
Figure 3.1. A flow chart of genetic algorithm optimization. 47
Figure 3.2. KNN algorithm for a situation with two classes and two features. 48
Figure 3.3. (a) Decision tree, (b) random forest architecture. 49
Figure 3.4. Schematic diagram of the proposed methodology. 51
Figure 3.5. Effect of varying (a) the population size and (b) crossover probability. 52
Figure 3.6. Effect of varying (a) the mutation probability for estimating loss and (b) generations for fitness value. 53
Figure 3.7. Confusion matrix for (a) KNN, (b) decision tree, and (c) random forest classifiers. 54
Figure 3.8. Combined ROC curve for the KNN, decision tree, and random forest classifiers. 54
Figure 4.1. Characteristic signals of three mother wavelets. 64
Figure 4.2. (a) Raw and (b) filtered signals of three types of bearing states. 65
Figure 4.3. Wavelet decomposition of the 'Haar' mother wavelet of a healthy bearing. 66
Figure 4.4. Wavelet decomposition of the 'db4' mother wavelet of the outer fault bearing. 66
Figure 4.5. Wavelet decomposition of the 'sym4' mother wavelet of the inner fault bearing. 67
Figure 4.6. (a) Random forest and (b) XGBoost algorithm architectures. 70
Figure 4.7. Workflow of the proposed method for bearing fault classification of IM. 72
Figure 4.8. Accuracy for RF and XGB for raw and filtered motor current signals. 73
Figure 4.9. Confusion matrix of the db4 wavelet for (a) RF and (b) XGBoost classifiers. 74
Figure 4.10. Confusion matrix of the sym4 wavelet for (a) RF and (b) XGBoost classifiers. 74
Figure 4.11. Confusion matrix of the Haar wavelet for (a) RF and (b) XGBoost classifiers. 74
Figure 4.12. ROC curves for the RF and XGBoost classifiers. 75
Figure 5.1. The basic architecture of the autoencoder. 84
Figure 5.2. The architecture of the designed CNN. 89
Figure 5.3. The designed framework of a DAE-CNN-based fault classification model. 90
Figure 5.4. The raw, predicted, and residual signal of bearings corresponding to (a) normal, (b) outer race fault, (c) inner race fault conditions, and (d) residual values for 100 samples of three... 92
Figure 5.5. (a) Boxplot of the accuracy metric over 100 experiments and (b) confusion matrix. 94
Figure 6.1. Data segmentation for converting time-series data into an image. 99
Figure 6.2. Steps of GAF: (a) normalized time-series signal; (b) converted signal in polar coordinates; (c) GASF; and (d) GADF. 101
Figure 6.3. The modified architecture of the deep convolutional neural network. 103
Figure 6.4. The workflow of the proposed method. 105
Figure 6.5. The resultant 2-D images after applying the GASF and GADF algorithms to four working conditions. 106
Figure 6.6. Performance of the three existing models: (a) accuracy and (b) precision, recall, and F1-score for GADF-encoded images of the four conditions. 108
Figure 6.7. Accuracy and loss curve of the deep CNN model. 109
Figure 6.8. Feature visualization via t-SNE: (a) input image, (b) initial convolution layer, (c) final convolution layer, and (d) output layer. 111
Figure 6.9. The confusion matrix of (a) Original + 1-D CNN, (b) CWT + 2-D CNN, and (c) GAF + 2-D CNN for a single condition data, (d) GAF + 2-D CNN for the complete dataset. 111