Title Page
Abstract
Contents
Chapter 1. Introduction 12
1.1. Motivation 13
1.2. Thesis Contribution 15
1.3. Thesis Organization 15
Chapter 2. Human Motion Estimation using Inertial Navigation Algorithms 17
2.1. Standard Inertial Navigation Algorithm using Indirect Kalman Filter 20
2.2. Measurement updates for Indirect Kalman Filter 22
2.3. Optimization-based Smoother for Inertial Navigation Algorithm 25
2.4. Chapter Summary 26
Chapter 3. Visual Inertial Odometry for Gait Analysis Application 27
3.1. System Overview 29
3.1.1. Hardware setup 29
3.1.2. Notation 29
3.2. Visual inertial odometry filtering 30
3.2.1. System equation and state definition 30
3.2.2. Visual odometry updating 34
3.3. Floor and foot detection 37
3.3.1. Floor plane detection 37
3.3.2. Foot detection and ellipse approximation 38
3.4. Measurement equations 39
3.4.1. Measurement equation for markers on the floor 40
3.4.2. Measurement equation for the stance foot 42
3.5. Filtering and smoothing algorithm 45
3.6. Chapter Summary 47
Chapter 4. Deep Learning-based Human Motion Estimation 48
4.1. Foot detection framework 49
4.1.1. FCN for Semantic Segmentation 51
4.1.2. Custom Dataset 52
4.1.3. Foot detection model training 53
4.1.4. Foot position estimation 54
4.2. Human lower-body motion prediction framework 55
4.2.1. GCN for human motion prediction 56
4.2.2. Dataset: Lower body pose in walking action 58
4.2.3. Model training 58
4.2.4. Pose prediction 59
4.3. Chapter Summary 59
Chapter 5. Experiments and Results 61
5.1. Experiments 62
5.2. Results 63
5.2.1. Visual Inertial Odometry results 63
5.2.2. Deep learning-based foot trajectory estimation results 66
5.2.3. Human lower-limb motion estimation results 70
Chapter 6. Conclusions and Future Works 72
6.1. Conclusions 73
6.2. Future Works 74
Publications 76
References 77
Table 5.1. Information of the five subjects 63
Table 5.2. Estimated stride length error. 66
Table 5.3. Estimated dual foot trajectories error 69
FIGURE 3.1. Algorithm structure. 28
FIGURE 3.2. System overview. 29
FIGURE 3.3. Relative camera pose of two consecutive image frames 34
FIGURE 3.4. Stance and swing period 37
FIGURE 3.5. Example of stance foot set S_left,l 39
FIGURE 3.6. Markers in the starting and final points 40
FIGURE 4.1. An illustration of the progression from coarse to fine inference. 49
FIGURE 4.2. An example of semantic segmentation from Pascal VOC 2010. 50
FIGURE 4.3. An example of visualizing the groundtruth data in our custom dataset. 50
FIGURE 4.4. FCN structure. 51
FIGURE 4.5. FCN strides prediction. 52
FIGURE 4.6. An example of our custom dataset, including RGB, HHA, and groundtruth images using Labelme tool. 53
FIGURE 4.7. Keypoints on a detected foot. 54
FIGURE 4.8. Implemented network architecture with GCN. 58
FIGURE 5.1. Participant equipped with the camera system performing the experiments. 62
FIGURE 5.2. Experiment setup: the volunteer walks three straight paths from the green triangle to the red square through the working range of the optical tracking system. 62
FIGURE 5.3. Detected stance foot from point cloud data and masked in RGB image: 1) initial states; 2) first detected right foot; 3) already detected right... 63
FIGURE 5.4. Estimated walking trajectory using integration of internal IMU data only, the proposed filter, and the proposed smoother. 64
FIGURE 5.5. xy-plane estimated walking trajectory and foot position using proposed Kalman filter and smoother. 64
FIGURE 5.6. 3-axis estimated walking trajectory and foot position using proposed Kalman filter and smoother (IC: Initial Contact, TO: Toe Off). 65
FIGURE 5.7. Comparison of estimated foot positions with ground truth. 65
FIGURE 5.8. FCN evaluation results: 1) pixel accuracy; 2) mean IoU; 3) frequency-weighted average accuracy. 67
FIGURE 5.9. Deep learning-based foot detection results: prediction: 1) predicted classes from the network; post-processing: 2) filtered blobs smoothed with... 67
FIGURE 5.10. Three-axis estimated trajectories of: 1) the camera; 2) the left and right feet; 3) 3D positions of both the camera and the two feet. 68
FIGURE 5.11. 3D position of the estimated foot trajectory and ground truth. 69
FIGURE 5.12. 3D positions of the GCN-based predicted left and right feet. 70
FIGURE 5.13. 3D positions of the GCN-based predicted and VIO-based estimated left foot trajectories. 71
FIGURE 5.14. 3D positions of the GCN-based predicted and ground-truth dual foot trajectories. 71