Multi-Camera Based Indoor Human Action Recognition Using Fuzzy Rules
Stephen Karungaru, Masayuki Daikoku, Kenji Terada
Abstract
In this paper, a method for recognizing human actions in an indoor work environment using multiple cameras is proposed. People are detected using HOG features learned with AdaBoost and refined by background differencing, while the overlapping camera views are merged using a perspective transformation. First, the distance moved by the detected region between successive frames is used to determine whether a person is stationary or moving. The direction the person is facing is estimated from the width of the detected region. Several fuzzy rules are then applied to recognize the human action based on the person's height measured relative to the facing direction. In addition, suspicious actions are recognized from the presence of an abandoned object and the detection duration. In experiments, the recognition accuracies achieved for the "walking", "stop", "running", "sitting", "desk working", and "falling" actions are 87%, 92%, 46%, 95%, 89%, and 91%, respectively.
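To illustrate the fuzzy-rule step described above, the sketch below classifies a detected person's posture from the normalized bounding-box height using trapezoidal membership functions. The membership shapes, thresholds, and the normalization by a reference standing height are assumptions made for this example only, not the rules used in the paper.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)


def classify_posture(box_height, reference_height):
    """Fuzzy classification of a detected person's posture from bounding-box height.

    box_height       -- height (pixels) of the detected region in the current frame
    reference_height -- height (pixels) of the same person when standing upright
    Note: the breakpoints below are illustrative assumptions, not the paper's rules.
    """
    h = box_height / reference_height  # normalized height, roughly in [0, 1]

    memberships = {
        "falling":  trapezoid(h, 0.0, 0.0, 0.30, 0.45),   # very low box: person on the floor
        "sitting":  trapezoid(h, 0.35, 0.50, 0.65, 0.80), # medium box: seated / desk working
        "standing": trapezoid(h, 0.70, 0.85, 1.00, 1.20), # near full height: standing, walking, running
    }
    # Defuzzify by selecting the rule with the highest membership degree.
    action = max(memberships, key=memberships.get)
    return action, memberships


if __name__ == "__main__":
    print(classify_posture(box_height=62, reference_height=170))   # -> falling
    print(classify_posture(box_height=105, reference_height=170))  # -> sitting
    print(classify_posture(box_height=165, reference_height=170))  # -> standing
```

In a full pipeline, the "standing" output would be further split into "stop", "walking", or "running" using the inter-frame displacement of the detected region, as outlined in the abstract.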