Analyzing Human Activities and Transferring Semantic Representations to Humanoid Robots
Abstract
We propose a semantic representation framework for inferring human activities from observations. Building on this framework, we are able to transfer tasks and skills demonstrated by humans to humanoid robots. Our method interprets a demonstrator's behavior at a higher level of abstraction by means of semantic representations: to achieve the required task, the robot carries out its actions in accordance with the abstracted essence of the activity derived from the observed demonstrations. To this end, the motions and the properties of the humans and objects involved are combined into a meaningful semantic description. We validate the semantic rules on three contrasting, complex kitchen activities, i.e., 1) preparing pancakes, 2) preparing sandwiches, and 3) setting the table, in order to determine whether semantic consistency is maintained across tasks. Our study presents measurable and detailed results showing that the system copes with time constraints, with different execution styles of different participants performing the same task, and with different labeling strategies for the same task, all without any further training. Rules inferred from the representations of one scenario remain valid in new situations, which further shows that the derived representations do not depend on a particular task. In our experiments, the system correctly recognized human activities in real time in about 87.49% of cases, which is higher than the accuracy of about 77.08% achieved by an arbitrary human observer judging the same performances.
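As a rough illustration of the kind of rule-based inference summarized above, the following minimal Python sketch maps per-frame observations of hand motion and object properties to high-level activity labels. The predicate names (hand_moving, object_in_hand, object_acted_on) and the label set are illustrative assumptions for this sketch, not the paper's exact vocabulary or implementation.

```python
# Minimal sketch: combining motion and object properties into a semantic
# activity label. Predicates and labels are assumed for illustration only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Observation:
    hand_moving: bool                 # is the demonstrator's hand in motion?
    object_in_hand: Optional[str]     # object currently held, if any
    object_acted_on: Optional[str]    # object the hand is directed toward, if any


def infer_activity(obs: Observation) -> str:
    """Map one frame of observed motion/object properties to an activity label."""
    if not obs.hand_moving:
        # Hand at rest: either idle or holding an object in place.
        return "hold" if obs.object_in_hand else "idle"
    if obs.object_in_hand and obs.object_acted_on:
        # Moving a held object toward another object, e.g. pouring or placing.
        return "put_or_use"
    if obs.object_in_hand:
        return "take"    # moving with an object just grasped
    if obs.object_acted_on:
        return "reach"   # empty hand moving toward an object
    return "move"        # unconstrained hand motion


if __name__ == "__main__":
    frames = [
        Observation(False, None, None),           # expected: idle
        Observation(True, None, "pancake_mix"),   # expected: reach
        Observation(True, "pancake_mix", None),   # expected: take
        Observation(True, "pancake_mix", "pan"),  # expected: put_or_use
    ]
    for frame in frames:
        print(infer_activity(frame))
```

Because the rules operate on abstracted predicates rather than on raw trajectories, the same mapping can in principle be reused across scenarios and demonstrators, which is the task-independence property the abstract refers to.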