Taking human factors into account, this visual servoing approach provides robots with real-time situational information so that they can accomplish tasks in direct, close collaboration with people. A hybrid visual servoing algorithm, combining classical position-based and image-based visual servoing, is applied over the whole task space. A model-based tracker monitors human activities by matching a skeleton representation of the human body to the person's appearance in the image. Grasping algorithms compute grasp points from the geometric model of the robot gripper. Since the major challenges of human-robot interactive object transfer are visual occlusion and grasp planning, this work proposes a new method for visually guiding a robot in the presence of partial visual occlusion and elaborates a solution for adaptive robotic grasping.
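To make the hybrid idea concrete, the sketch below shows one common way to blend an image-based error with a position-based pose error in a single proportional control law; the feature vectors, the stacked interaction matrix, the gain `lam`, and the function `hybrid_vs_velocity` are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

# Minimal sketch of a hybrid visual servoing control step, assuming a
# generic stacking of image-based (IBVS) and position-based (PBVS) errors.

def hybrid_vs_velocity(s, s_star, L_s, pose_err, lam=0.5):
    """Compute a 6-DoF camera velocity from stacked IBVS and PBVS errors.

    s, s_star : current and desired image features, shape (k,)
    L_s       : interaction (image Jacobian) matrix for s, shape (k, 6)
    pose_err  : 6-vector pose error (translation, axis-angle rotation) from PBVS
    lam       : proportional control gain (illustrative value)
    """
    e_img = s - s_star                     # image-space error (IBVS part)
    L_pose = np.eye(6)                     # PBVS error acts directly on the camera twist
    L = np.vstack([L_s, L_pose])           # stacked interaction matrix
    e = np.concatenate([e_img, pose_err])  # stacked hybrid error
    # Classical proportional law: v = -lambda * pinv(L) * e
    v = -lam * np.linalg.pinv(L) @ e
    return v                               # [vx, vy, vz, wx, wy, wz]
```

In such a scheme the image-based term keeps the tracked features in view while the position-based term drives the gripper pose toward the handover target; the relative weighting of the two parts would depend on the application.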