Object Modeling for Environment Perception through Human-Robot Interaction

Authors
Kim, Soohwan; Kim, Dong Hwan; Park, Sung-Kee
Issue Date
2010
Publisher
IEEE
Citation
International Conference on Control, Automation and Systems (ICCAS 2010), pp. 2328-2333
Abstract
In this paper, we propose a new method of object modeling for environment perception through human-robot interaction. In particular, within a multi-modal object modeling architecture, we address the gestural-language component using a stereo camera. To this end, we define three human gestures according to the size of the target object: holding small objects, pointing at medium-sized ones, and touching two corner points of large ones. When a user indicates where a target object is located in the environment, the robot interprets the gesture and captures one or more images containing the target object. The region of interest where the target object is likely to appear in each captured image is estimated from the environmental context and the user's gesture. Finally, given an image with a region of interest, the robot performs foreground/background segmentation automatically; for this step we propose a marker-based watershed segmentation method. Experimental results show that the segmentation quality of our method is comparable to that of the GrabCut algorithm, while its computation time is much shorter, making it suitable for on-line interactive object modeling.
URI
https://pubs.kist.re.kr/handle/201004/115747
Appears in Collections:
KIST Conference Paper > 2010
Files in This Item:
There are no files associated with this item.
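Illustrative code sketch
The abstract's final step, marker-based watershed segmentation inside a gesture-derived region of interest, can be illustrated with a minimal OpenCV sketch. This is not the authors' implementation: the function name segment_in_roi, the (x, y, w, h) ROI format, and the fixed shrink fraction used to seed the foreground core are illustrative assumptions.

import cv2
import numpy as np

def segment_in_roi(image_bgr, roi, shrink=0.25):
    # Marker-based watershed within a rectangular ROI (x, y, w, h).
    # Label 1 = certain background, label 2 = certain foreground,
    # label 0 = unknown band that watershed will resolve.
    x, y, w, h = roi
    markers = np.full(image_bgr.shape[:2], 1, dtype=np.int32)
    markers[y:y + h, x:x + w] = 0  # ROI interior starts as unknown
    dx, dy = int(w * shrink), int(h * shrink)
    markers[y + dy:y + h - dy, x + dx:x + w - dx] = 2  # foreground seed

    # Watershed floods the unknown band from both seed regions;
    # boundary pixels are set to -1 in the markers array in place.
    cv2.watershed(image_bgr, markers)
    return (markers == 2).astype(np.uint8) * 255  # binary foreground mask

Hypothetical usage, with an image captured by the robot and an ROI produced by the gesture interpreter (file name and box values are placeholders):

img = cv2.imread("scene.png")
mask = segment_in_roi(img, (120, 80, 200, 150))

Unlike GrabCut, which alternates Gaussian-mixture color modeling with graph-cut optimization over several iterations, watershed performs a single flooding pass from the seed regions, which is consistent with the speed advantage the abstract reports.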