Human localization based on the fusion of vision and sound system

Authors
Kim, S.-W.; Lee, J.-Y.; Kim, D.; You, B.-J.; Doh, N.L.
Issue Date
2011-11
Publisher
IEEE
Citation
2011 8th International Conference on Ubiquitous Robots and Ambient Intelligence, URAI 2011, pp.495 - 498
Abstract
In this paper, a method for accurate human localization using a sequential fusion of sound and vision is proposed. Although sound localization alone works well in most cases, conditions such as a noisy environment or a small inter-microphone distance may produce wrong or poor results. A vision system also has deficiencies, such as a limited field of view. To address these problems, we propose a method that combines sound localization and vision in real time. Specifically, the robot first finds the rough location of the speaker via sound source localization and then uses vision to increase the accuracy of that location. Experimental results show that the proposed method is more accurate and reliable than pure sound localization. © 2011 IEEE.
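The two-stage pipeline the abstract describes (rough azimuth from sound, refined by vision) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the far-field TDOA model, the microphone spacing, and the pinhole-camera refinement rule are all assumptions chosen for the example.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def azimuth_from_tdoa(tdoa_s: float, mic_distance_m: float) -> float:
    """Rough speaker azimuth (radians) from the time delay of arrival
    between two microphones, using the far-field approximation
    sin(theta) = c * tdoa / d. This stands in for the paper's sound
    source localization stage."""
    ratio = SPEED_OF_SOUND * tdoa_s / mic_distance_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp against measurement noise
    return math.asin(ratio)

def refine_with_vision(rough_azimuth_rad: float,
                       target_center_x_px: float,
                       image_width_px: int,
                       horizontal_fov_rad: float) -> float:
    """Refine the rough azimuth with the horizontal pixel offset of a
    detected person (e.g., a face-detector bounding box), assuming a
    pinhole camera aimed along the rough direction."""
    offset_px = target_center_x_px - image_width_px / 2.0
    focal_px = (image_width_px / 2.0) / math.tan(horizontal_fov_rad / 2.0)
    correction_rad = math.atan2(offset_px, focal_px)
    return rough_azimuth_rad + correction_rad

if __name__ == "__main__":
    # Sound stage: 0.2 ms delay across a 0.2 m microphone pair
    rough = azimuth_from_tdoa(2.0e-4, 0.2)
    # Vision stage: detection centered 40 px right of image center,
    # 640 px wide image, 60-degree horizontal field of view
    fused = refine_with_vision(rough, 360.0, 640, math.radians(60.0))
    print(math.degrees(rough), math.degrees(fused))
```

The sequential design mirrors the abstract: the coarse acoustic estimate steers the camera so the speaker falls inside its limited field of view, and the higher angular resolution of vision then corrects the estimate.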
ISSN
0000-0000
URI
https://pubs.kist.re.kr/handle/201004/80437
DOI
10.1109/URAI.2011.6145870
Appears in Collections:
KIST Conference Paper > 2011