Audio-visual Data Fusion for Tracking the Direction of Multiple Speakers
- Nguyen Van Quang; Jong-Suk Choi
- audio-visual data fusion; sound source localization; particle filter; speaker tracking
- Issue Date
- International Conference on Control, Automation and Systems
- pp. 1626-1630
This paper presents a multi-speaker tracking algorithm using audio-visual data fusion. The audio
information is the direction of speakers, and the visual information is the direction of detected faces. These observations
are used as inputs to the tracking algorithm, which employs the particle filter framework. For multi-target tracking,
we present a flexible data association and data fusion scheme, which can deal with the appearance or absence of either modality
during the tracking process. Experimental results on data collected from a robot platform in a conventional office room
confirm a potential application for human-robot interaction.
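The abstract describes fusing audio direction-of-arrival and face-direction observations in a particle filter, where either modality may be absent at any time step. A minimal single-speaker sketch of that idea is given below; it is not the authors' implementation, and the motion model, noise parameters (`process_sigma`, `audio_sigma`, `visual_sigma`), and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_likelihood(particles, obs, sigma):
    # Angular difference wrapped to [-180, 180) degrees.
    d = (particles - obs + 180.0) % 360.0 - 180.0
    return np.exp(-0.5 * (d / sigma) ** 2)

def track_direction(observations, n_particles=500,
                    process_sigma=5.0, audio_sigma=10.0, visual_sigma=5.0):
    """Track one speaker's azimuth (degrees) with a particle filter.

    `observations` is a sequence of (audio_obs, visual_obs) pairs; either
    entry may be None when that modality is missing at that time step,
    mirroring the flexible fusion described in the abstract.
    """
    particles = rng.uniform(0.0, 360.0, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for audio_obs, visual_obs in observations:
        # Predict: random-walk motion model on the azimuth.
        particles = (particles + rng.normal(0.0, process_sigma, n_particles)) % 360.0
        # Update: fuse whichever observations are available.
        if audio_obs is not None:
            weights *= gaussian_likelihood(particles, audio_obs, audio_sigma)
        if visual_obs is not None:
            weights *= gaussian_likelihood(particles, visual_obs, visual_sigma)
        weights += 1e-300          # guard against all-zero weights
        weights /= weights.sum()
        # Estimate: circular weighted mean of the particle set.
        est = np.degrees(np.arctan2(
            np.sum(weights * np.sin(np.radians(particles))),
            np.sum(weights * np.cos(np.radians(particles))))) % 360.0
        estimates.append(est)
        # Systematic resampling to avoid weight degeneracy.
        positions = (np.arange(n_particles) + rng.random()) / n_particles
        particles = particles[np.searchsorted(np.cumsum(weights), positions)]
        weights = np.full(n_particles, 1.0 / n_particles)
    return estimates
```

A multi-speaker extension, as in the paper, would additionally require data association (assigning each observation to one of several tracked filters), which is omitted here for brevity.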
- Appears in Collections:
- KIST Publication > Conference Paper