Automatic text-to-gesture rule generation for embodied conversational agents

Authors
Ali, Ghazanfar; Lee, Myungho; Hwang, Jae-In
Issue Date
2020-07
Publisher
WILEY
Citation
COMPUTER ANIMATION AND VIRTUAL WORLDS, v.31, no.4-5
Abstract
Interactions with embodied conversational agents can be enhanced using human-like co-speech gestures. Traditionally, rule-based co-speech gesture mapping has been used for this purpose. However, creating such a mapping is laborious and often requires human experts. Moreover, human-created mappings tend to be limited and are therefore prone to generating repeated gestures. In this article, we present an approach that automates the generation of rule-based co-speech gesture mapping from a publicly available large video dataset without the intervention of human experts. At run time, word embeddings are used for rule searching to retrieve semantically aware, meaningful, and accurate rules. The evaluation indicated that our method achieved performance comparable to a manual map created by human experts, while activating a greater variety of gestures. Moreover, synergy effects were observed in users' perception of the generated co-speech gestures when our map was combined with the manual map.
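The run-time rule search described in the abstract can be illustrated with a minimal sketch: given a spoken word, find the rule whose trigger word is semantically closest in embedding space, so words without an exact rule (e.g., a synonym of a trigger) can still activate a gesture. The embedding table, rule map, and threshold below are toy placeholders for illustration, not the paper's actual data; in practice, pretrained vectors such as word2vec or GloVe and an automatically mined rule map would be loaded instead.

```python
# Hypothetical sketch of embedding-based rule lookup for co-speech gestures.
import numpy as np

# Toy word vectors standing in for a pretrained embedding table.
EMBEDDINGS = {
    "big":   np.array([0.9, 0.1, 0.0]),
    "large": np.array([0.8, 0.2, 0.1]),
    "hello": np.array([0.0, 0.9, 0.3]),
    "hi":    np.array([0.1, 0.8, 0.4]),
}

# Hypothetical text-to-gesture rule map: trigger word -> gesture clip ID.
RULE_MAP = {
    "big": "gesture_expand_arms",
    "hello": "gesture_wave",
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def lookup_gesture(word: str, threshold: float = 0.85):
    """Return the gesture whose trigger word is semantically closest
    to `word`, or None if no rule is similar enough."""
    vec = EMBEDDINGS.get(word)
    if vec is None:
        return None
    best_gesture, best_sim = None, threshold
    for trigger, gesture in RULE_MAP.items():
        sim = cosine(vec, EMBEDDINGS[trigger])
        if sim >= best_sim:
            best_gesture, best_sim = gesture, sim
    return best_gesture

if __name__ == "__main__":
    # "large" has no rule of its own but is close to "big" in
    # embedding space, so the expand-arms gesture still fires.
    print(lookup_gesture("large"))  # gesture_expand_arms
```

Using a similarity threshold rather than an exact-match dictionary lookup is what makes the rule search semantic-aware: one mined rule can cover a neighborhood of related words instead of a single token.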
Keywords
computer animation; gesture generation; rule-based mapping; social agents; virtual agents
ISSN
1546-4261
URI
https://pubs.kist.re.kr/handle/201004/118429
DOI
10.1002/cav.1944
Appears in Collections:
KIST Article > 2020
Files in This Item:
There are no files associated with this item.
