Auto-generating Virtual Human Behavior by Understanding User Contexts

Authors
Kim, Hanseob; Ali, Ghazanfar; Hwang, Jae-In; Kim, Gerard J.; Kim, Seungwon
Issue Date
2021-03
Publisher
IEEE
Citation
28th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR), pp. 591-592
Abstract
Virtual humans are most natural and effective when they can act out and animate verbal/gestural actions. One popular method of realizing this is to infer the actions from predefined phrases. This research aims to provide a more flexible method that activates various behaviors directly from natural conversation. Our approach uses BERT as the backbone for natural language understanding and, on top of it, a jointly learned sentence classifier (SC) and entity classifier (EC). The SC classifies the input as conversation or action, and the EC extracts the entities for the action. A pilot study has shown promising results, with high perceived naturalness and positive user experiences.
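The abstract describes a pipeline in which a jointly learned sentence classifier (SC) and entity classifier (EC) sit on top of a BERT encoder: the SC labels an utterance as conversation or action, and the EC extracts the entities the action needs. The sketch below illustrates only that control flow; the classifiers are stand-in keyword heuristics rather than the paper's BERT heads, and the vocabularies, function names, and output format are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch of the SC/EC routing described in the abstract.
# The real system uses BERT with jointly learned classifier heads; here
# both heads are replaced by keyword heuristics so the pipeline runs
# standalone. All vocabularies below are assumed for illustration.

ACTION_VERBS = {"pick", "wave", "point", "walk"}   # assumed action vocabulary
KNOWN_ENTITIES = {"cup", "door", "chair"}          # assumed scene objects

def sentence_classifier(tokens):
    """SC head stand-in: label the whole utterance 'action' or 'conversation'."""
    return "action" if ACTION_VERBS & set(tokens) else "conversation"

def entity_classifier(tokens):
    """EC head stand-in: tag tokens that refer to actionable entities."""
    return [t for t in tokens if t in KNOWN_ENTITIES]

def generate_behavior(utterance):
    """Route an utterance to a gestural action or a conversational reply."""
    tokens = utterance.lower().split()
    if sentence_classifier(tokens) == "action":
        verb = next(t for t in tokens if t in ACTION_VERBS)
        return {"type": "action", "verb": verb,
                "entities": entity_classifier(tokens)}
    return {"type": "conversation", "text": utterance}

print(generate_behavior("please pick up the cup"))
print(generate_behavior("how are you today"))
```

In the paper's setting, both heads would share the BERT encoder's representation and be trained jointly, so the action/conversation decision and the entity tags are produced in a single forward pass rather than by separate rules as sketched here.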
URI
https://pubs.kist.re.kr/handle/201004/113576
DOI
10.1109/VRW52623.2021.00178
Appears in Collections:
KIST Conference Paper > 2021
Files in This Item:
There are no files associated with this item.


