Generation of co-speech gestures of robot based on morphemic analysis

Authors
Chae, Yu-Jung; Nam, Changjoo; Yang, Daseul; Sin, HunSeob; Kim, Chang-Hwan; Park, Sung-Kee
Issue Date
2022-09
Publisher
Elsevier BV
Citation
Robotics and Autonomous Systems, v.155
Abstract
We propose a methodology for a robot to automatically generate felicitous co-speech gestures corresponding to its utterances. First, the proposed method performs a morphemic analysis of the utterance sentence to determine the part of the utterance during which the robot makes a gesture; this part is herein called an expression unit. The method then predicts a gesture type that characterizes the expression unit in terms of conveying thoughts and feelings. The gesture type is selected, again based on the morphemic analysis, from the four types categorized by McNeill: iconic, metaphoric, beat, and deictic. A gesture appropriate to the predicted type is retrieved from a database of motion primitives built with a limited number of predefined words. For retrieval, Word2Vec is applied to estimate the similarity between the predefined words in the database and the words in the expression unit, so that the method can handle an arbitrary sentence and generate an appropriate gesture for words that are similar in meaning.
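The retrieval step described in the abstract can be illustrated with a minimal sketch, assuming a pretrained Word2Vec model loaded via gensim and a small, hypothetical table mapping predefined keywords to motion primitives; the names, table contents, and model path below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: Word2Vec-based retrieval of a motion primitive
# for an expression unit, given a predicted gesture type.
from gensim.models import KeyedVectors

# Hypothetical database: gesture type -> {predefined keyword: motion primitive id}
MOTION_PRIMITIVES = {
    "deictic": {"here": "point_down", "you": "point_at_listener"},
    "metaphoric": {"idea": "open_palm_up", "time": "sweep_sideways"},
}

def retrieve_primitive(expression_unit_words, gesture_type, kv):
    """Return the primitive whose predefined keyword is most similar
    (by Word2Vec cosine similarity) to any word in the expression unit."""
    best_score, best_primitive = -1.0, None
    for keyword, primitive in MOTION_PRIMITIVES.get(gesture_type, {}).items():
        for word in expression_unit_words:
            # Skip out-of-vocabulary words instead of failing.
            if keyword in kv and word in kv:
                score = kv.similarity(keyword, word)
                if score > best_score:
                    best_score, best_primitive = score, primitive
    return best_primitive, best_score

if __name__ == "__main__":
    # The embedding file is an assumption; substitute any word2vec-format model.
    kv = KeyedVectors.load_word2vec_format("word2vec.bin", binary=True)
    primitive, score = retrieve_primitive(["there", "place"], "deictic", kv)
    print(primitive, score)
```

Because similarity is computed in the embedding space rather than by exact keyword match, an arbitrary input sentence can still be mapped to a primitive built from the limited set of predefined words.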
Keywords
Human-robot interaction; Co-speech gesture generation; Machine learning; Morphemic analysis; Word embedding
ISSN
0921-8890
URI

DOI
10.1016/j.robot.2022.104154
Appears in Collections:
KIST Article > 2022
Files in This Item:
There are no files associated with this item.
