Models of Trust in Human Control of Swarms with Varied Levels of Autonomy

Changjoo Nam; Phillip Walker; Huao Li; Michael Lewis; Katia Sycara
Keywords: Human-robot interaction; Human-swarm interaction; Multirobot systems; Swarm robotics; Trust
IEEE Transactions on Human-Machine Systems, Vol. 50, No. 3, pp. 194–204
In this paper, we study human trust and its computational models in supervisory control of swarm robots with varied levels of autonomy (LOA) in a target-foraging task. We implement three LOAs: manual, mixed-initiative (MI), and fully autonomous. In the MI LOA, the swarm is controlled collaboratively by a human operator and an autonomous search algorithm; in the manual and autonomous LOAs, it is directed entirely by the human or by the search algorithm, respectively. Our user studies show that humans tend to base their trust decisions on the physical characteristics of the swarm rather than on its performance, since swarm task performance is not clearly perceivable to humans. Based on this analysis, we formulate trust as a Markov decision process (MDP) whose state space includes the factors affecting trust, and we develop variations of the trust model for the different LOAs. We employ an inverse reinforcement learning algorithm to learn operator behaviors from demonstrations, and the learned behaviors are used to predict human trust. Compared to an existing model, our models reduce the prediction error by up to 39.6%, 36.5%, and 28.8% in the manual, MI, and autonomous LOAs, respectively.
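To make the MDP formulation of trust concrete, the sketch below models trust as a small discretized MDP and solves it with value iteration. This is an illustrative toy, not the authors' model: the number of trust levels, the two operator actions, the random transition model, and the linear reward weights `w` over one-hot state features `phi` are all assumptions for demonstration; in the paper, the reward is recovered from operator demonstrations via inverse reinforcement learning rather than fixed by hand.

```python
import numpy as np

# Hypothetical sketch: trust as a small discretized MDP (illustrative only).
# States: 5 discretized trust levels; actions: 2 generic operator choices
# (e.g., keep the current LOA vs. intervene) -- both counts are assumptions.
n_states, n_actions = 5, 2
rng = np.random.default_rng(0)

# Random but valid transition model P[a, s, s'] (each row sums to 1).
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)

# Linear reward r(s) = w . phi(s). In the paper's setting, phi would encode
# the swarm features operators actually perceive (e.g., density, motion);
# here phi is one-hot and w is an illustrative weight vector, as if learned.
phi = np.eye(n_states)
w = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
r = phi @ w

def value_iteration(P, r, gamma=0.9, tol=1e-8):
    """Solve the trust MDP: return the optimal values and greedy policy."""
    V = np.zeros(P.shape[1])
    while True:
        Q = r[None, :] + gamma * (P @ V)   # Q[a, s]
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

V, policy = value_iteration(P, r)
```

The solved value function ranks trust states, and the greedy policy predicts which operator action the model expects in each trust state; with an IRL-learned reward in place of the hand-set `w`, the same machinery would yield trust predictions from demonstrations.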
Appears in Collections: KIST Publication > Article

