SurveyTalk: Voice-to-UI Mapping for In-Situ ESM via LLMs

Authors
Lee, Hansoo; Kim, Eunseo; Lee, Jeongmin; Joo, Hyunsoo; Kwak, Sona
Issue Date
2025-09-27
Publisher
ACM
Citation
UIST '25: The 38th Annual ACM Symposium on User Interface Software and Technology, pp. 1-3
Abstract
The Experience Sampling Method (ESM) allows researchers to collect ecologically valid data on users’ daily experiences. However, conventional mobile ESM systems rely on visual interfaces (e.g., checkboxes, text fields), which can burden users in mobile or hands-free contexts. Voice-based alternatives address this but are often limited by rigid, rule-based interactions. We present SurveyTalk, a multimodal ESM system powered by large language models (LLMs) that enables real-time semantic mapping between speech and visual survey interfaces. The system supports voice-driven selection, clarification, navigation, and response interpretation, reducing user friction and improving data quality. Our approach enhances the accessibility, adaptability, and robustness of ESM in dynamic real-world settings.
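
The abstract describes mapping free-form speech onto visible survey widgets (selection, clarification, navigation) via an LLM. The sketch below is purely illustrative and is not the authors' implementation: it assumes a transcribed utterance, the current question and its visible options, and a placeholder `call_llm` function standing in for whatever chat-completion backend a system like this would use.

```python
# Illustrative sketch (not the SurveyTalk implementation): map a transcribed
# utterance onto a structured survey-UI action with an LLM. `call_llm` is a
# placeholder for any text-in/text-out completion backend.
import json
from typing import Callable, List


def build_prompt(utterance: str, question: str, options: List[str]) -> str:
    """Ask the model to translate spoken input into one survey-UI action."""
    return (
        "You map spoken answers to survey UI actions.\n"
        f"Question: {question}\n"
        f"Visible options: {', '.join(options)}\n"
        f"User said: \"{utterance}\"\n"
        "Reply with JSON only: {\"action\": \"select\" | \"clarify\" | \"navigate\", "
        "\"target\": <option or page>, \"confidence\": <0-1>}"
    )


def interpret(
    utterance: str,
    question: str,
    options: List[str],
    call_llm: Callable[[str], str],
) -> dict:
    """Parse the model's reply; fall back to a clarification turn on bad output."""
    raw = call_llm(build_prompt(utterance, question, options))
    try:
        action = json.loads(raw)
        # Reject selections that do not correspond to an option on screen.
        if action.get("action") == "select" and action.get("target") not in options:
            raise ValueError("target not among the visible options")
        return action
    except (json.JSONDecodeError, ValueError):
        return {"action": "clarify", "target": None, "confidence": 0.0}


# Example turn: a hedged spoken answer to a Likert-style question.
# interpret(
#     "hmm, somewhere in the middle I guess",
#     "How stressed are you right now?",
#     ["Not at all", "Slightly", "Moderately", "Very", "Extremely"],
#     call_llm,
# )
```

Constraining the model to a small JSON action vocabulary and validating the chosen target against the on-screen options is one plausible way to keep voice input synchronized with the visual survey; the fallback to a clarification turn mirrors the clarification behavior the abstract mentions.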
URI
https://pubs.kist.re.kr/handle/201004/153789
DOI
10.1145/3746058.3758377
Appears in Collections:
KIST Conference Paper > 2025