Full metadata record

dc.contributor.author: Lee, Yonghyeon
dc.contributor.author: Lee, Byeongho
dc.contributor.author: Kim, Seungyeon
dc.contributor.author: Park, Frank C.
dc.date.accessioned: 2025-07-18T03:00:27Z
dc.date.available: 2025-07-18T03:00:27Z
dc.date.created: 2025-07-18
dc.date.issued: 2025-07
dc.identifier.uri: https://pubs.kist.re.kr/handle/201004/152762
dc.description.abstract: Effective movement primitives should be capable of encoding and generating a rich repertoire of trajectories conditioned on task-defining parameters such as vision or language inputs. While recent methods based on the motion manifold hypothesis, which assumes that a set of trajectories lies on a lower-dimensional nonlinear subspace, address challenges such as limited dataset size and the high dimensionality of trajectory data, they often struggle to capture complex task-motion dependencies, i.e., when motion distributions shift drastically with task variations. To address this, we introduce Motion Manifold Flow Primitives (MMFP), a framework that decouples the training of the motion manifold from task-conditioned distributions. Specifically, we employ flow matching models, state-of-the-art conditional deep generative models, to learn task-conditioned distributions in the latent coordinate space of the learned motion manifold. Experiments are conducted on language-guided trajectory generation tasks, where many-to-many text-motion correspondences introduce complex task-motion dependencies, highlighting MMFP's superiority over existing methods.
dc.language: English
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.title: Motion Manifold Flow Primitives for Task-Conditioned Trajectory Generation Under Complex Task-Motion Dependencies
dc.type: Article
dc.identifier.doi: 10.1109/LRA.2025.3575313
dc.description.journalClass: 1
dc.identifier.bibliographicCitation: IEEE Robotics and Automation Letters, v.10, no.7, pp.7412 - 7419
dc.citation.title: IEEE Robotics and Automation Letters
dc.citation.volume: 10
dc.citation.number: 7
dc.citation.startPage: 7412
dc.citation.endPage: 7419
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.identifier.wosid: 001508109100005
dc.relation.journalWebOfScienceCategory: Robotics
dc.relation.journalResearchArea: Robotics
dc.type.docType: Article
dc.subject.keywordAuthor: Trajectory
dc.subject.keywordAuthor: Manifolds
dc.subject.keywordAuthor: Training
dc.subject.keywordAuthor: Autoencoders
dc.subject.keywordAuthor: Manifold learning
dc.subject.keywordAuthor: Vectors
dc.subject.keywordAuthor: Artificial intelligence
dc.subject.keywordAuthor: Technological innovation
dc.subject.keywordAuthor: Robot kinematics
dc.subject.keywordAuthor: Planning
dc.subject.keywordAuthor: Imitation learning
dc.subject.keywordAuthor: Learning from demonstration
dc.subject.keywordAuthor: Representation learning
dc.subject.keywordAuthor: Movement primitives
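The abstract describes training a conditional flow matching model in the latent coordinate space of a learned motion manifold. Below is a minimal, self-contained sketch of that idea; it is not the paper's implementation. All dimensions, the linear velocity model, and the synthetic latent/condition data are hypothetical stand-ins (the actual method would use neural networks and the latent codes of a trained trajectory autoencoder).

```python
import numpy as np

# Hypothetical sizes: latent dimension of the motion manifold, task-condition dimension.
LATENT_DIM, COND_DIM = 4, 3
rng = np.random.default_rng(0)

def velocity_field(params, z_t, t, cond):
    """Conditional velocity model v(z_t, t, c). A linear map here, standing in for a neural net."""
    W, b = params
    x = np.concatenate([z_t, [t], cond])
    return W @ x + b

def cfm_loss_and_grad(params, z1, cond):
    """One-sample conditional flow matching objective:
    regress v(z_t, t, c) onto the straight-line target velocity z1 - z0."""
    W, b = params
    z0 = rng.standard_normal(LATENT_DIM)   # noise endpoint
    t = rng.uniform()                      # random time in [0, 1]
    z_t = (1 - t) * z0 + t * z1            # point on the interpolation path
    target = z1 - z0                       # constant target velocity along that path
    x = np.concatenate([z_t, [t], cond])
    err = (W @ x + b) - target
    loss = float(err @ err)
    return loss, (2 * np.outer(err, x), 2 * err)

# Toy training loop over synthetic (latent trajectory code, task condition) pairs.
W = np.zeros((LATENT_DIM, LATENT_DIM + 1 + COND_DIM))
b = np.zeros(LATENT_DIM)
data = [(rng.standard_normal(LATENT_DIM), rng.standard_normal(COND_DIM)) for _ in range(32)]
for _ in range(200):
    for z1, cond in data:
        _, (gW, gb) = cfm_loss_and_grad((W, b), z1, cond)
        W -= 1e-3 * gW
        b -= 1e-3 * gb

def sample(params, cond, steps=50):
    """Generate a latent code by Euler-integrating dz/dt = v(z, t, c) from noise at t=0 to t=1.
    In the full method this latent code would then be decoded into a trajectory."""
    z = rng.standard_normal(LATENT_DIM)
    for i in range(steps):
        z = z + (1.0 / steps) * velocity_field(params, z, i / steps, cond)
    return z
```

Decoupling, as the abstract frames it, means the manifold (autoencoder) and this latent flow model can be trained separately: the flow only ever sees low-dimensional latent codes, not raw trajectories.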
Appears in Collections:
KIST Article > Others
