Bidirectional Temporal Diffusion Model for Temporally Consistent Human Animation
- Authors
- Tserendorj Adiya; Jae Shin Yoon; JeongEun Lee; Sang Hun Kim; Hwasup Lim
- Issue Date
- 2024-05-09
- Publisher
- International Conference on Learning Representations (ICLR)
- Citation
- International Conference on Learning Representations (ICLR)
- Abstract
- We introduce a method to generate temporally coherent human animation from a single image, a video, or random noise. This problem has typically been formulated as auto-regressive generation, i.e., regressing past frames to decode future frames. However, such unidirectional generation is highly prone to motion drift over time, producing unrealistic human animation with significant artifacts such as appearance distortion. We claim that bidirectional temporal modeling enforces temporal coherence on a generative network by largely suppressing appearance ambiguity. To prove our claim, we design a novel human animation framework using a denoising diffusion model: a neural network learns to generate the image of a person by denoising temporal Gaussian noise whose intermediate results are cross-conditioned bidirectionally between consecutive frames. In our experiments, the method achieves realistic temporal coherence and outperforms existing unidirectional approaches.
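- The following is a minimal, illustrative sketch (not the authors' released code) of the idea described in the abstract: during reverse diffusion, each frame's denoising step is conditioned on the intermediate estimates of both its previous and next frames from the preceding step, rather than on past frames only. The toy network `ToyDenoiser`, the noise schedule, and all hyperparameters are placeholder assumptions.

```python
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Predicts noise for one frame, conditioned on its two temporal neighbors."""
    def __init__(self, ch=3):
        super().__init__()
        # Placeholder backbone; the real model would be a conditioned U-Net.
        self.net = nn.Conv2d(3 * ch, ch, kernel_size=3, padding=1)

    def forward(self, x_t, x_prev, x_next, t):
        # Concatenate the current noisy frame with both neighbor estimates
        # (bidirectional temporal context); timestep t is ignored in this toy model.
        h = torch.cat([x_t, x_prev, x_next], dim=1)
        return self.net(h)

@torch.no_grad()
def bidirectional_sample(model, num_frames=8, steps=50, shape=(3, 64, 64)):
    # Simple DDPM-style linear beta schedule (assumed, not from the paper).
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    # Start every frame from independent Gaussian noise.
    frames = [torch.randn(1, *shape) for _ in range(num_frames)]

    for i in reversed(range(steps)):
        t = torch.full((1,), i, dtype=torch.long)
        # Freeze the estimates from the previous denoising step so that every
        # frame sees the same set of neighbor states at this step.
        prev_iter = [f.clone() for f in frames]
        for k in range(num_frames):
            # Bidirectional conditioning: use both past (k-1) and future (k+1) frames.
            x_prev = prev_iter[k - 1] if k > 0 else prev_iter[k]
            x_next = prev_iter[k + 1] if k < num_frames - 1 else prev_iter[k]
            eps = model(frames[k], x_prev, x_next, t)

            # Standard DDPM-style update using the predicted noise.
            a_t, ab_t = alphas[i], alpha_bar[i]
            mean = (frames[k] - (1 - a_t) / torch.sqrt(1 - ab_t) * eps) / torch.sqrt(a_t)
            noise = torch.randn_like(frames[k]) if i > 0 else 0.0
            frames[k] = mean + torch.sqrt(betas[i]) * noise
    return torch.cat(frames, dim=0)  # (num_frames, C, H, W)

# Usage: video = bidirectional_sample(ToyDenoiser())
```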
- Appears in Collections:
- KIST Conference Paper > 2024