3-Dimensional Face from a Single Face Image with Various Expressions

Authors
Hong, Yu-Jin; Nam, Gi Pyo; Choi, Heeseung; Cho, Junghyun; Kim, Ig-Jae
Issue Date
2016-07
Publisher
SPRINGER INTERNATIONAL PUBLISHING AG
Citation
4th International Conference on Distributed, Ambient and Pervasive Interactions (DAPI), held as part of the 18th International Conference on Human-Computer Interaction (HCI International), pp. 202-209
Abstract
Generating a user-specific 3D face model is useful for a variety of applications, such as facial animation and the game and film industries. Although there have been spectacular recent developments in 3D sensing, accurately recovering a 3D shape model from a single image remains a major challenge in computer vision and graphics. In this paper, we present a method that not only acquires a 3D shape from a single face image but also reconstructs the facial expression. To accomplish this, a 3D face database spanning a variety of identities and facial expressions is restructured into a data array, which is decomposed to obtain a bilinear model. With this model, facial variation is represented by two kinds of factors: expression and identity. The target face image is then fitted to the 3D model while its expression and shape parameters are estimated. As an application example, we transfer expressions to the reconstructed 3D models and naturally apply new facial expressions, demonstrating the effectiveness of the proposed method.
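The abstract describes a bilinear face model in which a reconstructed 3D shape is obtained by contracting a core data array with separate identity and expression weight vectors. The sketch below illustrates that reconstruction step only; it is not the authors' implementation, and the core tensor, its dimensions, and the weight vectors are illustrative assumptions (in practice the core would be learned by decomposing a 3D face database, which is not reproduced here).

import numpy as np

# Assumed sizes (not from the paper): flattened vertex coordinates,
# number of identity modes, number of expression modes.
n_verts, n_id, n_exp = 3 * 5000, 50, 25
core = np.random.randn(n_verts, n_id, n_exp)  # stand-in for a learned core tensor

def reconstruct_face(core, w_id, w_exp):
    """Reconstruct flattened 3D face vertices from identity weights w_id
    and expression weights w_exp by contracting the core tensor along
    its identity and expression modes (a bilinear model evaluation)."""
    # Contract the identity mode: (V, I, E) x (I,) -> (V, E)
    shape_space = np.tensordot(core, w_id, axes=([1], [0]))
    # Contract the expression mode: (V, E) x (E,) -> (V,)
    return shape_space @ w_exp

# Illustrative usage: pick one identity mode and one expression mode.
w_id = np.zeros(n_id);  w_id[0] = 1.0
w_exp = np.zeros(n_exp); w_exp[0] = 1.0
face = reconstruct_face(core, w_id, w_exp)  # shape: (n_verts,)

In a fitting setting of the kind the abstract describes, w_id and w_exp would be optimized so that the projected model matches the input face image; expression transfer then amounts to keeping w_id fixed and swapping in a different w_exp.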
ISSN
0302-9743
URI
https://pubs.kist.re.kr/handle/201004/114945
DOI
10.1007/978-3-319-39862-4_19
Appears in Collections:
KIST Conference Paper > 2016
Files in This Item:
There are no files associated with this item.