High-Fidelity Clothed Avatar Reconstruction from a Single Image

Tingting Liao1,2
Xiaomei Zhang1,2
Yuliang Xiu3
Hongwei Yi3
Xudong Liu4
Guo-Jun Qi4,5
Yong Zhang6
Xuan Wang6
Xiangyu Zhu1,2
Zhen Lei1,2,7
University of Chinese Academy of Sciences, Beijing, China1
MAIS, Institute of Automation, Chinese Academy of Sciences, Beijing, China2
Max Planck Institute for Intelligent Systems, Tübingen, Germany3
OPPO Research4    Westlake University5    Tencent AI Lab6    CAIR, HKISI, CAS7

[Paper]
[Code]

Images to avatars. Given an image of a person in an unconstrained pose, our method reconstructs a 3D clothed avatar in both the original posed space and a canonical space, and can repose the human body from the canonical mesh.


This paper presents a framework for efficient 3D clothed avatar reconstruction. By combining the high accuracy of optimization-based methods with the efficiency of learning-based methods, we propose a coarse-to-fine approach to high-fidelity Clothed Avatar Reconstruction (CAR) from a single image. In the first stage, an implicit model learns the general shape of the person in canonical space in a learning-based manner; in the second stage, we refine the surface detail by estimating the non-rigid deformation in posed space via optimization. A hypernetwork provides a good initialization, which greatly accelerates the convergence of this optimization. Extensive experiments on various datasets show that the proposed CAR successfully produces high-fidelity avatars for arbitrarily clothed humans in real scenes.
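The coarse-to-fine idea can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the real method uses implicit neural fields and a learned hypernetwork, whereas here a fixed random projection stands in for the hypernetwork, the coarse stage is a sphere SDF, and the refinement is a linear deformation field fit by gradient descent. All function names and shapes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def coarse_sdf(points):
    """Stage-1 stand-in: a generic canonical shape (here, a unit sphere SDF)."""
    return np.linalg.norm(points, axis=-1) - 1.0

def hyper_init(image_feature, dim):
    """Hypernetwork stand-in: map an image feature to initial weights of the
    refinement model. A fixed random projection plays the role of the
    learned hypernetwork; its output lands near a sensible starting point."""
    W = rng.standard_normal((dim, image_feature.size)) * 0.01
    return W @ image_feature

def refine(theta0, points, target, steps=200, lr=0.1):
    """Stage-2 stand-in: per-subject optimization of a residual deformation,
    started from the hypernetwork-predicted initialization theta0."""
    theta = theta0.copy()
    feats = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    for _ in range(steps):
        pred = coarse_sdf(points) + feats @ theta      # coarse shape + residual
        grad = feats.T @ (pred - target) / len(points) # least-squares gradient
        theta -= lr * grad
    return theta

# Synthetic "observations": coarse shape deformed by a ground-truth residual.
points = rng.standard_normal((256, 3))
true_theta = np.array([0.2, -0.1, 0.05, 0.3])
feats = np.concatenate([points, np.ones((256, 1))], axis=1)
target = coarse_sdf(points) + feats @ true_theta

theta0 = hyper_init(rng.standard_normal(8), dim=4)  # initialization
theta = refine(theta0, points, target)              # per-subject refinement
print(np.abs(theta - true_theta).max())             # small residual error
```

The point of the sketch is the division of labor: the learned stage supplies a plausible shape and a starting point, so the optimization stage only has to recover a small subject-specific residual rather than search from scratch.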


Results (Real Images)


Results (Animation)



Code


[GitHub]


Paper

T. Liao, X. Zhang, Y. Xiu, H. Yi, X. Liu, G.-J. Qi, Y. Zhang, X. Wang, X. Zhu, Z. Lei.

High-Fidelity Clothed Avatar Reconstruction from a Single Image.

In CVPR, 2023.

[ArXiv]  [Bibtex]




Acknowledgements

This webpage template was borrowed from colorful folks.