I am a senior researcher at Tencent AI Lab, where I lead the projects on neural rendering and 2D avatars. I received my Ph.D. degree from the School of Computer Science and Technology, Xi'an Jiaotong University, in 2019, under the supervision of Prof. Fei Wang and Prof. Jizhong Zhao. In 2015, I was a visiting student at NICTA, supervised by Dr. Mathieu Salzmann. I received my Master's degree from the School of Software Engineering, Xi'an Jiaotong University, in 2010, and my Bachelor's degree from the Department of Computer Science and Technology, Xi'an University of Science and Technology, in 2007. My research interests include non-rigid 3D reconstruction, performance capture, neural rendering, image synthesis, and related applications. At present, we aim to create highly photorealistic and fully controllable digital content, including human avatars and scenes.

πŸ‘©β€πŸŽ“πŸ§‘β€πŸŽ“ Internship at Tencent AI Lab. I am looking for the both research and engineering interns to work on neural rendering (e.g. NeRF), image synthesis and digital avatars. Feel free to contact me!

🔈 Positions at Xi'an Jiaotong University. Assoc. Prof. Yu Guo, one of my co-authors, is looking for Ph.D. students, master's students, research assistants, and engineers. Please visit his personal homepage for more details.

If you like the template of this homepage, you are welcome to star and fork my open-source template, AcadHomepage.

🔥 News

  • 2022.08: 🎉🎉 3 papers accepted to SIGGRAPH Asia 2022
  • 2022.07: 🎉🎉 1 paper accepted to ECCV 2022
  • 2022.03: 🎉🎉 1 paper accepted to IEEE TPAMI
  • 2022.03: 🎉🎉 4 papers accepted to CVPR 2022

πŸ“ Publications

$^\star$ Equal contribution &nbsp; $^\dagger$ Corresponding author

Conference papers

SIGGRAPH Asia 2022 (ToG)

IDE-3D: Interactive Disentangled Editing for High-Resolution 3D-aware Portrait Synthesis

Jingxiang Sun, Xuan Wang, Yichun Shi, Lizhen Wang, Jue Wang, Yebin Liu

Project

  • We propose IDE-3D, a locally disentangled, semantics-aware 3D face generator that supports interactive 3D face synthesis and local editing. Our method handles various free-view portrait editing tasks with state-of-the-art photorealism and efficiency.
SIGGRAPH Asia 2022 (ToG)

Neural Parameterization for Dynamic Human Head Editing

Li Ma, Xiaoyu Li, Jing Liao, Xuan Wang, Qi Zhang, Jue Wang, Pedro Sander

Project

  • Neural Parameterization (NeP), a hybrid representation that provides the advantages of both implicit and explicit methods.
SIGGRAPH Asia 2022 (Conf)

VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild

Kun Cheng, Xiaodong Cun, Yong Zhang, Menghan Xia, Fei Yin, Mingrui Zhu, Xuan Wang, Jue Wang, Nannan Wang

Project

  • VideoReTalking, a new system to edit the faces of a real-world talking-head video according to input audio, producing a high-quality, lip-synced output video even with a different emotion.
ECCV 2022

StyleHEAT: One-Shot High-Resolution Editable Talking Face Generation via Pre-trained StyleGAN

Fei Yin, Yong Zhang, Xiaodong Cun, Mingdeng Cao, Yanbo Fan, Xuan Wang, Qingyan Bai, Baoyuan Wu, Jue Wang, Yujiu Yang

Project

  • We propose a novel unified framework based on a pre-trained StyleGAN that enables a set of powerful functionalities, i.e., high-resolution video generation, disentangled control by driving video or audio, and flexible face editing.
CVPR 2022

FENeRF: Face Editing in Neural Radiance Fields

Jingxiang Sun, Xuan Wang$^\dagger$, Yong Zhang, Xiaoyu Li, Qi Zhang, Yebin Liu, Jue Wang

Project

  • The first portrait image generator that is locally editable and strictly view-consistent.
CVPR 2022

HDR-NeRF: High Dynamic Range Neural Radiance Fields

Xin Huang, Qi Zhang, Ying Feng, Hongdong Li, Xuan Wang, Qing Wang

Project

  • High Dynamic Range Neural Radiance Fields (HDR-NeRF) to recover an HDR radiance field from a set of low dynamic range (LDR) views with different exposures.
CVPR 2022

Hallucinated Neural Radiance Fields in the Wild

Xingyu Chen, Qi Zhang, Xiaoyu Li, Yue Chen, Ying Feng, Xuan Wang, Jue Wang

Project

  • An appearance hallucination module to handle time-varying appearances and transfer them to novel views.
CVPR 2022

Deblur-NeRF: Neural Radiance Fields from Blurry Images

Li Ma, Xiaoyu Li, Jing Liao, Qi Zhang, Xuan Wang, Jue Wang, Pedro V Sander

Project

  • The first method that can recover a sharp NeRF from blurry input.
ICCV 2019

On Boosting Single-Frame 3D Human Pose Estimation via Monocular Videos

Zhi Li$^\star$, Xuan Wang$^\star$, Fei Wang, Peilin Jiang

  • A method that exploits monocular videos to complement the training dataset for single-image 3D human pose estimation.
ECCV 2016

Template-free 3D Reconstruction of Poorly-textured Nonrigid Surfaces

Xuan Wang, Mathieu Salzmann, Fei Wang, Jizhong Zhao

Project

  • A template-free approach to reconstructing a poorly-textured, deformable surface.

Journal papers

TPAMI 2022

Robust Pose Transfer with Dynamic Details using Neural Video Rendering

Yang-tian Sun, Hao-zhi Huang, Xuan Wang, Yu-kun Lai, Wei Liu, Lin Gao

  • A neural video rendering framework coupled with an image-translation-based dynamic details generation network (D2G-Net), which fully exploits both the stability of explicit 3D features and the capacity of learned components.
TIP 2021

UniFaceGAN: A Unified Framework for Temporally Consistent Facial Video Editing

Meng Cao, Haozhi Huang, Hao Wang, Xuan Wang, Li Shen, Sheng Wang, Linchao Bao, Zhifeng Li, Jiebo Luo

  • A unified temporally consistent facial video editing framework termed UniFaceGAN.