VividTalk: One-Shot Audio-Driven Talking Head Generation Based on 3D Hybrid Prior

1Nanjing University, 2Alibaba Group, 3ByteDance, 4Nankai University

VividTalk can generate realistic, lip-synced talking head videos with expressive facial expressions and natural head poses.

Abstract

Audio-driven talking head generation has drawn much attention in recent years, and many efforts have been made toward lip synchronization, expressive facial expressions, natural head pose generation, and high video quality. However, due to the one-to-many mapping between audio and motion, no existing model leads or ties on all of these metrics. In this paper, we propose VividTalk, a two-stage generic framework that generates high-visual-quality talking head videos with all of the above properties. Specifically, in the first stage, we map the audio to a 3D mesh by learning two kinds of motion: non-rigid expression motion and rigid head motion. For expression motion, both blendshapes and vertices are adopted as intermediate representations to maximize the representation ability of the model. For natural head motion, a novel learnable head pose codebook with a two-phase training mechanism is proposed. In the second stage, we propose a dual-branch motion-VAE and a generator that transform the meshes into dense motion and synthesize high-quality video frame by frame. Extensive experiments show that VividTalk generates high-visual-quality talking head videos with substantially improved lip-sync and realism, outperforming previous state-of-the-art works in both objective and subjective comparisons. The code will be publicly released upon publication.
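As a rough sketch of the two-stage pipeline described above, the data flow might look like the following. All class and function names here are hypothetical, and the real stages are trained neural networks; they are stubbed out to show only how audio frames move through the audio-to-mesh stage (expression via blendshapes and vertex offsets, head pose via a codebook) and then the mesh-to-video stage (dense motion, then frame rendering).

```python
# Hypothetical sketch of VividTalk's two-stage pipeline; the real
# models are neural networks, stubbed here with placeholder values.

class AudioToMeshStage:
    """Stage 1: map audio to a 3D mesh by predicting non-rigid
    expression motion (blendshapes + vertex offsets) and rigid
    head motion (a pose drawn from a learned codebook)."""

    def __init__(self, pose_codebook):
        # In the paper the codebook is learnable; here it is fixed.
        self.pose_codebook = pose_codebook

    def predict_expression(self, audio_frame):
        # Stub: the real model regresses blendshape coefficients
        # and per-vertex offsets from audio features.
        blendshapes = [0.0] * 52     # placeholder coefficient vector
        vertex_offsets = [0.0] * 3   # placeholder vertex displacement
        return blendshapes, vertex_offsets

    def predict_head_pose(self, audio_frame):
        # Stub: the real model selects/combines codebook entries.
        index = len(audio_frame) % len(self.pose_codebook)
        return self.pose_codebook[index]

class MeshToVideoStage:
    """Stage 2: a dual-branch motion-VAE turns driven meshes into
    dense 2D motion; a generator renders each frame."""

    def mesh_to_dense_motion(self, mesh):
        return {"flow_from": mesh}   # placeholder for dense motion

    def render_frame(self, reference_image, dense_motion):
        return {"source": reference_image, "motion": dense_motion}

def generate_video(audio_frames, reference_image):
    """Run both stages frame by frame over the audio."""
    stage1 = AudioToMeshStage(pose_codebook=[(0.0, 0.0, 0.0),
                                             (0.1, 0.0, 0.0)])
    stage2 = MeshToVideoStage()
    frames = []
    for frame in audio_frames:
        expr, offsets = stage1.predict_expression(frame)
        pose = stage1.predict_head_pose(frame)
        mesh = {"expression": expr, "offsets": offsets, "pose": pose}
        motion = stage2.mesh_to_dense_motion(mesh)
        frames.append(stage2.render_frame(reference_image, motion))
    return frames
```

The key design point this illustrates is the decoupling: stage 1 resolves the one-to-many audio-to-motion ambiguity in 3D mesh space, so stage 2 only has to solve the better-conditioned mesh-to-pixel problem.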

Video

Talking In Different Characters

VividTalk supports animating facial images across various styles, such as photographed humans, realistic renderings, and cartoons.

Talking In Different Languages

Using VividTalk, you can create talking head videos driven by audio signals in various languages.

Comparisons

The comparison between VividTalk and state-of-the-art methods in terms of lip-sync, head pose naturalness, identity preservation, and video quality.

BibTeX

@article{sun2023vividtalk,
  title   = {VividTalk: One-Shot Audio-Driven Talking Head Generation Based on 3D Hybrid Prior},
  author  = {Sun, Xusen and Zhang, Longhao and Zhu, Hao and Zhang, Peng and Zhang, Bang and Ji, Xinya and Zhou, Kangneng and Gao, Daiheng and Bo, Liefeng and Cao, Xun},
  journal = {arXiv preprint arXiv:2312.01841},
  website = {https://humanaigc.github.io/vivid-talk/},
  year    = {2023},
}