Wan-Animate: Unified Character Animation and Replacement with Holistic Replication

Tongyi Lab, Alibaba

Wan-Animate can animate any character from a performer's video, precisely replicating the performer's facial expressions and movements to generate highly realistic character videos.

Wan-Animate can also replace characters in a video with an animated character, preserving the original character's expressions and movements while replicating the scene's lighting and color tone for seamless environmental integration.

Abstract

We introduce Wan-Animate, a unified framework for character animation and replacement. Given a character image and a reference video, Wan-Animate can animate the character by precisely replicating the expressions and movements of the character in the reference video to generate high-fidelity character videos. Alternatively, it can integrate the animated character into the reference video to replace the original character, replicating the scene's lighting and color tone to achieve seamless environmental integration. Wan-Animate is built upon the Wan model. To adapt it for character animation tasks, we employ a modified input paradigm to differentiate between reference conditions and regions for generation. This design unifies multiple tasks into a common symbolic representation. We use spatially-aligned skeleton signals to replicate body motion and implicit facial features extracted from source images to reenact expressions, enabling the generation of character videos with high controllability and expressiveness. Furthermore, to enhance environmental integration during character replacement, we develop an auxiliary Relighting LoRA. This module preserves the character's appearance consistency while applying the appropriate environmental lighting and color tone. Experimental results demonstrate that Wan-Animate achieves state-of-the-art performance. We are committed to open-sourcing the model weights and source code.

Method


Overview of Wan-Animate, which is built upon Wan-I2V. We modify its input formulation to unify reference image input, temporal frame guidance, and environmental information (for dual-mode compatibility) under a common symbolic representation. For body motion control, we use skeleton signals that are merged via spatial alignment. For facial expression control, we leverage implicit features extracted from face images as the driving signal. Additionally, for character replacement, we train an auxiliary Relighting LoRA to enhance the character's integration with the new environment.
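
To make the conditioning scheme above concrete, here is a minimal, hypothetical PyTorch sketch of how spatially-aligned skeleton maps and implicit face features could be injected into a diffusion backbone. The module names, channel counts, and the simple additive fusion are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class ConditionInjector(nn.Module):
    """Hypothetical sketch of Wan-Animate-style condition injection.

    Assumptions (not from the paper): 16 latent channels, a skeleton
    map rasterized to the latent resolution, and a frozen face encoder
    producing per-frame implicit expression tokens.
    """
    def __init__(self, latent_ch=16, skel_ch=3, face_dim=512, model_dim=1024):
        super().__init__()
        # Skeleton maps are spatially aligned with the video latents,
        # so a small conv projection can be fused channel-wise.
        self.skel_proj = nn.Conv2d(skel_ch, latent_ch, kernel_size=3, padding=1)
        # Implicit face features would enter via cross-attention; here
        # we only project them to the model width as a placeholder.
        self.face_proj = nn.Linear(face_dim, model_dim)

    def forward(self, video_latent, skel_map, face_feat):
        # video_latent: (B, C, H, W) noisy latent for one frame
        # skel_map:     (B, 3, H, W) rendered skeleton, same spatial grid
        # face_feat:    (B, L, face_dim) implicit expression tokens
        cond_latent = video_latent + self.skel_proj(skel_map)  # spatial alignment
        face_tokens = self.face_proj(face_feat)  # consumed by cross-attention
        return cond_latent, face_tokens
```

Because the skeleton map shares the latent's spatial grid, body-motion control reduces to a per-pixel fusion, while expression control stays decoupled as tokens attended to separately.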

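The auxiliary Relighting LoRA described above suggests a standard low-rank adapter: the base model's weights stay frozen to preserve the character's appearance prior, while the adapter absorbs environment-specific lighting and color tone. The sketch below is a generic LoRA wrapper; the rank, scaling, and attachment points are assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Generic low-rank adapter wrapped around a frozen linear layer.

    A minimal sketch of how an auxiliary "Relighting LoRA" could be
    attached: base weights are frozen (preserving appearance
    consistency), and only the low-rank update is trained to adapt
    lighting and color tone. Rank and scale are illustrative.
    """
    def __init__(self, base: nn.Linear, rank=16, alpha=16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # keep the pretrained weights intact
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # adapter starts as an identity delta
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))
```

In a Wan-style DiT, such adapters would typically wrap the attention projections and be enabled only in replacement mode, so that animation mode runs the unmodified base weights.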

Results Gallery



Expressive Human Animation

Generalizable Arbitrary Character Animation


Dynamic Motion and Camera

Character Replacement with Seamless Environmental Integration

Consistent Lighting and Color Tone Replication

Qualitative Comparisons