Exploring Timeline Control for Facial Motion Generation

¹Tsinghua University, ²Institute for Intelligent Computing, Alibaba Group

We introduce a new control signal for facial motion generation: timeline control. We first use a labor-efficient approach to annotate the time intervals of facial motions at frame-level granularity. Using these annotations, we propose a model that generates natural facial motions aligned with an input timeline. Compared with previous control signals such as audio and text, timeline control enables precise temporal control over facial motions. In this paper, facial motions are rendered into photorealistic videos for better visualization.

Abstract

This paper introduces a new control signal for facial motion generation: timeline control. Compared with audio and text signals, timelines provide more fine-grained control, such as generating specific facial motions with precise timing. Users can specify a multi-track timeline of facial actions arranged in temporal intervals, allowing precise control over the timing of each action. To model the timeline control capability, we first annotate the time intervals of facial actions in natural facial motion sequences at frame-level granularity. This process is facilitated by Toeplitz Inverse Covariance-based Clustering to minimize human labor. Based on the annotations, we propose a diffusion-based generation model capable of generating facial motions that are natural and accurately aligned with input timelines. Our method also supports text-guided motion generation by using ChatGPT to convert text into timelines. Experimental results show that our method annotates facial action intervals with satisfactory accuracy and produces natural facial motions accurately aligned with the input timelines.
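To make the timeline format concrete, the sketch below shows one plausible way to represent a multi-track timeline in Python: each facial-region track holds a list of action intervals, which are rasterized into per-frame labels. The track names, action labels, and the Interval/rasterize helpers are illustrative assumptions, not the paper's actual data format.

from dataclasses import dataclass

# A minimal sketch of a multi-track timeline. Track names ("brow", "mouth",
# "head") and action labels are illustrative, not the paper's exact taxonomy.

@dataclass
class Interval:
    action: str      # e.g. "brow_raise"
    start: int       # first frame (inclusive)
    end: int         # last frame (exclusive)

def rasterize(tracks, num_frames):
    """Expand interval annotations into per-frame labels for each track."""
    frame_labels = {name: ["neutral"] * num_frames for name in tracks}
    for name, intervals in tracks.items():
        for iv in intervals:
            for t in range(iv.start, min(iv.end, num_frames)):
                frame_labels[name][t] = iv.action
    return frame_labels

# Example: a 120-frame clip where the brows raise early and a smile follows.
timeline = {
    "brow":  [Interval("brow_raise", 10, 40)],
    "mouth": [Interval("smile", 30, 90)],
    "head":  [Interval("nod", 50, 70)],
}
labels = rasterize(timeline, num_frames=120)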

Facial Motion Annotation


The pipeline of frame-level facial motion annotation (using brow motions as an example). We first extract facial motion descriptors (blendshapes) from natural facial motion videos and concatenate the results into a single facial motion time series. Toeplitz Inverse Covariance-based Clustering then simultaneously segments this sequence into a series of motion patterns and clusters similar patterns, producing multiple clusters that each contain consistent facial motion patterns. By inspecting a few patterns per cluster, we identify the facial motion each cluster represents, thereby obtaining frame-level facial motion annotations for all videos.
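The following Python sketch illustrates this idea under stated assumptions: per-clip blendshape coefficients (as NumPy arrays) are concatenated into one long time series, and sliding windows over it are clustered into recurring motion patterns. A Gaussian mixture is used here as a rough stand-in for Toeplitz Inverse Covariance-based Clustering, which the paper uses and which additionally enforces temporal consistency; all function names and parameters below are hypothetical.

import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch of assembling a facial-motion time series and clustering it into
# recurring motion patterns. Blendshape extraction is assumed to be done by an
# off-the-shelf face tracker; here each clip is just a (frames x coeffs) array.
# GaussianMixture over sliding windows is a rough stand-in for TICC; the
# reference TICC implementation could be dropped in instead.

def build_time_series(clips):
    """Concatenate per-clip blendshape sequences into one long time series."""
    return np.concatenate(clips, axis=0)          # (total_frames, n_coeffs)

def window_stack(series, w):
    """Stack w consecutive frames per sample so clusters see short-term dynamics."""
    n = series.shape[0] - w + 1
    return np.stack([series[i:i + w].ravel() for i in range(n)])

def cluster_motion_patterns(series, n_clusters=8, w=5):
    """Return a per-frame cluster id; each cluster is one recurring motion pattern."""
    feats = window_stack(series, w)
    gmm = GaussianMixture(n_components=n_clusters, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(feats)
    # Pad so the label array matches the number of frames.
    return np.concatenate([labels, np.full(w - 1, labels[-1])])

# clips = [np.load(p) for p in blendshape_files]   # hypothetical input files
# frame_clusters = cluster_motion_patterns(build_time_series(clips))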

Facial Motion Generation


Illustration of the generation model. (a) Base-Branch Design. The base network takes the timelines of all facial regions as input and outputs base features that model the global couplings among facial motions. Through timeline selection, each region's timeline is routed to its respective branch network. Since head pose is interconnected with all facial movements, the pose branch receives the timelines of all regions. Each branch network takes the timeline of its corresponding region and generates the facial motions for that region; these motions are then combined to produce the overall motion of the entire face. Lin. Proj. denotes Linear Projection. (b) Base/Branch Network Architecture. Timeline control guides motion generation through cross-attention; the initial timeline tokens remain unchanged and are injected at each layer. The diffusion step (omitted in sub-figure (a) for clarity) is applied to every base and branch network.
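As a rough illustration of how timeline tokens could condition motion tokens through cross-attention, below is a minimal PyTorch sketch of a single base/branch-style layer. The layer sizes, token shapes, and the way the diffusion-step embedding is injected are assumptions for illustration; the paper's timeline selection and pose-branch wiring are not reproduced here.

import torch
import torch.nn as nn

# A minimal sketch of one base/branch layer built from standard transformer
# components; all dimensions and token layouts below are illustrative.

class TimelineConditionedLayer(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(dim) for _ in range(3))

    def forward(self, motion, timeline_tokens, step_emb):
        # Diffusion-step embedding is injected additively into the motion tokens.
        x = motion + step_emb.unsqueeze(1)
        x = x + self.self_attn(self.norm1(x), self.norm1(x), self.norm1(x))[0]
        # Timeline control enters via cross-attention; the same (unchanged)
        # timeline tokens serve as keys/values at every layer.
        x = x + self.cross_attn(self.norm2(x), timeline_tokens, timeline_tokens)[0]
        return x + self.ff(self.norm3(x))

# Usage sketch: motion (B, T, 256), timeline tokens (B, L, 256), step_emb (B, 256).
layer = TimelineConditionedLayer()
out = layer(torch.randn(2, 120, 256), torch.randn(2, 120, 256), torch.randn(2, 256))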


Results



Facial Motion Generation from Timeline

Video Editing on the Timeline

Text-Controlled Timeline Generation

Timeline Annotation