MotionFollower: Editing Video Motion via Lightweight Score-Guided Diffusion

Shuyuan Tu1,
Qi Dai2,
Zihao Zhang1,
Sicheng Xie1,
Zhi-Qi Cheng3,
Chong Luo2,
Xintong Han4,
Zuxuan Wu1,
Yu-Gang Jiang1
  1Fudan University   2Microsoft Research Asia   3Carnegie Mellon University   4Huya Inc

TL;DR: We propose MotionFollower, a lightweight motion editing method that transfers motion from a target video to a source video while preserving the source's background, the protagonist's appearance, and the camera movement.

Source Target MotionEditor MotionFollower

Editing Results


Video Motion Editing

MotionFollower manipulates the motion of a source video so that it aligns with a target video, a higher-level and more challenging video editing scenario that we refer to as motion editing.

Source Target Edited Result Source Target Edited Result

Qualitative Comparisons

Video motion editing results comparing MotionFollower with baseline methods.

Source Target MotionEditor MotionFollower

Human Motion Transfer Comparison

Comparisons between MotionFollower and SOTA baselines on human motion transfer, i.e., animating a given source image using motion sequences from a different video.



Ablation Study

Ablation study on the core components of MotionFollower.

Source Target w/o ReCtr (w/o ReCtr)+RNet (w/o PoCtr)+CNet w/o guidance MotionFollower

Framework


During training, the two lightweight signal controllers and the U-Net are trainable. The model is first trained on single frames (first stage) and then on video clips (second stage). During inference, we build a two-branch structure: one branch for reconstruction and the other for editing. Score guidance is computed from features of the two branches and is then used to update the editing latent.
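A minimal PyTorch sketch of the score-guided latent update described above. The toy feature extractor, tensor shapes, step size, and the simple feature-matching energy are assumptions made purely for illustration; they are not the released implementation or the actual guidance terms used in the paper.

```python
# Sketch of one score-guidance step between the reconstruction and editing
# branches. All names (feature_net, score_guidance_step) are hypothetical.
import torch
import torch.nn as nn

# Hypothetical stand-in for intermediate U-Net features of a branch.
feature_net = nn.Sequential(
    nn.Conv2d(4, 8, 3, padding=1), nn.SiLU(),
    nn.Conv2d(8, 8, 3, padding=1),
)

def score_guidance_step(edit_latent, recon_latent, step_size=0.1):
    """Pull editing-branch features toward reconstruction-branch features
    by descending the gradient of a feature-matching energy w.r.t. the latent."""
    edit_latent = edit_latent.detach().requires_grad_(True)
    with torch.no_grad():
        recon_feat = feature_net(recon_latent)       # reconstruction branch (frozen)
    edit_feat = feature_net(edit_latent)             # editing branch
    energy = (edit_feat - recon_feat).pow(2).mean()  # guidance energy
    grad, = torch.autograd.grad(energy, edit_latent) # score-like gradient
    return (edit_latent - step_size * grad).detach() # guided latent update

# Usage with Stable-Diffusion-style latents of shape (B, 4, H/8, W/8).
recon_z = torch.randn(1, 4, 32, 32)
edit_z = torch.randn(1, 4, 32, 32)
edit_z = score_guidance_step(edit_z, recon_z)
```

In practice such an update would be interleaved with the denoising steps of the editing branch; this sketch only illustrates how a gradient of a branch-matching energy can serve as the guidance signal.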

BibTeX

@article{tu2024motionfollower,
  title={MotionFollower: Editing Video Motion via Lightweight Score-Guided Diffusion},
  author={Shuyuan Tu and Qi Dai and Zihao Zhang and Sicheng Xie and Zhi-Qi Cheng and Chong Luo and Xintong Han and Zuxuan Wu and Yu-Gang Jiang},
  journal={arXiv preprint arXiv:2405.20325},
  year={2024}
}