antgroup / animate-x
[ICLR 2025] Animate-X: Universal Character Image Animation with Enhanced Motion Representation
☆367 · Updated last month
Alternatives and similar repositories for animate-x
Users interested in animate-x are comparing it to the libraries listed below.
- [ICLR 2025] Animate-X - PyTorch Implementation ☆307 · Updated 9 months ago
- [SIGGRAPH Asia 2024 & IJCV 2025] Official implementation of "Follow-Your-Emoji: Fine-Controllable and…" ☆424 · Updated 6 months ago
- [SIGGRAPH 2025] Official code of the paper "FlexiAct: Towards Flexible Action Control in Heterogeneous Scenarios" ☆339 · Updated 2 weeks ago
- [CVPR 2025 Workshop] CatV2TON is a lightweight DiT-based visual virtual try-on model, capable of supporting try-on for both images and videos ☆183 · Updated 8 months ago
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆566 · Updated 5 months ago
- I2V-Adapter: A General Image-to-Video Adapter for Diffusion Models ☆226 · Updated last year
- [CVPR 2025] The official code of HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation ☆275 · Updated 5 months ago
- [ICCV 2025] MotionFollower: Editing Video Motion via Lightweight Score-Guided Diffusion ☆237 · Updated 4 months ago
- [ICCV 2025] Official PyTorch Implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait ☆424 · Updated 4 months ago
- [ICLR 2025] Official implementation of MotionClone: Training-Free Motion Cloning for Controllable Video Generation ☆508 · Updated 4 months ago
- Source code for the SIGGRAPH 2024 paper "X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention" ☆527 · Updated 3 weeks ago
- [CVPR 2025 Highlight] X-Dyna: Expressive Dynamic Human Image Animation ☆259 · Updated 9 months ago
- Stand-In is a lightweight, plug-and-play framework for identity-preserving video generation ☆659 · Updated 2 months ago
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model ☆754 · Updated 11 months ago
- FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers ☆480 · Updated 2 months ago
- [ICCV 2025] Light-A-Video: Training-free Video Relighting via Progressive Light Fusion ☆481 · Updated 2 weeks ago
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portraits ☆273 · Updated 3 months ago
- [ICLR 2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆374 · Updated 9 months ago
- The official code implementation of the paper "OmniConsistency: Learning Style-Agnostic Consistency from Paired Stylization Data." ☆409 · Updated 5 months ago
- Official implementation of MAGREF: Masked Guidance for Any-Reference Video Generation with Subject Disentanglement ☆271 · Updated last month
- Reproduction of AnimateAnyone ☆168 · Updated last year
- [NeurIPS D&B Track 2024] Official implementation of HumanVid ☆337 · Updated 3 weeks ago
- HunyuanVideo Keyframe Control LoRA is an adapter for the HunyuanVideo T2V model for keyframe-based video generation ☆165 · Updated 7 months ago
- HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning ☆798 · Updated 3 weeks ago
- [ICCV 2025] ACTalker: an end-to-end video diffusion framework for talking head synthesis that supports both single and multi-signal control… ☆422 · Updated 2 months ago