lixinyyang / MoDA
MoDA: Multi-modal Diffusion Architecture for Talking Head Generation
☆267 · Updated 3 months ago
Alternatives and similar repositories for MoDA
Users interested in MoDA are comparing it to the repositories listed below.
- [CVPR'25] Official PyTorch implementation of AvatarArtist: Open-Domain 4D Avatarization ☆275 · Updated 5 months ago
- [AAAI 2026] Playmate2: Training-Free Multi-Character Audio-Driven Animation via Diffusion Transformer with Reward Feedback ☆124 · Updated 3 weeks ago
- Efficient DiT architecture for text2any tasks, ICLR 2025 ☆449 · Updated 7 months ago
- [SIGGRAPH'25] SOAP: Style-Omniscient Animatable Portraits ☆441 · Updated 4 months ago
- [ICCV 2025 Highlight] DicFace: Dirichlet-Constrained Variational Codebook Learning for Temporally Coherent Video Face Restoration ☆444 · Updated 4 months ago
- The repository for "Tri$^{2}$-plane: Volumetric Avatar Reconstruction with Feature Pyramid" ☆141 · Updated 7 months ago
- Official PyTorch implementation for the paper FastAvatar … ☆137 · Updated 3 weeks ago
- Implementation of the paper "Flux Already Knows – Activating Subject-Driven Image Generation without Training" ☆138 · Updated 3 months ago
- Official implementation of "JavisDiT: Joint Audio-Video Diffusion Transformer with Hierarchical Spatio-Temporal Prior Synchronization" ☆296 · Updated last week
- CVPR 2025 Highlight ☆38 · Updated 3 months ago
- ☆74 · Updated 8 months ago
- Unofficial implementation of ReplaceAnything: https://aigcdesigngroup.github.io/replace-anything/ ☆400 · Updated last year
- A curated list of papers, code and resources pertaining to image composition/compositing or object insertion/addition/compositing, which …