deepbrainai-research / float
[ICCV 2025] Official PyTorch Implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait.
☆441 · Updated last month
Alternatives and similar repositories for float
Users interested in float are comparing it to the repositories listed below.
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portrai… ☆284 · Updated 4 months ago
- KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution ☆375 · Updated this week
- ICCV 2025 ACTalker: an end-to-end video diffusion framework for talking head synthesis that supports both single and multi-signal control… ☆434 · Updated 4 months ago
- [CVPR-2025] The official code of HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation ☆324 · Updated 3 weeks ago
- [ACM MM 2025] Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis ☆641 · Updated last month
- FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers ☆490 · Updated 4 months ago
- Official implementation of Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for Talking Head Video Generation ☆237 · Updated last month
- [ICLR 2025] Animate-X: Universal Character Image Animation with Enhanced Motion Representation ☆377 · Updated 3 months ago
- [ECCV 2024 Oral] EDTalk - Official PyTorch Implementation ☆450 · Updated 3 months ago
- Stand-In is a lightweight, plug-and-play framework for identity-preserving video generation. ☆702 · Updated last week
- Source code for the SIGGRAPH 2024 paper "X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention" ☆531 · Updated 2 months ago
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆574 · Updated 6 months ago
- [ICLR 2025] Animate-X - PyTorch Implementation ☆304 · Updated 11 months ago
- Official implementation of EMOPortraits: Emotion-enhanced Multimodal One-shot Head Avatars ☆391 · Updated 8 months ago
- Diffusion-based Portrait and Animal Animation ☆849 · Updated 2 weeks ago
- 🤢 LipSick: Fast, High Quality, Low Resource Lipsync Tool 🤮 ☆222 · Updated last year
- A 2D customized lip-sync model for high-fidelity real-time driving. ☆118 · Updated 6 months ago
- [ICLR 2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆375 · Updated last month
- [AAAI 2026] EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation ☆678 · Updated last month
- Fast running Live Portrait with TensorRT and ONNX models ☆172 · Updated last year
- [CVPR 2025 Highlight] X-Dyna: Expressive Dynamic Human Image Animation ☆261 · Updated 10 months ago
- [INTERSPEECH'24] Official repository for "MultiTalk: Enhancing 3D Talking Head Generation Across Languages with Multilingual Video Datase…" ☆190 · Updated last year
- HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning ☆1,039 · Updated this week
- Emote Portrait Alive - using AI to reverse-engineer code from the white paper (abandoned) ☆185 · Updated last year
- [SIGGRAPH 2025] Official code of the paper "FlexiAct: Towards Flexible Action Control in Heterogeneous Scenarios" ☆345 · Updated last month
- [CVPR 2025 Workshop] CatV2TON is a lightweight DiT-based visual virtual try-on model, capable of supporting try-on for both images and vi… ☆189 · Updated 10 months ago
- MoCha: End-to-End Video Character Replacement without Structural Guidance ☆521 · Updated last month
- MagicTryOn is a video virtual try-on framework based on a large-scale video diffusion Transformer. ☆492 · Updated this week
- [SIGGRAPH Asia 25] Voost: A Unified and Scalable Diffusion Transformer for Bidirectional Virtual Try-On and Try-Off ☆331 · Updated 2 months ago