character-ai / Ovi
☆1,607 (updated Nov 15, 2025)
Alternatives and similar repositories for Ovi
Users interested in Ovi are comparing it to the repositories listed below.
- ☆76 (updated Dec 8, 2025)
- ComfyUI custom nodes for Ovi joint video+audio generation (☆46, updated Oct 6, 2025)
- [ICLR 2026] LongLive: Real-time Interactive Long Video Generation (☆1,040, updated Jan 27, 2026)
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation (☆1,204, updated Oct 15, 2025)
- DreamID-V: Bridging the Image-to-Video Gap for High-Fidelity Face Swapping via Diffusion Transformer (☆504, updated Jan 13, 2026)
- Pusa: Thousands Timesteps Video Diffusion Model (☆672, updated Feb 6, 2026)
- [ICCV 2025] Official implementations for paper: VACE: All-in-One Video Creation and Editing (☆3,625, updated Oct 17, 2025)
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment (☆1,479, updated Sep 11, 2025)
- 📹 A more flexible framework that can generate videos at any resolution and create videos from images (☆1,891, updated this week)
- The official code of Yume (☆612, updated Jan 14, 2026)
- ☆2,053 (updated Dec 20, 2025)
- FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers (☆501, updated Aug 20, 2025)
- Scalable and memory-optimized training of diffusion models (☆1,335, updated Jun 4, 2025)
- (CVPR 2025) From Slow Bidirectional to Fast Autoregressive Video Diffusion Models (☆1,202, updated Aug 7, 2025)
- Official repository for LTX-Video (☆9,235, updated Jan 5, 2026)
- Official inference code and LongText-Bench benchmark for our paper X-Omni (https://arxiv.org/pdf/2507.22058) (☆420, updated Aug 26, 2025)
- ☆173 (updated Sep 17, 2025)
- [SIGGRAPH 2025] Official code of the paper "FlexiAct: Towards Flexible Action Control in Heterogeneous Scenarios" (☆344, updated Oct 30, 2025)
- Official implementation for "Story2Board: A Training‑Free Approach for Expressive Storyboard Generation" (☆229, updated Aug 22, 2025)
- [AAAI 2026] Personalize Anything for Free with Diffusion Transformer (☆353, updated Mar 20, 2025)
- [ICLR 2026] Official Repo For "BindWeave: Subject-Consistent Video Generation via Cross-Modal Integration" (☆367, updated Jan 28, 2026)
- HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning (☆1,133, updated Jan 25, 2026)
- A unified inference and post-training framework for accelerated video generation (☆3,059, updated this week)
- [NeurIPS 2024] Boosting the performance of consistency models with PCM! (☆512, updated Dec 11, 2024)
- High-Quality Text-to-Video Generation with Alpha Channel (☆329, updated Dec 16, 2025)
- ☆2,498 (updated Jul 16, 2025)
- [AAAI 2026] FlashVideo: Flowing Fidelity to Detail for Efficient High-Resolution Video Generation (☆456, updated Mar 5, 2025)
- A fast AI Video Generator for the GPU Poor. Supports Wan 2.1/2.2, Qwen Image, Hunyuan Video, LTX Video and Flux (☆4,363, updated this week)
- Official code for StoryMem: Multi-shot Long Video Storytelling with Memory (☆644, updated Jan 22, 2026)
- Official codebase for "Self Forcing: Bridging Training and Inference in Autoregressive Video Diffusion" (NeurIPS 2025 Spotlight) (☆3,135, updated Sep 12, 2025)
- Stand-In is a lightweight, plug-and-play framework for identity-preserving video generation (☆725, updated Dec 21, 2025)
- Official code for AccVideo: Accelerating Video Diffusion Model with Synthetic Dataset (☆271, updated Jun 10, 2025)
- [ICCV 2025] 🔥🔥 UNO: A Universal Customization Method for Both Single and Multi-Subject Conditioning (☆1,350, updated Sep 12, 2025)
- [ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis (☆1,620, updated Jan 26, 2026)
- Unofficial extension implementation of CausVid (☆73, updated Apr 28, 2025)
- Industry-level video foundation model for unified Text-to-Video (T2V) and Image-to-Video (I2V) generation (☆886, updated Aug 27, 2025)
- OmniTransfer: All-in-one Framework for Spatio-temporal Video Transfer (☆215, updated Jan 26, 2026)
- [Preprint 2025] Ditto: Scaling Instruction-Based Video Editing with a High-Quality Synthetic Dataset (☆566, updated Oct 29, 2025)
- [CVPR 2025] MMAudio: Taming Multimodal Joint Training for High-Quality Video-to-Audio Synthesis (☆2,087, updated Feb 6, 2026)