test-time-training / ttt-video-dit
Official PyTorch implementation of One-Minute Video Generation with Test-Time Training
☆2,051 · Updated 2 months ago
Alternatives and similar repositories for ttt-video-dit
Users interested in ttt-video-dit are comparing it to the repositories listed below.
- SkyReels V1: The first and most advanced open-source human-centric video foundation model ☆2,250 · Updated 5 months ago
- ☆3,091 · Updated 5 months ago
- MAGI-1: Autoregressive Video Generation at Scale ☆3,454 · Updated 2 months ago
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation ☆1,149 · Updated 2 months ago
- HunyuanVideo-I2V: A Customizable Image-to-Video Model based on HunyuanVideo ☆1,629 · Updated 3 months ago
- Official implementations for the paper "VACE: All-in-One Video Creation and Editing" ☆3,109 · Updated 3 months ago
- A SOTA open-source image editing model, which aims to provide performance comparable to closed-source models like GPT-4o and Gem… ☆1,584 · Updated 3 weeks ago
- SkyReels-V2: Infinite-length Film Generative model ☆4,240 · Updated last week
- Implementation of "EasyControl: Adding Efficient and Flexible Control for Diffusion Transformer" (ICCV 2025) ☆1,643 · Updated 3 weeks ago
- Wan: Open and Advanced Large-Scale Video Generative Models ☆3,475 · Updated 2 weeks ago
- Open-source unified multimodal model ☆4,829 · Updated last month
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment ☆1,368 · Updated last month
- Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation ☆2,211 · Updated this week
- ☆1,800 · Updated 2 months ago
- [ICCV 2025] 🔥🔥 UNO: A Universal Customization Method for Both Single and Multi-Subject Conditioning ☆1,207 · Updated this week
- CogView4, CogView3-Plus and CogView3 (ECCV 2024) ☆1,083 · Updated 4 months ago
- [CVPR 2025 Highlight] Video Generation Foundation Models: https://saiyan-world.github.io/goku/ ☆2,876 · Updated 6 months ago
- Qwen-Image is a powerful image generation foundation model capable of complex text rendering and precise image editing. ☆2,981 · Updated this week
- ☆751 · Updated 6 months ago
- Generating Immersive, Explorable, and Interactive 3D Worlds from Words or Pixels with Hunyuan3D World Model ☆1,851 · Updated this week
- [ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis ☆1,491 · Updated last month
- [ICCV'25 Oral] ReCamMaster: Camera-Controlled Generative Rendering from A Single Video ☆1,385 · Updated 3 weeks ago
- Image editing is worth a single LoRA! 0.1% training data for fantastic image editing! Training released! Surpasses GPT-4o in ID persisten… ☆1,886 · Updated 3 months ago
- ☆1,023 · Updated 3 months ago
- ☆2,392 · Updated last month
- The official implementation of the CVPR'25 Oral paper "Go-with-the-Flow: Motion-Controllable Video Diffusion Models Using Real-Time Warped No… ☆1,003 · Updated 2 weeks ago
- A unified inference and post-training framework for accelerated video generation. ☆1,994 · Updated last week
- OmniGen2: Exploration to Advanced Multimodal Generation ☆3,743 · Updated 3 weeks ago
- Implementation of [CVPR 2025] "DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for Customized Manga Generation" ☆845 · Updated 6 months ago
- Stable Virtual Camera: Generative View Synthesis with Diffusion Models ☆1,410 · Updated 2 months ago