Pusa: Thousands of Timesteps Video Diffusion Model
★672 · Feb 13, 2026 · Updated 3 weeks ago
Alternatives and similar repositories for Pusa-VidGen
Users interested in Pusa-VidGen are comparing it to the repositories listed below.
- Code for full finetuning of the Mochi model with FSDP (and CP) ★30 · Jul 15, 2025 · Updated 7 months ago
- A more flexible framework that can generate videos at any resolution and creates videos from images. ★1,948 · Updated this week
- (CVPR 2025) From Slow Bidirectional to Fast Autoregressive Video Diffusion Models ★1,244 · Aug 7, 2025 · Updated 7 months ago
- [ICCV 2025] Official implementations for paper: VACE: All-in-One Video Creation and Editing ★3,671 · Oct 17, 2025 · Updated 4 months ago
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation ★1,209 · Oct 15, 2025 · Updated 4 months ago
- Scalable and memory-optimized training of diffusion models ★1,341 · Jun 4, 2025 · Updated 9 months ago
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment ★1,490 · Sep 11, 2025 · Updated 5 months ago
- [AAAI 2026] FlashVideo: Flowing Fidelity to Detail for Efficient High-Resolution Video Generation ★459 · Mar 5, 2025 · Updated last year
- Official code for AccVideo: Accelerating Video Diffusion Model with Synthetic Dataset ★283 · Jun 10, 2025 · Updated 9 months ago
- A unified inference and post-training framework for accelerated video generation. ★3,127 · Updated this week
- MAGI-1: Autoregressive Video Generation at Scale ★3,647 · Jun 17, 2025 · Updated 8 months ago
- Enhance-A-Video: Better Generated Video for Free ★593 · Mar 17, 2025 · Updated 11 months ago
- A pipeline parallel training script for diffusion models. ★1,869 · Feb 8, 2026 · Updated last month
- Let's finetune video generation models! ★545 · Sep 15, 2025 · Updated 5 months ago
- SkyReels-A2: Compose anything in video diffusion transformers ★704 · Jun 3, 2025 · Updated 9 months ago
- An End-to-End Solution for High-Resolution and Long Video Generation Based on Transformer Diffusion ★2,250 · Mar 6, 2025 · Updated last year
- Achieves high-quality first-frame-guided video editing given a reference image, while maintaining flexibility for incorporating additi… ★324 · Feb 25, 2026 · Updated last week
- ★92 · Jul 11, 2025 · Updated 7 months ago
- ★227 · Jul 17, 2025 · Updated 7 months ago
- [arXiv] On-device Sora: Enabling Diffusion-Based Text-to-Video Generation for Mobile Devices ★133 · Nov 27, 2025 · Updated 3 months ago
- [SIGGRAPH 2025] Diffusion as Shader: 3D-aware Video Diffusion for Versatile Video Generation Control ★808 · Jun 9, 2025 · Updated 9 months ago
- Video Diffusion Alignment via Reward Gradients. We improve a variety of video diffusion models such as VideoCrafter, OpenSora, ModelScope… ★311 · Mar 12, 2025 · Updated 11 months ago
- [ICLR 2025] OpenVid-1M: A Large-Scale High-Quality Dataset for Text-to-Video Generation ★401 · May 30, 2025 · Updated 9 months ago
- An official implementation of EvoSearch: Scaling Image and Video Generation via Test-Time Evolutionary Search ★100 · Oct 3, 2025 · Updated 5 months ago
- [ICML 2025, NeurIPS 2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ★633 · Updated this week
- [ICCV'25 Best Paper Finalist] ReCamMaster: Camera-Controlled Generative Rendering from A Single Video ★1,756 · Nov 28, 2025 · Updated 3 months ago
- The official implementation of the CVPR'25 Oral paper "Go-with-the-Flow: Motion-Controllable Video Diffusion Models Using Real-Time Warped No… ★1,067 · Oct 13, 2025 · Updated 4 months ago
- FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers ★503 · Aug 20, 2025 · Updated 6 months ago
- UniAnimate-DiT: Human Image Animation with Large-Scale Video Diffusion Transformer ★836 · Apr 27, 2025 · Updated 10 months ago
- [ICCV 2025] UNO: A Universal Customization Method for Both Single and Multi-Subject Conditioning ★1,353 · Sep 12, 2025 · Updated 5 months ago
- The official code of Yume ★621 · Jan 14, 2026 · Updated last month
- [ICLR'25] SynCamMaster: Synchronizing Multi-Camera Video Generation from Diverse Viewpoints ★680 · May 23, 2025 · Updated 9 months ago
- [ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis ★1,621 · Jan 26, 2026 · Updated last month
- [NeurIPS 2025] Improving Video Generation with Human Feedback ★429 · Sep 24, 2025 · Updated 5 months ago
- Official codebase for "Self Forcing: Bridging Training and Inference in Autoregressive Video Diffusion" (NeurIPS 2025 Spotlight) ★3,180 · Sep 12, 2025 · Updated 5 months ago
- Code for the paper "Redefining Temporal Modeling in Video Diffusion: The Vectorized Timestep Approach" ★35 · Jan 2, 2026 · Updated 2 months ago
- Official code of VEnhancer: Generative Space-Time Enhancement for Video Generation ★567 · Sep 16, 2024 · Updated last year
- [ICLR 2025] Pyramidal Flow Matching for Efficient Video Generative Modeling ★3,162 · Dec 21, 2024 · Updated last year
- Lumina-T2X is a unified framework for Text to Any Modality Generation ★2,253 · Feb 16, 2025 · Updated last year