Dere-Wah / Self-Forcing-Endless
Make self forcing endless. Add cache purging. Add prompt controllability.
☆68 · Updated 4 months ago
Alternatives and similar repositories for Self-Forcing-Endless
Users interested in Self-Forcing-Endless are comparing it to the repositories listed below
- Collection of scripts to build small-scale datasets for fine-tuning video generation models. ☆78 · Updated 9 months ago
- The official implementation of "RepVideo: Rethinking Cross-Layer Representation for Video Generation" ☆123 · Updated 11 months ago
- This is the official implementation of SG-I2V: Self-Guided Trajectory Control in Image-to-Video Generation. ☆114 · Updated last year
- DC-VideoGen: Efficient Video Generation with Deep Compression Video Autoencoder ☆175 · Updated 3 months ago
- Learning Motion from Low-Rank Adaptation ☆46 · Updated last year
- Official PyTorch Implementation of "SVG-T2I: Scaling up Text-to-Image Latent Diffusion Model Without Variational Autoencoder" ☆118 · Updated 3 weeks ago
- Frame Guidance: Training-Free Guidance for Frame-Level Control in Video Diffusion Model (arXiv 2025) ☆38 · Updated 6 months ago
- An official implementation of SwapAnyone. ☆72 · Updated 10 months ago
- This is the project for "Any2Caption: Interpreting Any Condition to Caption for Controllable Video Generation" ☆49 · Updated 9 months ago
- UniVideo: Unified Understanding, Generation, and Editing for Videos ☆95 · Updated last week
- ☆106 · Updated 4 months ago
- VideoCoF: Unified Video Editing with Temporal Reasoner ☆123 · Updated last week
- ☆85 · Updated last year
- ☆29 · Updated 9 months ago
- This repository contains the code for the NeurIPS 2024 paper SF-V: Single Forward Video Generation Model. ☆99 · Updated last year
- ☆52 · Updated last week
- Official implementation of UniCtrl: Improving the Spatiotemporal Consistency of Text-to-Video Diffusion Models via Training-Free Unified … ☆73 · Updated last year
- Blending Custom Photos with Video Diffusion Transformers ☆48 · Updated 11 months ago
- [NeurIPS 2024] Official Implementation of Attention Interpolation of Text-to-Image Diffusion ☆107 · Updated last year
- [AAAI 2026] Official implementation of DreamRunner: Fine-Grained Storytelling Video Generation with Retrieval-Augmented Motion Adaptation ☆76 · Updated 7 months ago
- This is the official implementation of "T-LoRA: Single Image Diffusion Model Customization Without Overfitting" ☆125 · Updated 6 months ago
- ☆66 · Updated last year
- Omegance: A Single Parameter for Various Granularities in Diffusion-Based Synthesis (ICCV 2025) ☆52 · Updated 3 months ago
- Distilling Diversity and Control in Diffusion Models ☆50 · Updated 8 months ago
- [WACV 2025] MegaFusion: Extend Diffusion Models towards Higher-resolution Image Generation without Further Tuning ☆96 · Updated 8 months ago
- Official implementation for "pOps: Photo-Inspired Diffusion Operators" ☆84 · Updated last year
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆88 · Updated last year
- Concat-ID: Towards Universal Identity-Preserving Video Synthesis ☆65 · Updated 8 months ago
- HyperMotion is a pose-guided human image animation framework based on a large-scale video diffusion Transformer. ☆130 · Updated 6 months ago
- pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation ☆245 · Updated 2 weeks ago