NUS-HPC-AI-Lab / SpeeD
SpeeD: A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training
☆178 · Updated 4 months ago
Alternatives and similar repositories for SpeeD
Users interested in SpeeD are comparing it to the libraries listed below.
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆105 · Updated 11 months ago
- ☆165 · Updated 6 months ago
- STAR: Scale-wise Text-to-image generation via Auto-Regressive representations ☆142 · Updated 4 months ago
- Official implementation of the paper: REPA-E: Unlocking VAE for End-to-End Tuning of Latent Diffusion Transformers ☆263 · Updated 2 months ago
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. ☆46 · Updated 11 months ago
- PyTorch code and model checkpoints for Score identity Distillation (SiD) and its adversarial version (SiDA) ☆124 · Updated 2 months ago
- The official implementation for "MonoFormer: One Transformer for Both Diffusion and Autoregression" ☆87 · Updated 8 months ago
- [ICLR 2025] OpenVid-1M: A Large-Scale High-Quality Dataset for Text-to-video Generation ☆304 · Updated 3 weeks ago
- Adaptive Caching for Faster Video Generation with Diffusion Transformers ☆150 · Updated 7 months ago
- Scaling Diffusion Transformers with Mixture of Experts ☆339 · Updated 9 months ago
- Official code for the Diff-Instruct algorithm for one-step diffusion distillation ☆76 · Updated 5 months ago
- Code for MetaMorph: Multimodal Understanding and Generation via Instruction Tuning ☆191 · Updated 2 months ago
- [CVPR 2025] Exploring the Deep Fusion of Large Language Models and Diffusion Transformers for Text-to-Image Synthesis ☆109 · Updated last month
- Score identity Distillation with Long and Short Guidance for One-Step Text-to-Image Generation ☆74 · Updated 3 months ago
- My implementation of Adversarial Diffusion Distillation (https://arxiv.org/pdf/2311.17042.pdf)