prathebaselva / FORA
FORA introduces a simple yet effective caching mechanism for the Diffusion Transformer architecture, enabling faster inference sampling.
☆28 · updated 4 months ago
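The caching idea described above can be sketched in a few lines: recompute a transformer block's output only every few denoising steps and reuse the cached activation in between. This is a hypothetical plain-Python illustration, not FORA's actual implementation; the `CachedBlock` wrapper and `refresh_interval` parameter are invented names for the sketch.

```python
class CachedBlock:
    """Step-wise feature caching for an expensive block computation
    (illustrative sketch only, not FORA's real code)."""

    def __init__(self, fn, refresh_interval=3):
        self.fn = fn                          # the expensive block computation
        self.refresh_interval = refresh_interval
        self.cache = None                     # last computed activation
        self.calls = 0                        # counts real computations

    def __call__(self, x, step):
        # Recompute every `refresh_interval` denoising steps; otherwise
        # reuse the activation cached at the most recent refresh step.
        if self.cache is None or step % self.refresh_interval == 0:
            self.calls += 1
            self.cache = self.fn(x)
        return self.cache


block = CachedBlock(lambda x: [v * 2 for v in x], refresh_interval=3)
outputs = [block([1, 2, 3], step=s) for s in range(6)]
print(block.calls)  # 2 real computations instead of 6
```

Over six denoising steps, the block is only evaluated at steps 0 and 3; the other four steps return the cached result, which is where the inference speed-up comes from (at some cost in fidelity, controlled by the refresh interval).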
Related projects
Alternatives and complementary repositories for FORA
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆71 · updated 3 months ago
- Adaptive Caching for Faster Video Generation with Diffusion Transformers ☆60 · updated this week
- Rectified Diffusion: Straightness Is Not Your Need ☆117 · updated last week
- PyTorch code for Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆33 · updated 2 months ago
- 📚 Collection of awesome generation acceleration resources. ☆39 · updated this week
- Official PyTorch Implementation of "Alleviating Distortion in Image Generation via Multi-Resolution Diffusion Models" ☆30 · updated last month
- [ICLR 2024 Spotlight] Official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di… ☆50 · updated 5 months ago
- Official PyTorch implementation of the paper "T-Stitch: Accelerating Sampling in Pre-trained Diffusion Models with Trajectory Stitching" ☆95 · updated 8 months ago
- "SlimFlow: Training Smaller One-Step Diffusion Models with Rectified Flow", Yuanzhi Zhu, Xingchao Liu, Qiang Liu ☆38 · updated 2 weeks ago
- Official code for the Diff-Instruct algorithm for one-step diffusion distillation ☆46 · updated 7 months ago
- [ECCV 2024] Official PyTorch implementation of "Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts" ☆32 · updated 4 months ago
- SpeeD: A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training ☆160 · updated 3 weeks ago
- CutDiffusion: A Simple, Fast, Cheap, and Strong Diffusion Extrapolation Method ☆26 · updated 6 months ago
- Vico: Compositional Video Generation as Flow Equalization ☆50 · updated 4 months ago
- Open(MM)DiT: An Easy, Fast and Memory-Efficient System for (MM)DiT Training and Inference ☆21 · updated 7 months ago
- Accelerating Diffusion Transformers with Token-wise Feature Caching ☆19 · updated this week
- Official PyTorch Implementation of "Scalable Autoregressive Image Generation with Mamba" ☆108 · updated 2 months ago
- Scaling RWKV-Like Architectures for Diffusion Models ☆117 · updated 6 months ago
- STAR: Scale-wise Text-to-image Generation via Auto-Regressive Representations ☆117 · updated 4 months ago
- Codebase for the paper "Improving the Training of Rectified Flows" ☆79 · updated 3 weeks ago
- Scaling Diffusion Transformers with Mixture of Experts ☆202 · updated 2 months ago
- [Interspeech 2024] LiteFocus is a tool designed to accelerate diffusion-based TTA models, implemented with AudioLDM2 as the base model. ☆33 · updated 3 months ago
- Implementation of "Accelerating Auto-regressive Text-to-Image Generation with Training-free Speculative Jacobi Decoding" ☆21 · updated this week
- Score identity Distillation with Long and Short Guidance for One-Step Text-to-Image Generation ☆33 · updated 2 months ago
- TerDiT: Ternary Diffusion Models with Transformers ☆61 · updated 4 months ago
- Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation ☆38 · updated 11 months ago