lliai / Awesome-Efficient-Diffusion-Models
A paper survey of efficient computation for large-scale models.
☆34 · Updated 5 months ago
Alternatives and similar repositories for Awesome-Efficient-Diffusion-Models
Users interested in Awesome-Efficient-Diffusion-Models are comparing it to the libraries listed below.
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆104 · Updated 10 months ago
- This is the official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality" ☆47 · Updated 2 months ago
- [ICLR 2025] Implementation of "Accelerating Auto-regressive Text-to-Image Generation with Training-free Speculative Jacobi Decoding" ☆39 · Updated last month
- The official implementation for "MonoFormer: One Transformer for Both Diffusion and Autoregression" ☆86 · Updated 7 months ago
- Official implementation of "Next Block Prediction: Video Generation via Semi-Autoregressive Modeling" ☆31 · Updated 3 months ago
- VidKV: Plug-and-Play 1.x-Bit KV Cache Quantization for Video Large Language Models ☆19 · Updated 2 months ago
- Code accompanying the paper "Toward Guidance-Free AR Visual Generation via Condition Contrastive Alignment" ☆32 · Updated 3 months ago
- Official implementation of "Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization" ☆77 · Updated last year
- [ECCV 2024] Official PyTorch implementation of "Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts" ☆43 · Updated 11 months ago
- Curated list of methods that focus on improving the efficiency of diffusion models ☆45 · Updated 10 months ago
- A PyTorch implementation of the paper "Revisiting Non-Autoregressive Transformers for Efficient Image Synthesis" ☆45 · Updated 11 months ago
- The official repo of continuous speculative decoding ☆26 · Updated 2 months ago
- ☆13 · Updated 2 months ago
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. ☆46 · Updated 10 months ago
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models. ☆22 · Updated last year
- The official code implementation for the paper "R2R: Efficiently Navigating Divergent Reasoning Paths with Small-Large Model Token Routing" ☆24 · Updated this week
- [ICML 2025 Spotlight] Direct Discriminative Optimization: Supercharging Diffusion/Autoregressive with GAN ☆33 · Updated this week
- This repository is the implementation of the paper "Training Free Pretrained Model Merging" (CVPR 2024) ☆28 · Updated last year
- Fast-Slow Thinking for Large Vision-Language Model Reasoning ☆14 · Updated last month
- Adapting LLaMA Decoder to Vision Transformer ☆28 · Updated last year
- A Collection of Papers on Diffusion Language Models ☆60 · Updated this week
- ☆74 · Updated 2 weeks ago
- ☆33 · Updated 4 months ago
- ✈️ Towards Stabilized and Efficient Diffusion Transformers through Long-Skip-Connections with Spectral Constraints ☆67 · Updated 2 months ago
- Data distillation benchmark ☆64 · Updated this week
- Codebase for the paper "Elucidating the Design Space of Language Models for Image Generation" ☆45 · Updated 6 months ago
- Denoising Diffusion Step-aware Models (ICLR 2024) ☆61 · Updated last year
- Code for the paper "Principal Components" Enable A New Language of Images ☆41 · Updated last month
- ☆78 · Updated 2 months ago
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆50 · Updated 9 months ago