microsoft/RAS
An open-source implementation of Regional Adaptive Sampling (RAS), a novel diffusion model sampling strategy that introduces regional variability in sampling steps.
☆119 · Updated last month
Alternatives and similar repositories for RAS:
Users interested in RAS are comparing it to the libraries listed below.
- ☆130 · Updated this week
- Adaptive Caching for Faster Video Generation with Diffusion Transformers ☆142 · Updated 4 months ago
- Accelerating Diffusion Transformers with Token-wise Feature Caching ☆112 · Updated 2 weeks ago
- ☆153 · Updated 2 months ago
- [NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising ☆192 · Updated last month
- Official PyTorch implementation of the paper "CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up". ☆199 · Updated last month
- GenEval: An object-focused framework for evaluating text-to-image alignment ☆204 · Updated 3 weeks ago
- [ICLR 2025] OpenVid-1M: A Large-Scale High-Quality Dataset for Text-to-video Generation ☆256 · Updated last month
- [ICLR 2025] FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality ☆205 · Updated 3 months ago
- SpeeD: A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training ☆177 · Updated 2 months ago
- The code of our work "Golden Noise for Diffusion Models: A Learning Framework". ☆144 · Updated last month
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆98 · Updated 8 months ago
- Official PyTorch and Diffusers Implementation of "LinFusion: 1 GPU, 1 Minute, 16K Image" ☆297 · Updated 3 months ago
- [CVPR 2025] PAR: Parallelized Autoregressive Visual Generation. https://yuqingwang1029.github.io/PAR-project/ ☆127 · Updated last week
- Video-Infinity generates long videos quickly using multiple GPUs without extra training. ☆174 · Updated 7 months ago
- 📚 Collection of awesome generation acceleration resources. ☆179 · Updated 2 weeks ago
- Context parallel attention that accelerates DiT model inference with dynamic caching ☆228 · Updated this week
- Multimodal Representation Alignment for Image Generation: Text-Image Interleaved Control Is Easier Than You Think! ☆78 · Updated 3 weeks ago
- ☆80 · Updated 4 months ago
- SpargeAttention: A training-free sparse attention that can accelerate any model inference. ☆328 · Updated 2 weeks ago
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. ☆41 · Updated 8 months ago
- [NeurIPS 2024] CV-VAE: A Compatible Video VAE for Latent Generative Video Models ☆268 · Updated 3 months ago
- STAR: Scale-wise Text-to-image generation via Auto-Regressive representations ☆137 · Updated last month
- Code repository for T2V-Turbo and T2V-Turbo-v2 ☆293 · Updated last month
- ☆49 · Updated last year
- X2I: Seamless Integration of Multimodal Understanding into Diffusion Transformer via Attention Distillation ☆44 · Updated this week
- DiT for VAE (and Video Generation) ☆32 · Updated 6 months ago
- Video Diffusion Alignment via Reward Gradients. We improve a variety of video diffusion models such as VideoCrafter, OpenSora, ModelScope… ☆250 · Updated 2 weeks ago
- [CVPR 2025] Official code of "DiTCtrl: Exploring Attention Control in Multi-Modal Diffusion Transformer for Tuning-Free Multi-Prompt Long… ☆240 · Updated last week
- Subjects200K dataset ☆103 · Updated 2 months ago