[ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation
☆151 · Updated Mar 21, 2025
Alternatives and similar repositories for ViDiT-Q
Users interested in ViDiT-Q are comparing it to the libraries listed below.
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆74 · Updated Sep 3, 2024
- [ECCV24] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization ☆49 · Updated Nov 27, 2024
- ☆191 · Updated Jan 14, 2025
- The official implementation of PTQD: Accurate Post-Training Quantization for Diffusion Models ☆103 · Updated Mar 12, 2024
- [ICLR 2024 Spotlight] This is the official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di… ☆68 · Updated Jun 4, 2024
- Implementation of Post-training Quantization on Diffusion Models (CVPR 2023) ☆141 · Updated Apr 1, 2023
- [ICCV 2025] QuEST: Efficient Finetuning for Low-bit Diffusion Models ☆57 · Updated Jun 26, 2025
- [ICCV 2025] QuantCache: Adaptive Importance-Guided Quantization with Hierarchical Latent and Layer Caching for Video Generation ☆15 · Updated Sep 26, 2025
- [CVPR 2024 Highlight & TPAMI 2025] This is the official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for… ☆108 · Updated Sep 29, 2025
- Code implementation of GPTAQ (https://arxiv.org/abs/2504.02692) ☆83 · Updated Jul 28, 2025
- Model Compression Toolbox for Large Language Models and Diffusion Models ☆761 · Updated Aug 14, 2025
- Quantized Attention on GPU ☆44 · Updated Nov 22, 2024
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆80 · Updated Mar 17, 2025
- [ICCV 2023] Q-Diffusion: Quantizing Diffusion Models. ☆370 · Updated Mar 21, 2024
- ☆15 · Updated Mar 21, 2025
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆336 · Updated Jul 2, 2024
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆260 · Updated Aug 9, 2025
- [ICML2025, NeurIPS2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ☆629 · Updated Feb 3, 2026
- Code for our ICCV 2025 paper "Adaptive Caching for Faster Video Generation with Diffusion Transformers" ☆167 · Updated Nov 5, 2024
- https://wavespeed.ai/ Context parallel attention that accelerates DiT model inference with dynamic caching ☆424 · Updated Jul 5, 2025
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆180 · Updated Oct 3, 2024
- PyTorch code for our paper "Progressive Binarization with Semi-Structured Pruning for LLMs" ☆13 · Updated Sep 28, 2025
- ⚡️Write HGEMM from scratch using Tensor Cores with WMMA, MMA, and CuTe APIs, achieving peak performance.⚡️ ☆150 · Updated May 10, 2025
- LLM Inference with Microscaling Format ☆34 · Updated Nov 12, 2024
- The official implementation of "Sparse-vDiT: Unleashing the Power of Sparse Attention to Accelerate Video Diffusion Transformers" (arXiv … ☆51 · Updated Jun 6, 2025
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆817 · Updated Mar 6, 2025
- Super-resolution; post-training quantization; model compression ☆14 · Updated Nov 10, 2023
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆172 · Updated Nov 26, 2025
- Code for NeurIPS 2024 paper: QuaRot, an end-to-end 4-bit inference of large language models. ☆485 · Updated Nov 26, 2024
- Code repo for the paper "SpinQuant: LLM quantization with learned rotations" ☆374 · Updated Feb 14, 2025
- [ICML 2025] This is the official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality… ☆53 · Updated Mar 25, 2025
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆269 · Updated Jul 6, 2025
- [ICLR2025] Accelerating Diffusion Transformers with Token-wise Feature Caching ☆210 · Updated Mar 14, 2025
- [ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. ☆952 · Updated Feb 25, 2026
- This is the official PyTorch implementation for the paper: Towards Accurate Post-training Quantization for Diffusion Models. (CVPR24 Poste… ☆38 · Updated Jun 4, 2024
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆60 · Updated Mar 25, 2025
- [ICLR2025 Spotlight] SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models ☆3,703 · Updated Feb 14, 2026
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… ☆23 · Updated Oct 1, 2025
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆222 · Updated Dec 15, 2023