thu-nics / MixDQ
[ECCV24] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization
☆49 · Nov 27, 2024 · Updated last year
Alternatives and similar repositories for MixDQ
Users interested in MixDQ are comparing it to the repositories listed below.
- [WSDM'24 Oral] The official implementation of paper <DeSCo: Towards Generalizable and Scalable Deep Subgraph Counting> ☆23 · Mar 11, 2024 · Updated last year
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆149 · Mar 21, 2025 · Updated 10 months ago
- [ECCV24] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization ☆14 · Nov 27, 2024 · Updated last year
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆75 · Mar 17, 2025 · Updated 10 months ago
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆23 · Nov 11, 2025 · Updated 3 months ago
- Code implementation of GPTAQ (https://arxiv.org/abs/2504.02692) ☆81 · Jul 28, 2025 · Updated 6 months ago
- [ICCV'25] The official code of paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models" ☆69 · Jan 13, 2026 · Updated last month
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- FastCache: Fast Caching for Diffusion Transformer Through Learnable Linear Approximation [Efficient ML Model] ☆46 · Jan 27, 2026 · Updated 2 weeks ago
- [CVPR 2024 Highlight & TPAMI 2025] This is the official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for… ☆108 · Sep 29, 2025 · Updated 4 months ago
- [ICCV 2023] Q-Diffusion: Quantizing Diffusion Models. ☆370 · Mar 21, 2024 · Updated last year
- Code Repository of Evaluating Quantized Large Language Models ☆135 · Sep 8, 2024 · Updated last year
- The first open source triton inference engine for Stable Diffusion, specifically for sdxl ☆12 · Nov 27, 2023 · Updated 2 years ago
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆16 · Feb 15, 2025 · Updated 11 months ago
- ☆85 · Jan 23, 2025 · Updated last year
- [ICLR 2025] Distilled Decoding 1: One-step Sampling of Image Auto-regressive Models with Flow Matching ☆20 · Apr 21, 2025 · Updated 9 months ago
- ☆109 · Nov 27, 2024 · Updated last year
- Diffusion Model as a Noise-Aware Latent Reward Model for Step-Level Preference Optimization ☆61 · Sep 19, 2025 · Updated 4 months ago
- The official implementation of PTQD: Accurate Post-Training Quantization for Diffusion Models ☆103 · Mar 12, 2024 · Updated last year
- This is the official repo for the paper "Accelerating Parallel Sampling of Diffusion Models" Tang et al. ICML 2024 https://openreview.net… ☆16 · Jul 19, 2024 · Updated last year
- lite attention implemented over flash attention 3 ☆45 · Updated this week
- AAAI2023 Efficient and Accurate Models towards Practical Deep Learning Baseline ☆13 · Nov 29, 2022 · Updated 3 years ago
- A simple forward-inference framework built by dissecting MNN (for study!) ☆23 · Feb 21, 2021 · Updated 4 years ago
- ☆19 · Dec 10, 2021 · Updated 4 years ago
- The official implementation of "EDA-DM: Enhanced Distribution Alignment for Post-Training Quantization of Diffusion Models" ☆21 · Jul 8, 2025 · Updated 7 months ago
- The official code of paper "OMS-DPM: Optimizing Model Schedule for Diffusion Probabilistic Model" accepted by ICML 2023 ☆24 · Oct 11, 2023 · Updated 2 years ago
- ☆83 · Aug 18, 2023 · Updated 2 years ago
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference ☆54 · Mar 24, 2024 · Updated last year
- Aiming to integrate most existing feature caching-based diffusion acceleration schemes into a unified framework. ☆88 · Oct 23, 2025 · Updated 3 months ago
- [ICML 2025, NeurIPS 2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ☆627 · Feb 3, 2026 · Updated last week
- ☆57 · Dec 16, 2025 · Updated last month
- Code for the paper “Four Over Six: More Accurate NVFP4 Quantization with Adaptive Block Scaling” ☆122 · Updated this week
- [ICCV 2025] QuEST: Efficient Finetuning for Low-bit Diffusion Models ☆55 · Jun 26, 2025 · Updated 7 months ago
- Curated list of methods that focus on improving the efficiency of diffusion models ☆44 · Jul 9, 2024 · Updated last year
- (ICLR 2025) BinaryDM: Accurate Weight Binarization for Efficient Diffusion Models ☆26 · Oct 4, 2024 · Updated last year
- [ECCV 2024] 3DPE: Real-time 3D-aware Portrait Editing from a Single Image ☆22 · Sep 15, 2025 · Updated 4 months ago
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models. ☆23 · Mar 29, 2024 · Updated last year
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆172 · Nov 26, 2025 · Updated 2 months ago
- [ICLR 2025] FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality ☆259 · Dec 27, 2024 · Updated last year