[ECCV24] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization
☆49 · Updated Nov 27, 2024
Alternatives and similar repositories for MixDQ
Users interested in MixDQ are comparing it to the repositories listed below.
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆151 · Updated Mar 21, 2025
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆80 · Updated Mar 17, 2025
- [NeurIPS'25] The official code implementation for the paper "R2R: Efficiently Navigating Divergent Reasoning Paths with Small-Large Model Tok…" ☆78 · Updated this week
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆23 · Updated Nov 11, 2025
- [ICCV'25] The official code of the paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models" ☆70 · Updated Jan 13, 2026
- Quantized Attention on GPU ☆44 · Updated Nov 22, 2024
- FastCache: Fast Caching for Diffusion Transformer Through Learnable Linear Approximation [Efficient ML Model] ☆46 · Updated Feb 17, 2026
- [CVPR 2024 Highlight & TPAMI 2025] The official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for…" ☆108 · Updated Sep 29, 2025
- [DATE'23] The official code for the paper "CLAP: Locality Aware and Parallel Triangle Counting with Content Addressable Memory" ☆23 · Updated Jan 19, 2026
- [ICIP 2025] Scribble-Guided Diffusion for Training-free Text-to-Image Generation ☆24 · Updated Oct 2, 2024
- [ICCV 2023] Q-Diffusion: Quantizing Diffusion Models ☆370 · Updated Mar 21, 2024
- Code needed to reproduce results from my ICLR 2019 paper on fixed-point quantization of the backprop algorithm ☆10 · Updated Jan 24, 2019
- Improved the performance of 8-bit PTQ4DM, especially on FID ☆11 · Updated Aug 30, 2023
- Code repository of "Evaluating Quantized Large Language Models" ☆136 · Updated Sep 8, 2024
- Context-parallel attention that accelerates DiT model inference with dynamic caching (https://wavespeed.ai/) ☆424 · Updated Jul 5, 2025
- [CoLM'25] The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆155 · Updated Jan 14, 2026
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆16 · Updated Feb 15, 2025
- ☆87 · Updated Jan 23, 2025
- Demo for Qwen2.5-VL-3B-Instruct on an Axera device ☆17 · Updated Sep 3, 2025
- ☆14 · Updated Aug 9, 2024
- ☆109 · Updated Nov 27, 2024
- Diffusion Model as a Noise-Aware Latent Reward Model for Step-Level Preference Optimization ☆61 · Updated Sep 19, 2025
- The official implementation of "PTQD: Accurate Post-Training Quantization for Diffusion Models" ☆103 · Updated Mar 12, 2024
- The official repo for the paper "Accelerating Parallel Sampling of Diffusion Models" (Tang et al., ICML 2024; https://openreview.net…) ☆16 · Updated Jul 19, 2024
- aw_nas: A Modularized and Extensible NAS Framework ☆252 · Updated Nov 25, 2025
- Wan: Open and Advanced Large-Scale Video Generative Models ☆28 · Updated Jul 28, 2025
- Testing prompts with SDXL ☆16 · Updated Jul 28, 2023
- Lite attention implemented on top of FlashAttention-3 ☆45 · Updated this week
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance ⚡️ ☆150 · Updated May 10, 2025
- ☆19 · Updated Dec 10, 2021
- A simple forward-inference framework extracted from MNN (for study!) ☆23 · Updated Feb 21, 2021
- The official code of the paper "OMS-DPM: Optimizing Model Schedule for Diffusion Probabilistic Model", accepted at ICML 2023 ☆24 · Updated Oct 11, 2023
- The official implementation of "EDA-DM: Enhanced Distribution Alignment for Post-Training Quantization of Diffusion Models" ☆21 · Updated Jul 8, 2025
- ☆191 · Updated Jan 14, 2025
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference ☆54 · Updated Mar 24, 2024
- [ICLR 2025 Oral] Code for the paper "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" ☆161 · Updated Oct 13, 2025
- Port of the C++ Munkres implementation to a TensorFlow interface ☆16 · Updated Oct 24, 2017
- ☆29 · Updated Dec 23, 2024
- Aiming to integrate most existing feature-caching-based diffusion acceleration schemes into a unified framework ☆91 · Updated Oct 23, 2025