Code for data-aware compression of DeepSeek models
☆72 · Dec 11, 2025 · Updated 4 months ago
Alternatives and similar repositories for MoE-Quant
Users interested in MoE-Quant are comparing it to the libraries listed below.
- Work in progress. ☆79 · Nov 25, 2025 · Updated 4 months ago
- ☆103 · Feb 26, 2026 · Updated last month
- Code repo for the paper "SpinQuant: LLM quantization with learned rotations" ☆383 · Feb 14, 2025 · Updated last year
- Code implementation of GPTAQ (https://arxiv.org/abs/2504.02692) ☆88 · Jul 28, 2025 · Updated 8 months ago
- ☆16 · Sep 27, 2023 · Updated 2 years ago
- QQQ is a hardware-optimized W4A8 quantization solution for LLMs; a minimal quantize/dequantize sketch of the W4A8 idea appears after this list. ☆154 · Aug 21, 2025 · Updated 7 months ago
- Code for the ICML 2022 paper "SPDY: Accurate Pruning with Speedup Guarantees" ☆20 · May 3, 2023 · Updated 2 years ago
- LLM Inference with Microscaling Format ☆34 · Nov 12, 2024 · Updated last year
- Official code for the paper "Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark" ☆30 · Jun 30, 2025 · Updated 9 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆44 · Feb 13, 2024 · Updated 2 years ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆175 · Nov 11, 2025 · Updated 5 months ago
- IndexCache: Accelerating Sparse Attention via Cross-Layer Index Reuse ☆73 · Mar 14, 2026 · Updated 3 weeks ago
- Fast and memory-efficient exact attention ☆20 · Mar 13, 2026 · Updated 3 weeks ago
- ☆38 · Aug 7, 2025 · Updated 8 months ago
- ☆87 · Jan 23, 2025 · Updated last year
- This repository contains the training code of ParetoQ, introduced in the work "ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization" ☆123 · Oct 15, 2025 · Updated 5 months ago
- Repository for the QUIK project, enabling the use of 4-bit kernels for generative inference (EMNLP 2024) ☆185 · Apr 16, 2024 · Updated last year
- Fast Hadamard transform in CUDA, with a PyTorch interface; see the Hadamard sketch after this list. ☆304 · Mar 10, 2026 · Updated last month
- ☆140 · Aug 18, 2025 · Updated 7 months ago
- FP8 flash attention implemented for the Ada architecture using the CUTLASS library ☆82 · Aug 12, 2024 · Updated last year
- [ICLR 2025, IEEE TPAMI 2026] Mixture Compressor & MC# ☆70 · Feb 12, 2025 · Updated last year
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆1,051 · Sep 4, 2024 · Updated last year
- ☆46 · May 24, 2025 · Updated 10 months ago
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" ☆212 · Nov 25, 2025 · Updated 4 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity; a minimal 2:4 pruning sketch appears after this list. ☆94 · Sep 4, 2024 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆280 · Nov 3, 2023 · Updated 2 years ago
- Code for the [ICLR 2025 Oral] paper "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" ☆168 · Oct 13, 2025 · Updated 5 months ago
- The official implementation of Bi-Mamba ☆15 · Oct 22, 2025 · Updated 5 months ago
- ☆65 · Apr 26, 2025 · Updated 11 months ago
- [ICML 2024] Official repository for the paper "Transformers Get Stable: An End-to-End Signal Propagation Theory for Language Models" ☆10 · Jul 19, 2024 · Updated last year
- Official implementation of the EMNLP 2023 paper "Outlier Suppression+: Accurate quantization of large language models by equivalent and opti…" ☆51 · Oct 21, 2023 · Updated 2 years ago
- Code accompanying the NeurIPS 2020 paper WoodFisher (Singh & Alistarh, 2020) ☆53 · Mar 8, 2021 · Updated 5 years ago
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆390 · Apr 13, 2025 · Updated 11 months ago
- ☆10 · Nov 16, 2024 · Updated last year
- [COLM 2025] Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" ☆72 · Jul 8, 2025 · Updated 9 months ago
- Vector Approximate Message Passing inference framework for GWAS ☆19 · Jan 14, 2026 · Updated 2 months ago
- An implementation of LazyLLM token pruning for the Llama 2 model family ☆13 · Jan 6, 2025 · Updated last year
- ☆12 · Aug 22, 2023 · Updated 2 years ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆822 · Mar 6, 2025 · Updated last year
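
Several of the entries above (QQQ, QServe, the FP16xINT4 kernel) center on low-bit weight/activation quantization. As a rough illustration of the W4A8 idea, here is a minimal per-channel symmetric quantize/dequantize round trip in PyTorch; the function names and granularity choices are illustrative assumptions, not the API of any repository listed here.

```python
import torch

def quantize_symmetric(x: torch.Tensor, n_bits: int, dim: int = -1):
    """Per-channel symmetric quantization: returns integer codes and fp scales."""
    qmax = 2 ** (n_bits - 1) - 1                       # 7 for 4-bit, 127 for 8-bit
    scale = x.abs().amax(dim=dim, keepdim=True).clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q.to(torch.int8), scale                     # real kernels pack two 4-bit codes per byte

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

# W4A8 round trip: 4-bit weights (per output channel), 8-bit activations (per token).
w = torch.randn(4096, 4096)
a = torch.randn(8, 4096)
qw, sw = quantize_symmetric(w, n_bits=4, dim=1)
qa, sa = quantize_symmetric(a, n_bits=8, dim=1)
y_ref = a @ w.t()
y_q = dequantize(qa, sa) @ dequantize(qw, sw).t()
print((y_ref - y_q).abs().mean() / y_ref.abs().mean())  # relative quantization error
```

In a real W4A8 pipeline the matmul runs directly on the integer codes in a fused kernel; the dequantized matmul above only reproduces the numerics.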
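
The fast Hadamard transform entry is a common building block of rotation-based quantization schemes such as SpinQuant and FlatQuant: rotating weights and activations by an orthonormal Hadamard matrix spreads outliers across channels before quantization. Below is a minimal sketch of the O(n log n) butterfly in plain PyTorch, as a stand-in for the repository's CUDA kernel.

```python
import torch

def fast_hadamard_transform(x: torch.Tensor) -> torch.Tensor:
    """Orthonormal Walsh-Hadamard transform over the last dim (power-of-2 length)."""
    n = x.shape[-1]
    assert n > 0 and n & (n - 1) == 0, "length must be a power of 2"
    batch = x.shape[:-1]
    h = 1
    while h < n:
        # Butterfly step: pair up blocks of length h and form (a + b, a - b).
        x = x.reshape(*batch, n // (2 * h), 2, h)
        a, b = x[..., 0, :], x[..., 1, :]
        x = torch.stack((a + b, a - b), dim=-2)
        h *= 2
    return x.reshape(*batch, n) / n ** 0.5

# The normalized transform is orthogonal and self-inverse (H @ H = I),
# so applying it twice recovers the input.
x = torch.randn(2, 1024)
assert torch.allclose(fast_hadamard_transform(fast_hadamard_transform(x)), x, atol=1e-5)
```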
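
The 2:4 sparsity entry refers to the semi-structured pattern accelerated by NVIDIA tensor cores since Ampere: at most two of every four consecutive weights may be nonzero. Here is a minimal magnitude-based pruning sketch; the greedy keep-top-2 heuristic is a standard baseline, assumed rather than taken from that repository.

```python
import torch

def prune_2_to_4(w: torch.Tensor) -> torch.Tensor:
    """Zero the 2 smallest-magnitude weights in every group of 4 along the last dim."""
    assert w.shape[-1] % 4 == 0, "last dim must be a multiple of 4"
    groups = w.reshape(*w.shape[:-1], -1, 4)
    # Keep the two largest-magnitude entries per group of four.
    idx = groups.abs().topk(2, dim=-1).indices
    mask = torch.zeros_like(groups).scatter_(-1, idx, 1.0)
    return (groups * mask).reshape(w.shape)

w = torch.randn(128, 128)
ws = prune_2_to_4(w)
assert ((ws.reshape(-1, 4) != 0).sum(dim=-1) <= 2).all()  # at most 2 nonzeros per 4
```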