Code for data-aware compression of DeepSeek models
☆71, updated Dec 11, 2025
Alternatives and similar repositories for MoE-Quant
Users interested in MoE-Quant are comparing it to the libraries listed below.
- Work in progress (☆79, updated Nov 25, 2025)
- Code repo for the paper "SpinQuant: LLM quantization with learned rotations" (☆373, updated Feb 14, 2025)
- ☆97, updated Nov 16, 2025
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs (☆154, updated Aug 21, 2025)
- Fast and memory-efficient exact attention (☆18, updated Feb 23, 2026)
- LLM Inference with Microscaling Format (☆34, updated Nov 12, 2024)
- ☆16, updated Sep 27, 2023
- ☆19, updated Nov 5, 2025
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning (☆168, updated Nov 11, 2025)
- ☆38, updated Aug 7, 2025
- FP8 flash attention implemented with the cutlass library on the Ada architecture (☆79, updated Aug 12, 2024)
- Fast Hadamard transform in CUDA, with a PyTorch interface (☆285, updated Oct 19, 2025)
- ☆85, updated Jan 23, 2025
- Repository for the QUIK project, enabling the use of 4-bit kernels for generative inference (EMNLP 2024) (☆184, updated Apr 16, 2024)
- Code for the paper [ICLR 2025 Oral] "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" (☆161, updated Oct 13, 2025)
- ☆134, updated Aug 18, 2025
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More (☆66, updated Feb 12, 2025)
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens (☆1,025, updated Sep 4, 2024)
- Code for the ICML 2022 paper "SPDY: Accurate Pruning with Speedup Guarantees" (☆20, updated May 3, 2023)
- FireQ: Fast INT4-FP8 Kernel and RoPE-aware Quantization for LLM Inference Acceleration (☆20, updated Jun 27, 2025)
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models (☆24, updated Oct 5, 2024)
- Official implementation of the EMNLP 2023 paper "Outlier Suppression+: Accurate quantization of large language models by equivalent and opti…" (☆50, updated Oct 21, 2023)
- AFPQ code implementation (☆23, updated Nov 6, 2023)
- ☆65, updated Apr 26, 2025
- Benchmark evaluating LLMs on their ability to create and resist disinformation. Includes comprehensive testing across major models (Claud… (☆31, updated Mar 20, 2025)
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" (☆211, updated Nov 25, 2025)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆267, updated Dec 4, 2025)
- Official implementation of the ICLR 2024 paper AffineQuant (☆28, updated Mar 30, 2024)
- Official PyTorch implementation of "GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance" (ICML 2025) (☆50, updated Jul 6, 2025)
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs (☆388, updated Apr 13, 2025)
- PyTorch implementation of our UniQ method (IEEE Access): Training Multi-bit Quantized and Binarized Networks with A Learnable Symmetric … (☆11, updated Apr 7, 2021)
- Rethinking the User Interface of AI (☆32, updated this week)
- NVIDIA cuTile learn (☆163, updated Dec 9, 2025)
- ☆41, updated Mar 28, 2024
- ☆40, updated Nov 22, 2025
- Efficient non-uniform quantization with GPTQ for GGUF (☆60, updated Sep 17, 2025)
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation (☆149, updated Mar 21, 2025)
- ☆13, updated Nov 5, 2024
- Building the Virtuous Cycle for AI-driven LLM Systems (☆186, updated Feb 19, 2026)