aiha-lab / MX-QLLM
LLM Inference with Microscaling Format
☆34 · Nov 12, 2024 · Updated last year
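For context on the repository's subject: the microscaling (MX) formats from the OCP spec store tensors in small blocks (typically 32 elements) that share a single 8-bit power-of-two scale (E8M0), with each element held in a narrow type such as FP4 (E2M1). Below is a minimal NumPy sketch of MXFP4-style block quantization under those conventions; it is illustrative only, and the function name is hypothetical rather than MX-QLLM's API.

```python
import numpy as np

# Magnitudes representable by FP4 (E2M1) per the OCP MX spec.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_mxfp4(x, block=32):
    """Hypothetical MXFP4-style quantizer: each `block`-element group
    shares one power-of-two scale; elements are rounded to FP4."""
    x = x.reshape(-1, block)
    amax = np.abs(x).max(axis=1, keepdims=True)
    # Shared exponent chosen so the block max lands near FP4's largest
    # magnitude (6.0 = 1.5 * 2^2); E8M0 stores just this exponent.
    shared_exp = np.floor(np.log2(np.maximum(amax, 2.0**-126))) - 2
    scale = 2.0 ** shared_exp
    scaled = x / scale
    # Round each element to the nearest FP4 grid point, preserving sign.
    idx = np.abs(np.abs(scaled)[..., None] - FP4_GRID).argmin(axis=-1)
    q = np.sign(scaled) * FP4_GRID[idx]
    return q, scale  # a real kernel would pack q into 4-bit codes

x = np.random.randn(4, 32).astype(np.float32)
q, scale = quantize_mxfp4(x)
print("max abs error:", np.abs(q * scale - x).max())
```

In microscaling inference pipelines of this kind, weights (and optionally activations) are kept in such block-scaled low-bit formats and dequantized on the fly inside the GEMM kernel.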
Alternatives and similar repositories for MX-QLLM
Users interested in MX-QLLM are comparing it to the libraries listed below.
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆172 · Nov 26, 2025 · Updated 2 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆165 · Nov 11, 2025 · Updated 3 months ago
- [NeurIPS 2023] Token-Scaled Logit Distillation for Ternary Weight Generative Language Models ☆18 · Dec 6, 2023 · Updated 2 years ago
- BESA is a differentiable weight pruning technique for large language models. ☆17 · Mar 4, 2024 · Updated last year
- Official implementation of the EMNLP23 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… ☆50 · Oct 21, 2023 · Updated 2 years ago
- MobileLLM-R1 ☆75 · Sep 30, 2025 · Updated 4 months ago
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆220 · Dec 15, 2023 · Updated 2 years ago
- [ICML 2024] Official Repository for the paper "Transformers Get Stable: An End-to-End Signal Propagation Theory for Language Models" ☆10 · Jul 19, 2024 · Updated last year
- ☆14 · Apr 14, 2025 · Updated 9 months ago
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆56 · Nov 20, 2024 · Updated last year
- AFPQ code implementation ☆23 · Nov 6, 2023 · Updated 2 years ago
- ☆15 · Jan 12, 2026 · Updated last month
- Residual vector quantization for KV cache compression in large language models ☆11 · Oct 22, 2024 · Updated last year
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- [ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models ☆17 · Nov 4, 2025 · Updated 3 months ago
- Learning to Skip the Middle Layers of Transformers ☆17 · Aug 7, 2025 · Updated 6 months ago
- [ICML 2025] MxMoE: Mixed-precision Quantization for MoE with Accuracy and Performance Co-Design ☆22 · Jul 4, 2025 · Updated 7 months ago
- Quantize transformers to any learned arbitrary 4-bit numeric format ☆50 · Jan 25, 2026 · Updated 2 weeks ago
- Implementation of Microscaling data formats in SystemVerilog. ☆29 · Jul 6, 2025 · Updated 7 months ago
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert… ☆14 · Feb 4, 2025 · Updated last year
- Mixture-of-Basis-Experts for Compressing MoE-based LLMs ☆27 · Dec 24, 2025 · Updated last month
- [ICLR 2026] Official repo for "Spotlight on Token Perception for Multimodal Reinforcement Learning" ☆49 · Jan 30, 2026 · Updated 2 weeks ago
- [COLM 2025] Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" ☆67 · Jul 8, 2025 · Updated 7 months ago
- CoV: Chain-of-View Prompting for Spatial Reasoning ☆50 · Jan 23, 2026 · Updated 3 weeks ago
- Codes for our paper "AgentMonitor: A Plug-and-Play Framework for Predictive and Secure Multi-Agent Systems" ☆13 · Dec 13, 2024 · Updated last year
- ☆13 · Jul 25, 2024 · Updated last year
- ☆30 · Jul 22, 2024 · Updated last year
- Code for the paper "Four Over Six: More Accurate NVFP4 Quantization with Adaptive Block Scaling" ☆122 · Updated this week
- PyTorch implementation of "Oscillation-Reduced MXFP4 Training for Vision Transformers" on DeiT model pre-training ☆36 · Jun 20, 2025 · Updated 7 months ago
- Kinetics: Rethinking Test-Time Scaling Laws ☆86 · Jul 11, 2025 · Updated 7 months ago
- MICRO 2024 Evaluation Artifact for FuseMax ☆16 · Aug 26, 2024 · Updated last year
- ☆15 · Apr 11, 2024 · Updated last year
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache (see the sketch after this list) ☆356 · Nov 20, 2025 · Updated 2 months ago
- [ACL 2024] A novel QAT with Self-Distillation framework to enhance ultra low-bit LLMs. ☆134 · May 16, 2024 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆123 · Jul 4, 2025 · Updated 7 months ago
- ☆15 · Sep 22, 2024 · Updated last year
- TensorRT-in-Action is a GitHub repository that provides code examples for using TensorRT, with accompanying Jupyter Notebooks. ☆15 · Jun 1, 2023 · Updated 2 years ago
- super-resolution; post-training quantization; model compression ☆14 · Nov 10, 2023 · Updated 2 years ago
- Why Low-Precision Transformer Training Fails: An Analysis on Flash Attention ☆44 · Oct 16, 2025 · Updated 3 months ago
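As noted at the KIVI entry above, a recurring theme in this list is low-bit KV cache quantization. The following toy NumPy sketch shows asymmetric (min/zero-point) 2-bit quantization in the KIVI spirit, where keys share quantization parameters per channel and values per token. The grouping is simplified (KIVI also keeps a small recent window in full precision), and all names are hypothetical rather than the paper's code.

```python
import numpy as np

def quant_2bit_asym(x, axis):
    """Asymmetric 2-bit quantization: each slice along the reduced
    `axis` shares one (scale, zero-point) pair."""
    xmin = x.min(axis=axis, keepdims=True)
    xmax = x.max(axis=axis, keepdims=True)
    scale = (xmax - xmin) / 3.0                 # 2 bits -> 4 levels: 0..3
    scale = np.where(scale > 0, scale, 1.0)     # guard constant slices
    q = np.clip(np.round((x - xmin) / scale), 0, 3).astype(np.uint8)
    return q, scale, xmin

def dequant(q, scale, xmin):
    return q * scale + xmin

K = np.random.randn(256, 64)  # (tokens, head_dim)
V = np.random.randn(256, 64)
# KIVI's split: keys quantized per channel (stats taken across tokens),
# values per token (stats taken across channels).
qK, sK, zK = quant_2bit_asym(K, axis=0)
qV, sV, zV = quant_2bit_asym(V, axis=1)
print("key MAE:", np.abs(dequant(qK, sK, zK) - K).mean())
```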