aiha-lab / MX-QLLM
LLM Inference with Microscaling Format
☆11 · Updated last month
Alternatives and similar repositories for MX-QLLM:
Users interested in MX-QLLM are comparing it to the repositories listed below.
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization ☆25 · Updated 2 months ago
- ☆16 · Updated last month
- The official PyTorch implementation of the NeurIPS2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer L… ☆46 · Updated 2 years ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆31 · Updated last year
- PyTorch implementation of our paper accepted by ICML 2024 -- CaM: Cache Merging for Memory-efficient LLMs Inference ☆29 · Updated 5 months ago
- ☆21 · Updated last month
- The official implementation of the DAC 2024 paper GQA-LUT ☆11 · Updated 3 months ago
- TidalDecode: A Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆24 · Updated this week
- AFPQ code implementation ☆18 · Updated last year
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… ☆53 · Updated 9 months ago
- ☆22 · Updated last month
- Codebase for ICML'24 paper: Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs ☆24 · Updated 5 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆27 · Updated 6 months ago
- [ECCV 2022] SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning ☆19 · Updated 2 years ago
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆13 · Updated 4 months ago
- SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models ☆26 · Updated 4 months ago
- Squeezed Attention: Accelerating Long Prompt LLM Inference ☆29 · Updated 3 weeks ago
- [TMLR] Official PyTorch implementation of paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" ☆30 · Updated 3 months ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆36 · Updated 9 months ago
- It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher [CVPR 2022 Oral] ☆30 · Updated 2 years ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆20 · Updated 9 months ago
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆20 · Updated 2 weeks ago
- ☆26 · Updated 8 months ago
- BESA is a differentiable weight pruning technique for large language models. ☆14 · Updated 9 months ago
- Official repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆45 · Updated last month
- ACL 2023 ☆38 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆85 · Updated this week
- This repo contains the source code for: Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs ☆32 · Updated 4 months ago
- Official code for Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM ☆13 · Updated 11 months ago
- Official implementation of the EMNLP23 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… ☆44 · Updated last year