thu-nics / qllm-eval
Code repository for the paper "Evaluating Quantized Large Language Models".
☆89 · Updated last week
Related projects:
- Code repo for the paper "SpinQuant: LLM quantization with learned rotations" ☆79 · Updated this week
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper "Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models" ☆46 · Updated last year
- [ACL 2024] A quantization-aware training (QAT) framework with self-distillation for ultra-low-bit LLMs. ☆69 · Updated 4 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆161 · Updated 2 months ago
- Official implementation of the EMNLP 2023 paper "Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling" ☆38 · Updated 10 months ago
- QAQ: Quality Adaptive Quantization for LLM KV Cache ☆42 · Updated 5 months ago
- This repository contains integer operators on GPUs for PyTorch. ☆172 · Updated 11 months ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". ☆95 · Updated last year
- QQQ is a hardware-optimized W4A8 quantization solution. ☆59 · Updated last week
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models" ☆50 · Updated 6 months ago
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" ☆49 · Updated 2 months ago
- An all-in-one repository of awesome LLM-pruning papers, integrating useful resources and insights. ☆31 · Updated last month
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆258 · Updated 2 months ago
- [ICML 2024 Oral] Official implementation of "Accurate LoRA-Finetuning Quantization of LLMs via Information Retention" ☆55 · Updated 5 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆134 · Updated 2 months ago
- Awesome list for LLM pruning. ☆130 · Updated 3 weeks ago
- Code for QuaRot, an end-to-end 4-bit inference scheme for large language models. ☆256 · Updated last month
- 16-fold memory access reduction with nearly no loss ☆35 · Updated last month
- AFPQ code implementation ☆15 · Updated 10 months ago
- [NeurIPS 2023] Token-Scaled Logit Distillation for Ternary Weight Generative Language Models ☆15 · Updated 9 months ago
- List of papers on Vision Transformer quantization and hardware acceleration in recent AI conferences and journals. ☆47 · Updated 3 months ago
- Fast Hadamard transform in CUDA, with a PyTorch interface (a pure-PyTorch reference of the transform is sketched after this list). ☆87 · Updated 3 months ago
- The official implementation of the EMNLP 2023 paper "LLM-FP4" ☆156 · Updated 9 months ago
- KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache (the generic asymmetric-quantization recipe it builds on is sketched after this list). ☆213 · Updated 3 weeks ago
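Several of the rotation-based entries above (SpinQuant, QuaRot, and the fast-Hadamard-transform kernel) apply Hadamard transforms to spread activation outliers across channels before quantizing. As a point of reference, below is a minimal pure-PyTorch sketch of the orthonormal fast Walsh–Hadamard transform; the CUDA repository above provides an optimized kernel, while this sketch only illustrates the underlying O(n log n) butterfly, and the function name `fwht` is illustrative rather than part of any listed API.

```python
import torch

def fwht(x: torch.Tensor) -> torch.Tensor:
    """Orthonormal fast Walsh-Hadamard transform over the last dimension.

    Illustrative sketch: the last dimension must be a power of two. Runs the
    classic O(n log n) butterfly and returns (H @ x) / sqrt(n), so applying
    the transform twice recovers the input.
    """
    n = x.size(-1)
    assert n & (n - 1) == 0, "last dim must be a power of two"
    y = x.clone()
    h = 1
    while h < n:
        # Pair up adjacent blocks of length h and replace each pair
        # with its (sum, difference) halves.
        y = y.view(*x.shape[:-1], n // (2 * h), 2, h)
        y = torch.cat((y[..., 0, :] + y[..., 1, :],
                       y[..., 0, :] - y[..., 1, :]), dim=-1)
        h *= 2
    return y.reshape(x.shape) / n ** 0.5

# Sanity check: the normalized transform is its own inverse.
# x = torch.randn(4, 256); assert torch.allclose(fwht(fwht(x)), x, atol=1e-5)
```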
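Likewise, most of the KV-cache entries (KIVI, QAQ, GEAR, Atom) start from group-wise asymmetric round-to-nearest quantization with a per-group scale and zero point. The sketch below shows only that generic baseline; the names `quantize_asym`/`dequantize` and the default group size are our own illustrative choices, not the actual algorithm of any listed repo, which all add further machinery (per-channel grouping, residual correction, outlier handling) on top.

```python
import torch

def quantize_asym(x: torch.Tensor, bits: int = 2, group: int = 64):
    """Group-wise asymmetric quantization: q = round((x - zero) / scale).

    Splits the last dimension into groups of `group` elements and maps each
    group's [min, max] range onto the integer grid [0, 2**bits - 1].
    """
    assert x.size(-1) % group == 0, "last dim must be divisible by group size"
    qmax = 2 ** bits - 1
    g = x.reshape(*x.shape[:-1], -1, group)
    lo = g.min(dim=-1, keepdim=True).values          # per-group zero point
    hi = g.max(dim=-1, keepdim=True).values
    scale = (hi - lo).clamp(min=1e-8) / qmax         # per-group step size
    q = torch.clamp(torch.round((g - lo) / scale), 0, qmax).to(torch.uint8)
    return q, scale, lo

def dequantize(q, scale, zero, shape):
    """Reconstruct an approximation of the original tensor."""
    return (q.float() * scale + zero).reshape(shape)

# Example: fake-quantize a (batch, heads, seq_len, head_dim) key tensor.
# k = torch.randn(1, 8, 128, 64)
# q, s, z = quantize_asym(k, bits=2, group=64)
# k_hat = dequantize(q, s, z, k.shape)
```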