[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baichuan, TinyLlama, etc.
☆1,106 · Updated Oct 7, 2024
Alternatives and similar repositories for LLM-Pruner
Users interested in LLM-Pruner are comparing it to the libraries listed below.
- A simple and effective LLM pruning approach. ☆849 · Updated Aug 9, 2024
- Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot". ☆870 · Updated Aug 20, 2024
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆642 · Updated Mar 4, 2024
- A curated list for Efficient Large Language Models ☆1,959 · Updated Jun 17, 2025
- [CVPR 2023] DepGraph: Towards Any Structural Pruning; LLMs, Vision Foundation Models, etc. (see the dependency-graph pruning sketch after this list) ☆3,262 · Updated Sep 7, 2025
- Code for compression methods for transformers, accompanying our publications ☆454 · Updated Jan 16, 2025
- Awesome list for LLM pruning. ☆288 · Updated Oct 11, 2025
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆70 · Updated Jan 6, 2024
- Awesome LLM compression research papers and tools. ☆1,786 · Updated Feb 23, 2026
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,612 · Updated Jul 12, 2024
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,443 · Updated Jul 17, 2025
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,261 · Updated Mar 27, 2024
- Structural Pruning for LLaMA ☆54 · Updated May 20, 2023
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,710 · Updated Jun 25, 2024
- [ICLR 2024 Spotlight] OmniQuant is a simple and powerful quantization technique for LLMs. ☆890 · Updated Nov 26, 2025
- A family of compressed models obtained via pruning and knowledge distillation ☆368 · Updated Nov 6, 2025
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆67 · Updated Mar 27, 2025
- [NeurIPS 24 Spotlight] MaskLLM: Learnable Semi-structured Sparsity for Large Language Models ☆187 · Updated Jan 1, 2025
- A framework for few-shot evaluation of language models. ☆11,540 · Updated this week
- ☆352 · Updated Apr 2, 2024
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆30 · Updated Mar 28, 2024
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,002 · Updated Dec 6, 2024
- All-in-one repository of LLM pruning papers, integrating useful resources and insights. ☆152 · Updated this week
- Official PyTorch Implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" ☆81 · Updated Jul 7, 2025
- [NeurIPS 2023] Structural Pruning for Diffusion Models ☆217 · Updated Jul 8, 2024
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25). ☆2,201 · Updated Feb 20, 2026
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. ☆503 · Updated Aug 1, 2024
- Unofficial implementations of block/layer-wise pruning methods for LLMs. ☆77 · Updated Apr 29, 2024
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆712 · Updated Aug 13, 2024
- A curated list of neural network pruning resources. ☆2,492 · Updated Apr 4, 2024
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆321 · Updated Mar 4, 2025
- [CVPR 2024] DeepCache: Accelerating Diffusion Models for Free ☆957 · Updated Jun 27, 2024
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆261 · Updated Apr 23, 2024
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408 ☆198 · Updated May 9, 2023
- ☆63 · Updated Dec 15, 2024
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,027 · Updated Apr 11, 2025
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs, and video generative models. ☆680 · Updated Nov 19, 2025
- Compressed LLMs for Efficient Text Generation [ICLR'24 Workshop] ☆90 · Updated Sep 13, 2024
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆817 · Updated Mar 6, 2025
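
Several of the structural-pruning entries above (LLM-Pruner, DepGraph/Torch-Pruning) rest on the same mechanism: parameters coupled through residual connections, normalization layers, or shared dimensions must be pruned together, so a dependency graph is traced first and whole coupled groups are removed at once. Below is a minimal sketch of that workflow using the Torch-Pruning library (the DepGraph repo listed above), shown on a small torchvision model rather than an LLM for brevity; the names used (`DependencyGraph`, `get_pruning_group`, `prune_conv_out_channels`) follow the Torch-Pruning README at the time of writing and may differ across versions.

```python
# Minimal dependency-graph pruning sketch with Torch-Pruning (DepGraph).
# API names follow the Torch-Pruning README and may vary between versions.
import torch
import torch_pruning as tp
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
example_inputs = torch.randn(1, 3, 224, 224)

# 1. Trace the model once to record which layers are coupled
#    (conv -> batchnorm -> downstream conv inputs, residual adds, ...).
DG = tp.DependencyGraph().build_dependency(model, example_inputs=example_inputs)

# 2. Removing output channels [2, 6, 9] of the first conv triggers a
#    whole group: the matching BatchNorm channels and the input channels
#    of every layer consuming conv1's output must shrink with it.
group = DG.get_pruning_group(model.conv1, tp.prune_conv_out_channels, idxs=[2, 6, 9])

# 3. Prune all coupled layers in the group at once.
if DG.check_pruning_group(group):  # guards against pruning a layer to zero channels
    group.prune()

print(model.conv1)  # Conv2d now reports 61 output channels instead of 64
```

The same grouped-removal idea is what LLM-Pruner applies to attention heads and MLP channels inside transformer blocks, followed by a short LoRA-based recovery stage to restore accuracy.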