Unofficial implementations of block/layer-wise pruning methods for LLMs.
☆78 · updated Apr 29, 2024
Alternatives and similar repositories for ShortGPT
Users interested in ShortGPT are comparing it to the repositories listed below.
- [ICML 2024] Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks · ☆39 · updated Feb 4, 2025
- ☆23 · updated Nov 26, 2024
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) · ☆67 · updated Mar 27, 2025
- For releasing code related to compression methods for transformers, accompanying our publications · ☆455 · updated Jan 16, 2025
- Official Implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration · ☆30 · updated Nov 22, 2025
- AdaSkip: Adaptive Sublayer Skipping for Accelerating Long-Context LLM Inference · ☆20 · updated Jan 24, 2025
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models" · ☆24 · updated Mar 16, 2025
- Compressed LLMs for Efficient Text Generation [ICLR'24 Workshop] · ☆90 · updated Sep 13, 2024
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" · ☆36 · updated Jun 7, 2024
- A simple and effective LLM pruning approach · ☆856 · updated Aug 9, 2024
- Awesome list for LLM pruning · ☆290 · updated Oct 11, 2025
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… · ☆1,111 · updated Oct 7, 2024
- Official PyTorch Implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" · ☆81 · updated Jul 7, 2025
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models · ☆70 · updated Jan 6, 2024
- Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" · ☆877 · updated Aug 20, 2024
- QLoRA: Efficient Finetuning of Quantized LLMs · ☆11 · updated Jul 22, 2023
- A model-agnostic function to directly remove specified layers from an LLM · ☆10 · updated May 23, 2024
- Implementation of "Decoding-time Realignment of Language Models", ICML 2024 · ☆21 · updated Jun 17, 2024
- Prune transformer layers · ☆74 · updated May 30, 2024
- ☆15 · updated Sep 24, 2023
- Modeling code for a BitNet b1.58 Llama-style model · ☆25 · updated Apr 30, 2024
- The official implementation of the paper "DaMo: Data Mixing Optimizer in Fine-tuning Multimodal LLMs for Mobile Phone Agents" · ☆29 · updated Oct 23, 2025
- Structural Pruning for LLaMA · ☆54 · updated May 20, 2023
- [NeurIPS '25] Multi-Token Prediction Needs Registers · ☆28 · updated Dec 14, 2025
- The code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… · ☆28 · updated Jul 15, 2025
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… · ☆67 · updated Apr 15, 2024
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning · ☆642 · updated Mar 4, 2024
- ThinK: Thinner Key Cache by Query-Driven Pruning · ☆27 · updated Feb 11, 2025
- Towards Meta-Pruning via Optimal Transport, ICLR 2024 (Spotlight) · ☆18 · updated Dec 5, 2024
- ☆32 · updated Nov 11, 2024
- An implementation of the DISP-LLM method from the NeurIPS 2024 paper "Dimension-Independent Structural Pruning for Large Language Models" · ☆25 · updated Aug 6, 2025
- ☆162 · updated Feb 15, 2025
- TARS: MinMax Token-Adaptive Preference Strategy for Hallucination Reduction in MLLMs · ☆24 · updated Sep 21, 2025
- Official implementation of the EMNLP23 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… · ☆51 · updated Oct 21, 2023
- Official implementation for LaCo (EMNLP 2024 Findings) · ☆21 · updated Oct 3, 2024
- ☆23 · updated Mar 7, 2025
- Official implementation of the paper "A deeper look at depth pruning of LLMs" · ☆15 · updated Jul 24, 2024
- ☆14 · updated Jun 25, 2025
- Official implementation of the ICLR 2024 paper AffineQuant · ☆28 · updated Mar 30, 2024