liyunqianggyn / Awesome-LLMs-Pruning
An all-in-one repository of awesome LLM pruning papers, integrating useful resources and insights.
☆96 · Updated 7 months ago
Alternatives and similar repositories for Awesome-LLMs-Pruning
Users interested in Awesome-LLMs-Pruning are comparing it to the libraries listed below.
- Awesome list for LLM pruning. ☆239 · Updated 7 months ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆56 · Updated last year
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆164 · Updated 9 months ago
- ☆46 · Updated last year
- Official repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆64 · Updated 3 months ago
- Awesome list for LLM quantization ☆251 · Updated last month
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆285 · Updated 2 months ago
- ☆43 · Updated 8 months ago
- Official PyTorch implementation of our paper accepted at ICLR 2024: Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆49 · Updated last year
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆43 · Updated last year
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" ☆74 · Updated last week
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** ☆194 · Updated 5 months ago
- Code repository of "Evaluating Quantized Large Language Models" ☆130 · Updated 10 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆74 · Updated 8 months ago
- Official implementation of the ICLR paper "Streamlining Redundant Layers to Compress Large Language Models" ☆30 · Updated 2 months ago
- Survey paper list on efficient LLMs and foundation models ☆252 · Updated 9 months ago
- ☆24 · Updated 2 months ago
- Official implementation of "Learning Harmonized Representations for Speculative Sampling" (HASS) ☆42 · Updated 4 months ago
- Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models ☆45 · Updated 8 months ago
- ☆56 · Updated 7 months ago
- Official implementation of LaCo (EMNLP 2024 Findings) ☆17 · Updated 9 months ago
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆311 · Updated 5 months ago
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low Rank Decomposition ☆13 · Updated 3 months ago
- ☆58 · Updated last year
- ☆18 · Updated 7 months ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆93 · Updated 3 months ago
- ☆223 · Updated last year
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆481 · Updated 2 weeks ago
- Official implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆36 · Updated 5 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆65 · Updated last year