RUCKBReasoning / LLM-Streamline
Official implementation of the ICLR paper "Streamlining Redundant Layers to Compress Large Language Models"
☆29 · Updated last month
Alternatives and similar repositories for LLM-Streamline
Users interested in LLM-Streamline are comparing it to the libraries listed below.
- Official implementation of LaCo (EMNLP 2024 Findings) ☆17 · Updated 8 months ago
- A block pruning framework for LLMs. ☆23 · Updated last month
- ☆18 · Updated 7 months ago
- [ICLR 2025] The official PyTorch implementation of "Dynamic Low-Rank Sparse Adaptation for Large Language Models". ☆19 · Updated 3 months ago
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low Rank Decomposition ☆13 · Updated 2 months ago
- Official PyTorch implementation of our ICLR 2024 paper, Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆47 · Updated last year
- Official repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆61 · Updated 2 months ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆54 · Updated last year
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆156 · Updated 3 months ago
- All-in-one repository of awesome LLM pruning papers, integrating useful resources and insights. ☆93 · Updated 6 months ago
- ☆46 · Updated last year
- [arXiv 2025] Efficient Reasoning Models: A Survey ☆184 · Updated this week
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆161 · Updated 8 months ago
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆39 · Updated last year
- Chain of Thought (CoT) is so hot! So long! We need a short reasoning process! ☆54 · Updated 2 months ago
- Official implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆37 · Updated 4 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆110 · Updated 4 months ago
- The code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… ☆18 · Updated last month
- ☆18 · Updated 3 months ago
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆52 · Updated 10 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆71 · Updated 8 months ago
- ☆24 · Updated last month
- Awesome-Low-Rank-Adaptation ☆104 · Updated 8 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆73 · Updated 4 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆73 · Updated this week
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [arXiv '25] ☆39 · Updated last month
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆65 · Updated last year
- The official implementation of Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference ☆79 · Updated 5 months ago
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) ☆19 · Updated last year
- ☆26 · Updated last year