yangyifei729 / LaCo
Official implementation for LaCo (EMNLP 2024 Findings)
☆16 · Updated 7 months ago
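LaCo (Layer Collapse) prunes an LLM by folding a run of consecutive transformer layers into a single earlier layer via accumulated parameter differences, θ*_l = θ_l + Σₖ (θ_{l+k} − θ_l). Below is a minimal sketch of that merge step under those assumptions; the function name `rdsc_merge` and the plain `nn.ModuleList` interface are illustrative, not this repo's actual API.

```python
import copy

import torch
import torch.nn as nn


def rdsc_merge(layers: nn.ModuleList, l: int, c: int) -> nn.ModuleList:
    """Collapse layers l+1 .. l+c into layer l, then drop them.

    Implements theta*_l = theta_l + sum_{k=1..c} (theta_{l+k} - theta_l),
    the difference-accumulating merge that layer collapse is based on.
    """
    ref = copy.deepcopy(layers[l])  # frozen snapshot of the original theta_l
    with torch.no_grad():
        for k in range(1, c + 1):
            for p_l, p_ref, p_k in zip(
                layers[l].parameters(), ref.parameters(), layers[l + k].parameters()
            ):
                p_l.add_(p_k - p_ref)  # accumulate (theta_{l+k} - theta_l)
    del layers[l + 1 : l + c + 1]  # remove the collapsed layers from the stack
    return layers


# Toy usage: collapse layers 3..4 into layer 2, shrinking 6 layers to 4.
layers = nn.ModuleList(nn.Linear(8, 8) for _ in range(6))
layers = rdsc_merge(layers, l=2, c=2)
```

In the paper, which layers to collapse and how many are chosen adaptively, by checking that the merged model's outputs stay close to the original's on calibration data; the sketch above covers only the parameter arithmetic.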
Alternatives and similar repositories for LaCo:
Users interested in LaCo are comparing it to the repositories listed below.
- ☆18 · Updated 5 months ago
- A block pruning framework for LLMs. ☆22 · Updated 10 months ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆49 · Updated last year
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆39 · Updated 11 months ago
- [ICLR 2025] The official PyTorch implementation of "Dynamic Low-Rank Sparse Adaptation for Large Language Models". ☆16 · Updated last month
- Official implementation of the ICLR paper "Streamlining Redundant Layers to Compress Large Language Models" ☆25 · Updated last week
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆58 · Updated last month
- ☆51 · Updated last year
- ☆50 · Updated 4 months ago
- Official PyTorch implementation of our paper accepted at ICLR 2024: Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆47 · Updated last year
- Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆36 · Updated 3 months ago
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) ☆19 · Updated last year
- ☆40 · Updated 5 months ago
- ☆15 · Updated 6 months ago
- Unofficial implementations of block/layer-wise pruning methods for LLMs. ☆67 · Updated last year
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆65 · Updated last year
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆47 · Updated 2 months ago
- Code for merging large language models ☆29 · Updated 9 months ago
- Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models ☆42 · Updated 6 months ago
- ☆13 · Updated 6 months ago
- ☆23 · Updated 11 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆136 · Updated last month
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [arXiv '25] ☆33 · Updated 3 weeks ago
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low Rank Decomposition ☆12 · Updated 3 weeks ago
- Awesome LLM pruning papers: an all-in-one repository integrating useful resources and insights. ☆85 · Updated 5 months ago
- ☆49 · Updated 11 months ago
- [NeurIPS 2024] AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models ☆23 · Updated last month
- Official Pytorch Implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity"☆65Updated 10 months ago
- Official Pytorch Implementation of "OwLore: Outlier-weighed Layerwise Sampled Low-Rank Projection for Memory-Efficient LLM Fine-tuning" b…☆31Updated 11 months ago
- The Official Implementation of Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference☆72Updated 3 months ago