yangyifei729 / LaCo
Official implementation for LaCo (EMNLP 2024 Findings)
☆15 · Updated 5 months ago
Alternatives and similar repositories for LaCo:
Users interested in LaCo are comparing it to the libraries listed below.
- A block pruning framework for LLMs. ☆19 · Updated 8 months ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆43 · Updated last year
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆33 · Updated 9 months ago
- Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆35 · Updated last month
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆62 · Updated 10 months ago
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) ☆18 · Updated last year
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆51 · Updated last month
- Official PyTorch Implementation of Our Paper Accepted at ICLR 2024 -- Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆45 · Updated 11 months ago
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆144 · Updated 5 months ago
- Unofficial implementations of block/layer-wise pruning methods for LLMs. ☆64 · Updated 10 months ago
- Is gradient information useful for pruning LLMs? ☆43 · Updated 10 months ago
- Awesome-Low-Rank-Adaptation ☆83 · Updated 4 months ago
- [ICLR 2024] Jaiswal, A., Gan, Z., Du, X., Zhang, B., Wang, Z., & Yang, Y. Compressing LLMs: The Truth Is Rarely Pure and Never Simple. ☆21 · Updated 11 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆74 · Updated 9 months ago
- An all-in-one repository of awesome LLM pruning papers, integrating useful resources and insights. ☆75 · Updated 3 months ago
- Official Pytorch Implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity"☆58Updated 8 months ago
- The Official Implementation of Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference ☆67 · Updated last month
- Official PyTorch implementation of IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact ☆42 · Updated 9 months ago
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆30 · Updated this week
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆40 · Updated 2 weeks ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆49 · Updated last week
- Awesome list for LLM pruning. ☆209 · Updated 2 months ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆104 · Updated 4 months ago