SempraETY / Pruning-via-Merging
☆16 · Updated last month
Alternatives and similar repositories for Pruning-via-Merging:
Users interested in Pruning-via-Merging are comparing it to the repositories listed below.
- ☆27 · Updated 2 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆42 · Updated 2 months ago
- A block pruning framework for LLMs. ☆15 · Updated 6 months ago
- ☆49 · Updated last year
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆51 · Updated 3 weeks ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆70 · Updated 7 months ago
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆20 · Updated 6 months ago
- Official PyTorch implementation of the ICLR 2024 paper "Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs" ☆38 · Updated 9 months ago
- Code for the ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆15 · Updated 8 months ago
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration ☆30 · Updated 6 months ago
- Code for merging large language models ☆27 · Updated 5 months ago
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆31 · Updated 7 months ago
- Official implementation for LaCo (EMNLP 2024 Findings) ☆11 · Updated 3 months ago
- [ICML 2024 Oral] Official implementation of "Accurate LoRA-Finetuning Quantization of LLMs via Information Retention" ☆60 · Updated 9 months ago
- Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆33 · Updated 6 months ago
- Squeezed Attention: Accelerating Long Prompt LLM Inference ☆36 · Updated last month
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆56 · Updated 3 months ago
- SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆31 · Updated last month
- Official PyTorch implementation of the ICML 2024 paper "CaM: Cache Merging for Memory-efficient LLMs Inference" ☆29 · Updated 6 months ago
- ☆27 · Updated last year
- Awesome-Low-Rank-Adaptation ☆61 · Updated 3 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning (ICLR 2024) ☆61 · Updated 2 months ago
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ☆47 · Updated last month
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) ☆18 · Updated 11 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆55 · Updated 2 months ago
- Less is More: Task-aware Layer-wise Distillation for Language Model Compression (ICML 2023) ☆32 · Updated last year
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆40 · Updated last year
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆42 · Updated last month
- ☆13 · Updated 2 months ago