biomedical-cybernetics / Relative-importance-and-activation-pruning
☆46 · Updated last year
Alternatives and similar repositories for Relative-importance-and-activation-pruning
Users interested in Relative-importance-and-activation-pruning are comparing it to the repositories listed below.
- Official PyTorch implementation of our ICLR 2024 paper "Dynamic Sparse No Training: Training-free Fine-tuning for Sparse LLMs" ☆49 · Updated last year
- Official repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆64 · Updated 3 months ago
- ☆58 · Updated last year
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆56 · Updated last year
- An all-in-one repository of awesome LLM pruning papers, integrating useful resources and insights ☆96 · Updated 7 months ago
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs ☆164 · Updated 9 months ago
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆43 · Updated last year
- Official Pytorch Implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity"☆74Updated last week
- [ICLR 2025] The official pytorch implement of "Dynamic Low-Rank Sparse Adaptation for Large Language Models".☆19Updated 4 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆74 · Updated 8 months ago
- [ICML 2024 Oral] Official implementation of "Accurate LoRA-Finetuning Quantization of LLMs via Information Retention" ☆65 · Updated last year
- Awesome list for LLM pruning ☆239 · Updated 7 months ago
- ☆10 · Updated last year
- Official code for the paper "Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark" ☆20 · Updated 2 weeks ago
- ☆20 · Updated last year
- ☆24 · Updated 2 months ago
- Code repository for "Evaluating Quantized Large Language Models" ☆130 · Updated 10 months ago
- Official implementation of LaCo (EMNLP 2024 Findings) ☆17 · Updated 9 months ago
- Official implementation of the ICLR 2025 paper "Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives" ☆34 · Updated 3 months ago
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆190 · Updated 2 years ago
- ☆43 · Updated 8 months ago
- ☆56 · Updated 7 months ago
- PyTorch implementation of our ICML 2024 paper "CaM: Cache Merging for Memory-efficient LLMs Inference" ☆41 · Updated last year
- ☆18 · Updated 7 months ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning" ☆121 · Updated 2 years ago
- ☆28 · Updated 11 months ago
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆285 · Updated 2 months ago
- Official implementation of the ICLR paper "Streamlining Redundant Layers to Compress Large Language Models" ☆30 · Updated 2 months ago
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆142 · Updated last month
- Source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" ☆37 · Updated 11 months ago