biomedical-cybernetics / Relative-importance-and-activation-pruning
☆40 · Updated 9 months ago
Alternatives and similar repositories for Relative-importance-and-activation-pruning:
Users interested in Relative-importance-and-activation-pruning are comparing it to the libraries listed below. (A minimal sketch of the activation-aware pruning idea shared by several of these repos follows the list.)
- ☆50 · Updated last year
- Official Pytorch Implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity"☆61Updated 9 months ago
- Official PyTorch implementation of our ICLR 2024 paper "Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs" ☆46 · Updated last year
- All-in-one repository of awesome LLM pruning papers, integrating useful resources and insights. ☆79 · Updated 4 months ago
- ☆50 · Updated 10 months ago
- Official repo for "SparseLLM: Global Pruning of LLMs" (NeurIPS 2024) ☆53 · Updated last week
- SQUEEZED ATTENTION: Accelerating Long Prompt LLM Inference ☆46 · Updated 4 months ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆46 · Updated last year
- Awesome list for LLM pruning. ☆218 · Updated 3 months ago
- Code repository for "Evaluating Quantized Large Language Models" ☆121 · Updated 7 months ago
- [ICML 2024 Oral] Official implementation of "Accurate LoRA-Finetuning Quantization of LLMs via Information Retention" ☆63 · Updated 11 months ago
- ☆21 · Updated 4 months ago
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆96 · Updated 9 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆61 · Updated 5 months ago
- ☆18 · Updated last year
- [NeurIPS 2024 Oral 🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆157 · Updated 6 months ago
- ☆24 · Updated 8 months ago
- Official implementation for Yuan, Liu, Zhong et al., "KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches" ☆69 · Updated last month
- PyTorch implementation of our ICML 2024 paper "CaM: Cache Merging for Memory-efficient LLMs Inference" ☆35 · Updated 9 months ago
- Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models ☆42 · Updated 5 months ago
- Source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" ☆35 · Updated 7 months ago
- [ICML 2024] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models ☆19 · Updated 10 months ago
- ☆17 · Updated 4 months ago
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization ☆33 · Updated 6 months ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models" ☆57 · Updated last year
- Official PyTorch implementation of the NeurIPS 2022 (spotlight) paper "Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models" ☆48 · Updated 2 years ago
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024)☆19Updated last year
- ☆48 · Updated 3 months ago
- ☆47 · Updated 4 months ago
- ☆43 · Updated 5 months ago
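
The subject repo (RIA) and several alternatives above (OWL, Dynamic Sparse No Training, FLAP) build on a common idea: score each weight by its magnitude scaled by the norm of the activations flowing through its input channel, estimated on a small calibration batch, then zero the lowest-scoring fraction. The sketch below is a minimal, hypothetical PyTorch illustration of that Wanda-style scoring, not the actual API of any repo listed here; `score_weights`, `prune_linear_`, and the calibration setup are invented for illustration.

```python
# Illustrative sketch of activation-aware magnitude pruning (Wanda-style).
# All names here are hypothetical, not any listed repo's API.
import torch

def score_weights(weight: torch.Tensor, act_norm: torch.Tensor) -> torch.Tensor:
    # weight: (out_features, in_features); act_norm: (in_features,)
    # Importance of each weight: |W_ij| * ||X_j||_2
    return weight.abs() * act_norm.unsqueeze(0)

@torch.no_grad()
def prune_linear_(layer: torch.nn.Linear, calib_acts: torch.Tensor, sparsity: float) -> None:
    # calib_acts: (n_samples, in_features) activations from a calibration batch
    act_norm = calib_acts.norm(p=2, dim=0)           # per-input-channel L2 norm
    scores = score_weights(layer.weight, act_norm)   # (out_features, in_features)
    k = int(layer.in_features * sparsity)            # weights to drop per output row
    if k == 0:
        return
    drop = scores.topk(k, dim=1, largest=False).indices  # lowest-scoring per row
    layer.weight.scatter_(1, drop, 0.0)              # zero them in place

# Usage: prune a linear layer to 50% unstructured sparsity with random calibration data.
layer = torch.nn.Linear(64, 32)
prune_linear_(layer, torch.randn(128, 64), sparsity=0.5)
assert (layer.weight == 0).float().mean().item() >= 0.5
```

The listed repos differ mainly in how they refine this score: roughly, RIA normalizes magnitudes by their row and column sums ("relative importance"), while OWL allocates a different sparsity ratio per layer based on outlier distribution. The calibrate, score, mask skeleton is shared.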