biomedical-cybernetics / Relative-importance-and-activation-pruning
☆56 · Updated Jun 10, 2024
Alternatives and similar repositories for Relative-importance-and-activation-pruning
Users interested in Relative-importance-and-activation-pruning are comparing it to the libraries listed below.
- BESA is a differentiable weight pruning technique for large language models. (☆17, updated Mar 4, 2024)
- An implementation of the DISP-LLM method from the NeurIPS 2024 paper "Dimension-Independent Structural Pruning for Large Language Models". (☆23, updated Aug 6, 2025)
- (☆30, updated Jul 22, 2024)
- (☆14, updated Feb 2, 2026)
- (☆63, updated Oct 17, 2023)
- Official PyTorch implementation of the ICLR 2024 paper "Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM…" (☆50, updated Apr 9, 2024)
- [ICML 2024] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs (☆98, updated Nov 25, 2024)
- Awesome list for LLM pruning. (☆283, updated Oct 11, 2025)
- Official repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) (☆67, updated Mar 27, 2025)
- (☆23, updated Nov 26, 2024)
- A simple and effective LLM pruning approach. (☆848, updated Aug 9, 2024)
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models (☆69, updated Jan 6, 2024)
- This repo contains the code for studying the interplay between quantization and sparsity methods. (☆26, updated Feb 26, 2025)
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic (☆32, updated Sep 21, 2025)
- (☆143, updated Jul 21, 2024)
- [ICLR 2024] Jaiswal, A., Gan, Z., Du, X., Zhang, B., Wang, Z., & Yang, Y. Compressing LLMs: The Truth Is Rarely Pure and Never Simple. (☆27, updated Apr 21, 2025)
- Is gradient information useful for pruning LLMs? (☆47, updated Aug 23, 2025)
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" (☆82, updated Jul 7, 2025)
- Official implementation for "Pruning Large Language Models with Semi-Structural Adaptive Sparse Training" (AAAI 2025) (☆18, updated Jul 1, 2025)
- [ICLR 2025] STBLLM: Breaking the 1-Bit Barrier with Structured Binary LLMs (☆18, updated Jun 3, 2025)
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low Rank Decomposition (☆17, updated Apr 16, 2025)
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" (☆57, updated Sep 7, 2023)
- Code implementation for "NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models" (EMNLP … (☆17, updated Oct 17, 2023)
- Structured Neuron Level Pruning to compress Transformer-based models [ECCV'24] (☆17, updated Aug 7, 2024)
- (☆63, updated Dec 15, 2024)
- Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot". (☆871, updated Aug 20, 2024)
- (☆35, updated May 24, 2024)
- MATLAB code for performing the coalescent embedding. (☆13, updated May 23, 2023)
- [NeurIPS 2024] Search for Efficient LLMs (☆16, updated Jan 16, 2025)
- Repository for sparse fine-tuning of LLMs via a modified version of the MosaicML llmfoundry (☆42, updated Jan 15, 2024)
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… (☆16, updated Apr 21, 2025)
- [DAC 2024] EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive La… (☆82, updated Jun 30, 2024)
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. (☆180, updated Oct 3, 2024)
- This repository contains code for the MicroAdam paper. (☆22, updated Dec 14, 2024)
- GRAIN: Gradient-based Intra-attention Pruning on Pre-trained Language Models (☆19, updated Jul 12, 2023)
- Activation-aware Singular Value Decomposition for Compressing Large Language Models (☆89, updated Oct 22, 2024)
- Low-Rank Llama Custom Training (☆23, updated Mar 27, 2024)
- Boosting 4-bit inference kernels with 2:4 Sparsity (☆93, updated Sep 4, 2024)
- Revisiting Parameter Sharing for Automatic Neural Channel Number Search, NeurIPS 2020 (☆22, updated Nov 15, 2020)