fmfi-compbio / admm-pruning
☆23 · Updated 7 months ago
Alternatives and similar repositories for admm-pruning:
Users who are interested in admm-pruning are comparing it to the libraries listed below.
- ☆24 · Updated 4 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆58 · Updated 4 months ago
- ☆27 · Updated 11 months ago
- ☆50 · Updated last year
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization ☆32 · Updated 5 months ago
- AFPQ code implementation ☆20 · Updated last year
- ACL 2023 ☆39 · Updated last year
- SQUEEZED ATTENTION: Accelerating Long Prompt LLM Inference ☆44 · Updated 3 months ago
- ☆35 · Updated 4 months ago
- ☆18 · Updated 4 months ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models" ☆26 · Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers ☆46 · Updated last year
- Low-Rank Llama Custom Training ☆22 · Updated 11 months ago
- BESA is a differentiable weight pruning technique for large language models ☆14 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- ☆15 · Updated last year
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models ☆48 · Updated 2 years ago
- Official implementation of the EMNLP 2023 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling ☆47 · Updated last year
- [ICLR 2025] TidalDecode: A Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆29 · Updated 2 weeks ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retention ☆63 · Updated 11 months ago
- LLM Inference with Microscaling Format ☆20 · Updated 4 months ago
- Official code for Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM ☆14 · Updated last year
- Official implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆35 · Updated last month
- Code for ICML 2021 submission ☆35 · Updated 3 years ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆30 · Updated 9 months ago
- Flexible simulator for mixed-precision and format simulation of LLMs and vision transformers ☆47 · Updated last year
- ☆64 · Updated last month
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆27 · Updated 11 months ago
- Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆80 · Updated 3 months ago