snudm-starlab / K-prune
Accurate Retraining-free Pruning for Pretrained Encoder-based Language Models (ICLR 2024)
☆14 · Updated 5 months ago
Alternatives and similar repositories for K-prune
Users interested in K-prune are comparing it to the libraries listed below.
- SynQ: Accurate Zero-shot Quantization by Synthesis-aware Fine-tuning (ICLR 2025) ☆28 · Updated 9 months ago
- Compressed LLMs for Efficient Text Generation [ICLR'24 Workshop] ☆88 · Updated last year
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model…" ☆67 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆121 · Updated 4 months ago
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆67 · Updated 7 months ago
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆45 · Updated last year
- ☆47 · Updated last year
- Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models ☆49 · Updated last year
- [ICML 2024] Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆37 · Updated 9 months ago
- ☆53 · Updated last year
- ☆28 · Updated 8 months ago
- Awesome LLM pruning papers: an all-in-one repository integrating useful resources and insights. ☆132 · Updated 3 months ago
- Official PyTorch implementation of our paper accepted at ICLR 2024: "Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM…" ☆50 · Updated last year
- Official code implementation for the ICLR 2025 paper "Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives" ☆47 · Updated 3 weeks ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Updated last year
- Structured pruning algorithm for Transformer models ☆31 · Updated last year
- [COLM 2025] DFRot: Achieving Outlier-Free and Massive Activation-Free for Rotated LLMs with Refined Rotation; Zhihu: https://zhuanlan.zhihu.c… ☆28 · Updated 8 months ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆63 · Updated last year
- [ICLR 2025] DGQ: Distribution-Aware Group Quantization for Text-to-Image Diffusion Models ☆17 · Updated 7 months ago
- AdaSkip: Adaptive Sublayer Skipping for Accelerating Long-Context LLM Inference ☆16 · Updated 9 months ago
- ☆10 · Updated last year
- PyTorch implementation of our ICML 2024 paper "CaM: Cache Merging for Memory-efficient LLMs Inference" ☆47 · Updated last year
- ☆38 · Updated last year
- [ICML 2025] SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models ☆45 · Updated last year
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆54 · Updated 11 months ago
- LLM Inference with Microscaling Format ☆32 · Updated last year
- ☆52 · Updated last year
- ☆61 · Updated 2 years ago
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆164 · Updated 2 weeks ago
- ☆23 · Updated last year