snudm-starlab / K-prune
Accurate Retraining-free Pruning for Pretrained Encoder-based Language Models (ICLR 2024)
☆13Updated last month
Alternatives and similar repositories for K-prune
Users interested in K-prune are comparing it to the repositories listed below
- SynQ: Accurate Zero-shot Quantization by Synthesis-aware Fine-tuning (ICLR 2025)☆28Updated 5 months ago
- Compressed LLMs for Efficient Text Generation [ICLR'24 Workshop]☆84Updated 10 months ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models"☆63Updated last year
- Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks☆36Updated 5 months ago
- Official PyTorch implementation of the ICLR 2024 paper "Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs"☆49Updated last year
- ☆41Updated 8 months ago
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024)☆64Updated 3 months ago
- Structured pruning algorithm for pruning Transformers☆31Updated last year
- [ICML 2024 Oral] Official implementation of "Accurate LoRA-Finetuning Quantization of LLMs via Information Retention"☆65Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs☆111Updated 2 weeks ago
- Official code implementation of the ICLR 2025 paper "Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives"☆36Updated 3 months ago
- Official repository of "Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions" (ICLR 2024 Spotlight)☆20Updated last year
- Awesome LLM pruning papers: an all-in-one repository integrating useful resources and insights☆96Updated 7 months ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models☆56Updated last year
- SensiMix: Sensitivity-Aware 8-bit Index & 1-bit Value Mixed Precision Quantization for BERT Compression (PLOS ONE)☆34Updated 3 years ago
- Code for "RSQ: Learning from Important Tokens Leads to Better Quantized LLMs"☆18Updated last month
- ☆27Updated 4 months ago
- Unofficial implementations of block/layer-wise pruning methods for LLMs.☆72Updated last year
- ☆23Updated 3 months ago
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models"☆46Updated 4 months ago
- [ICML 2024] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models☆21Updated last year
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference☆43Updated last year
- [ICML 2025] SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models☆34Updated 11 months ago
- Official Implementation of FastKV: KV Cache Compression for Fast Long-Context Processing with Token-Selective Propagation☆21Updated last month
- [ECCV24] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization☆14Updated 7 months ago
- Falcon: Lightweight and Accurate Convolution Based on Depthwise Separable Convolution (KAIS)☆44Updated 11 months ago
- ☆24Updated 2 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models☆74Updated 8 months ago
- [ICLR'25] Official implementation of "KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models"☆18Updated 6 months ago
- Kinetics: Rethinking Test-Time Scaling Laws☆65Updated this week