[ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs
☆98 · updated Nov 25, 2024
Alternatives and similar repositories for Pruner-Zero
Users interested in Pruner-Zero are comparing it to the libraries listed below.
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression · ☆72 · updated Mar 25, 2025
- Is gradient information useful for pruning LLMs? · ☆47 · updated Aug 23, 2025
- Awesome list for LLM pruning. · ☆288 · updated Oct 11, 2025
- [NeurIPS 2024] Search for Efficient LLMs · ☆16 · updated Jan 16, 2025
- Implementation of PGONAS for CVPR22W and RD-NAS for ICASSP23 · ☆23 · updated Apr 25, 2023
- ☆57 · updated Jun 10, 2024
- A block pruning framework for LLMs. · ☆28 · updated May 17, 2025
- ☆12 · updated Oct 9, 2023
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models · ☆70 · updated Jan 6, 2024
- ☆23 · updated Nov 26, 2024
- ☆12 · updated Sep 1, 2023
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better · ☆16 · updated Feb 15, 2025
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs · ☆23 · updated Nov 11, 2025
- [ICLR25] STBLLM: Breaking the 1-Bit Barrier with Structured Binary LLMs · ☆18 · updated Jun 3, 2025
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs · ☆60 · updated Mar 25, 2025
- AFPQ code implementation · ☆23 · updated Nov 6, 2023
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" · ☆81 · updated Jul 7, 2025
- ☆23 · updated Jan 21, 2024
- ☆63 · updated Dec 15, 2024
- BESA is a differentiable weight pruning technique for large language models. · ☆17 · updated Mar 4, 2024
- Auto-Prox-AAAI24 · ☆14 · updated Apr 30, 2024
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference · ☆47 · updated Jun 4, 2024
- [ICLR 2024 Spotlight] This is the official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di… · ☆68 · updated Jun 4, 2024
- A simple and effective LLM pruning approach. · ☆849 · updated Aug 9, 2024
- This repository provides an improved LLamaGen Model, fine-tuned on 500,000 high-quality images, each accompanied by over 300 token prompt… · ☆30 · updated Oct 21, 2024
- ☆29 · updated Nov 29, 2023
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization · ☆172 · updated Nov 26, 2025
- [ACL 2024] A novel QAT with Self-Distillation framework to enhance ultra low-bit LLMs. · ☆134 · updated May 16, 2024
- [ICCV 2023] EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization · ☆28 · updated Dec 6, 2023
- The official implementation of the DAC 2024 paper GQA-LUT · ☆20 · updated Dec 20, 2024
- SLiM: One-shot Quantized Sparse Plus Low-rank Approximation of LLMs (ICML 2025) · ☆32 · updated Nov 28, 2025
- [ICRA 2025] Robust Self-Reconfiguration for Fault-Tolerant Control of Modular Aerial Robot Systems · ☆25 · updated Jun 9, 2025
- ☆20 · updated Feb 26, 2021
- Benchmarking Attention Mechanism in Vision Transformers. · ☆20 · updated Oct 10, 2022
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer · ☆30 · updated Dec 6, 2023
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models" · ☆23 · updated Mar 16, 2025
- [CoLM'25] The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" · ☆155 · updated Jan 14, 2026
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization · ☆38 · updated Sep 24, 2024
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… · ☆156 · updated Apr 7, 2025