OpenGVLab / LLMPrune-BESA
BESA (Blockwise parameter-Efficient Sparsity Allocation) is a differentiable weight pruning technique for large language models, introduced in the ICLR 2024 paper "BESA: Pruning Large Language Models with Blockwise Parameter-Efficient Sparsity Allocation".
☆17 · Updated last year
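To give a flavor of what "differentiable weight pruning" means here, below is a minimal, self-contained PyTorch sketch of mask-based differentiable pruning. It is not the official BESA implementation (BESA allocates sparsity blockwise and parameter-efficiently; this sketch learns a per-weight soft mask instead), and the sigmoid relaxation, temperature, and density penalty are all illustrative assumptions.

```python
# Illustrative sketch of differentiable mask-based weight pruning.
# NOT the official BESA code; per-weight sigmoid masks and the density
# penalty below are simplifying assumptions for exposition.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentiablePrunedLinear(nn.Module):
    """Linear layer whose weights are gated by a learnable soft mask."""

    def __init__(self, in_features, out_features, temperature=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Learnable logits per weight; sigmoid(logits / T) yields a soft mask in (0, 1),
        # so gradients flow through the pruning decision.
        self.mask_logits = nn.Parameter(torch.zeros(out_features, in_features))
        self.temperature = temperature

    def soft_mask(self):
        return torch.sigmoid(self.mask_logits / self.temperature)

    def forward(self, x):
        return F.linear(x, self.weight * self.soft_mask(), self.bias)

    def sparsity_loss(self, target_sparsity=0.5):
        # Penalize deviation of the mean mask value from the desired density.
        density = self.soft_mask().mean()
        return (density - (1.0 - target_sparsity)) ** 2


if __name__ == "__main__":
    layer = DifferentiablePrunedLinear(64, 64)
    opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
    x = torch.randn(128, 64)
    target = torch.randn(128, 64)
    for step in range(100):
        opt.zero_grad()
        # Task loss plus a sparsity regularizer, optimized jointly.
        loss = F.mse_loss(layer(x), target) + 0.1 * layer.sparsity_loss(0.5)
        loss.backward()
        opt.step()
    # After training, threshold the soft mask to obtain a hard sparse weight.
    hard_mask = (layer.soft_mask() > 0.5).float()
    print(f"achieved sparsity: {1.0 - hard_mask.mean().item():.2f}")
```

The key idea the sketch shares with BESA-style methods is that the pruning mask is learned by gradient descent alongside a sparsity objective, rather than fixed by a one-shot magnitude heuristic.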
Alternatives and similar repositories for LLMPrune-BESA
Users interested in LLMPrune-BESA are comparing it to the repositories listed below.
- [NeurIPS 2024] Search for Efficient LLMs · ☆15 · Updated 11 months ago
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… · ☆41 · Updated 3 months ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… · ☆30 · Updated last year
- Official PyTorch implementation of our paper accepted at ICLR 2024: Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… · ☆50 · Updated last year
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models" · ☆23 · Updated 9 months ago
- Is gradient information useful for pruning LLMs? · ☆47 · Updated 3 months ago
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) · ☆20 · Updated last year
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better · ☆16 · Updated 10 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… · ☆67 · Updated last year
- [NeurIPS 2022] "Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropagation", Ziyu Jiang*, Xuxi Chen*, Xueqin Huan… · ☆20 · Updated 2 years ago
- [ICML 2024] Pruner-Zero: Evolving Symbolic Pruning Metric from Scratch for LLMs · ☆98 · Updated last year
- [ICML 2024] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models · ☆21 · Updated last year
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" · ☆35 · Updated last year
- [CVPR 2022] Automated Progressive Learning for Efficient Training of Vision Transformers · ☆25 · Updated 9 months ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models · ☆65 · Updated last year
- ☆62 · Updated 2 years ago
- Implementation of the paper "Training Free Pretrained Model Merging" (CVPR 2024) · ☆32 · Updated last year
- [ICML 2024 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference · ☆46 · Updated last year
- Official implementation of the paper "A deeper look at depth pruning of LLMs" · ☆15 · Updated last year
- [ICLR 2023] Trainability Preserving Neural Pruning (PyTorch) · ☆34 · Updated 2 years ago
- ☆24 · Updated last year
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization · ☆38 · Updated last year
- ☆28 · Updated 2 years ago
- Official code for "Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM" · ☆14 · Updated last year
- Benchmarking Attention Mechanisms in Vision Transformers · ☆19 · Updated 3 years ago
- Official repo for "SparseLLM: Global Pruning of LLMs" (NeurIPS 2024) · ☆67 · Updated 8 months ago
- [ICML 2023] Less is More: Task-aware Layer-wise Distillation for Language Model Compression · ☆40 · Updated 2 years ago
- Code to reproduce the experiments of the ICLR 2024 paper "Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging" · ☆12 · Updated 2 months ago
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely · ☆24 · Updated last year