Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes"
☆31 · Mar 28, 2024 · Updated 2 years ago
Alternatives and similar repositories for Bonsai
Users that are interested in Bonsai are comparing it to the libraries listed below.
- Structured Pruning Adapters in PyTorch ☆19 · Aug 30, 2023 · Updated 2 years ago
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆47 · Jun 4, 2024 · Updated last year
- LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation ☆23 · Apr 24, 2024 · Updated 2 years ago
- [ICLR 2026] dParallel: Learnable Parallel Decoding for dLLMs ☆61 · Apr 12, 2026 · Updated 3 weeks ago
- Listwise Reward Estimation for Offline Preference-based Reinforcement Learning (ICML 2024) ☆18 · Jun 18, 2024 · Updated last year
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆92 · Oct 22, 2024 · Updated last year
- [ICLR 2024] Jaiswal, A., Gan, Z., Du, X., Zhang, B., Wang, Z., & Yang, Y. Compressing LLMs: The truth is rarely pure and never simple. ☆27 · Apr 21, 2025 · Updated last year
- This repository contains the official implementation for the ECCV'22 paper, "SPIN: An Empirical Evaluation on Sharing Parameters of Isotr…" ☆20 · Sep 9, 2023 · Updated 2 years ago
- For releasing code related to compression methods for transformers, accompanying our publications ☆461 · Jan 16, 2025 · Updated last year
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆643 · Mar 4, 2024 · Updated 2 years ago
- ☆30 · Jul 22, 2024 · Updated last year
- ☆63 · Dec 15, 2024 · Updated last year
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆1,123 · Oct 7, 2024 · Updated last year
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆16 · Feb 15, 2025 · Updated last year
- [NeurIPS'25] dKV-Cache: The Cache for Diffusion Language Models ☆129 · May 22, 2025 · Updated 11 months ago
- [ICML 2024] Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆41 · Feb 4, 2025 · Updated last year
- Code for MLSys 2024 Paper "SiDA-MoE: Sparsity-Inspired Data-Aware Serving for Efficient and Scalable Large Mixture-of-Experts Models" ☆22 · Apr 13, 2024 · Updated 2 years ago
- Code Implementation for "NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models" (EMNLP …) ☆17 · Oct 17, 2023 · Updated 2 years ago
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆180 · Apr 24, 2026 · Updated last week
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆156 · Mar 21, 2025 · Updated last year
- [ACL 2023] Training Trajectories of Language Models Across Scales https://arxiv.org/pdf/2212.09803.pdf ☆25 · Nov 14, 2023 · Updated 2 years ago
- The framework to prune LLMs to any size and any config. ☆96 · Mar 1, 2024 · Updated 2 years ago
- Fine-tuning Quantized Neural Networks with Zeroth-order Optimization ☆19 · Sep 17, 2025 · Updated 7 months ago
- Awesome list for LLM pruning. ☆291 · Oct 11, 2025 · Updated 6 months ago
- The code of the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆18 · Mar 21, 2023 · Updated 3 years ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Apr 21, 2025 · Updated last year
- Elevating Chess Strategy with a Fine-Tuned Large Language Model ☆17 · Dec 8, 2023 · Updated 2 years ago
- ☆17 · Jul 22, 2025 · Updated 9 months ago
- An alternative UI for playing with ChatGPT ☆21 · Mar 14, 2023 · Updated 3 years ago
- DuoGuard: A Two-Player RL-Driven Framework for Multilingual LLM Guardrails ☆33 · Feb 26, 2025 · Updated last year
- Knowledge transfer from high-resource to low-resource programming languages for Code LLMs ☆16 · Aug 12, 2025 · Updated 8 months ago
- Training dataset for NDL classical-text OCR (processed data from the Minna de Honkoku transcription project) ☆20 · Mar 13, 2026 · Updated last month
- ☆14 · Apr 16, 2024 · Updated 2 years ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆526 · Oct 20, 2024 · Updated last year
- Are Intermediate Layers and Labels Really Necessary? A General Language Model Distillation Method; GKD: A General Knowledge Distillation… ☆33 · Aug 4, 2023 · Updated 2 years ago
- ☆11 · Jan 13, 2026 · Updated 3 months ago
- A simple and effective LLM pruning approach. ☆863 · Aug 9, 2024 · Updated last year
- BESA is a differentiable weight pruning technique for large language models. ☆17 · Mar 4, 2024 · Updated 2 years ago
- HALO: Hadamard-Assisted Low-Precision Optimization and Training method for finetuning LLMs. 🚀 The official implementation of https://arx… ☆28 · Feb 17, 2025 · Updated last year