[ICML'24] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark".
☆126 · Updated Jul 6, 2025
Alternatives and similar repositories for ZO-LLM
Users interested in ZO-LLM are comparing it to the libraries listed below.
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di… ☆71 · Updated Oct 9, 2024
- Second-Order Fine-Tuning without Pain for LLMs: a Hessian Informed Zeroth-Order Optimizer ☆24 · Updated Feb 11, 2025
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,160 · Updated Jan 11, 2024
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆36 · Updated Apr 4, 2024
- Code for the ICML 2024 paper "Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models" ☆12 · Updated Jun 25, 2024
- ☆10 · Updated Jun 19, 2023
- Fine-tuning Quantized Neural Networks with Zeroth-order Optimization ☆18 · Updated Sep 17, 2025
- [EMNLP 24] Source code for the paper "AdaZeta: Adaptive Zeroth-Order Tensor-Train Adaption for Memory-Efficient Large Language Models Fine-Tu…" ☆13 · Updated Dec 15, 2024
- [NeurIPS 2024] Low-rank memory-efficient optimizer without SVD ☆33 · Updated Jul 1, 2025
- ☆19 · Updated Dec 5, 2024
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models" ☆24 · Updated Mar 16, 2025
- SLTrain: a sparse plus low-rank approach for parameter- and memory-efficient pretraining (NeurIPS 2024) ☆39 · Updated Nov 1, 2024
- Official implementation of the ICML'24 paper "LQER: Low-Rank Quantization Error Reconstruction for LLMs" ☆19 · Updated Jul 11, 2024
- Code associated with the paper "Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees" ☆28 · Updated Apr 25, 2023
- ☆25 · Updated Feb 22, 2024
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low Rank Decomposition ☆19 · Updated Apr 16, 2025
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" ☆56 · Updated Sep 7, 2023
- ☆14 · Updated May 4, 2024
- [CoLM'25] Official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆157 · Updated Jan 14, 2026
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆47 · Updated Jun 4, 2024
- ☆30 · Updated Jul 22, 2024
- ICML 2022: Black-Box Tuning for Language-Model-as-a-Service & EMNLP 2022: BBTv2: Towards a Gradient-Free Future with Large Language Model… ☆271 · Updated Nov 8, 2022
- [ICML 2024] DPZero: Private Fine-Tuning of Language Models without Backpropagation ☆17 · Updated Sep 4, 2024
- ☆18 · Updated Aug 19, 2024
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert…" ☆16 · Updated Feb 4, 2025
- PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing ☆21 · Updated Mar 18, 2025
- ☆153 · Updated Sep 9, 2024
- [ICML 2025] From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories and Applications ☆52 · Updated Oct 30, 2025
- This repo contains papers, books, tutorials, and resources on Riemannian optimization. ☆58 · Updated Mar 18, 2026
- Implementation of PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs ☆24 · Updated Jun 6, 2024
- PyTorch implementation of the ICML 2023 paper "Bi-directional Masks for Efficient N:M Sparse Training" ☆13 · Updated Jun 7, 2023
- Kinetics: Rethinking Test-Time Scaling Laws ☆87 · Updated Jul 11, 2025
- Official implementation of the NeurIPS'24 Spotlight paper "Monte Carlo Tree Search based Space Transfer for Black-box Optimization" ☆13 · Updated Nov 28, 2024
- ☆15 · Updated Sep 24, 2023
- ☆34 · Updated Nov 11, 2025
- [ICLR 2025] Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization ☆28 · Updated Jan 27, 2026
- ☆36 · Updated May 28, 2024
- A fork of the PEFT library supporting Robust Adaptation (RoSA) ☆15 · Updated Aug 16, 2024
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models ☆285 · Updated Mar 15, 2025
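Many of the repositories above (ZO-LLM, MeZO, DeepZero, DPZero, AdaZeta) build on the same core idea: estimating gradients from forward passes alone via a two-point SPSA-style finite difference, so no backpropagation memory is needed. The sketch below is illustrative only, not code from any listed repo; the function name `zo_sgd_step` and the toy quadratic objective are my own, and real LLM fine-tuners apply the same perturbation in place to model weights rather than to a NumPy array.

```python
import numpy as np

def zo_sgd_step(params, loss_fn, lr=0.02, eps=1e-3, rng=None):
    """One zeroth-order SGD step with a two-point SPSA gradient estimate.

    The gradient is approximated along a single random direction z:
        g ≈ (L(θ + εz) − L(θ − εz)) / (2ε) · z
    Only two loss evaluations (forward passes) are needed; no backprop.
    """
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal(params.shape)          # random probe direction
    loss_plus = loss_fn(params + eps * z)           # forward pass 1
    loss_minus = loss_fn(params - eps * z)          # forward pass 2
    grad_est = (loss_plus - loss_minus) / (2 * eps) * z
    return params - lr * grad_est

# Toy objective standing in for a model loss: minimize ||θ − 1||².
target = np.ones(8)
def loss(p):
    return float(np.sum((p - target) ** 2))

theta = np.zeros(8)
rng = np.random.default_rng(42)
for _ in range(500):
    theta = zo_sgd_step(theta, loss, lr=0.02, eps=1e-3, rng=rng)
final_loss = loss(theta)  # far below the initial loss of 8.0
```

MeZO's memory trick is to store only the RNG seed and regenerate `z` for the update instead of materializing it twice; the estimator itself is the one shown here.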