liangyuwang / zo2
ZO2 (Zeroth-Order Offloading): Full-Parameter Fine-Tuning of 175B LLMs with 18GB GPU Memory [COLM 2025]
☆200 · Updated 6 months ago
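The core idea behind zeroth-order fine-tuning, which ZO2 extends with offloading, is to replace backpropagation with gradient estimates from two forward passes under a shared random perturbation. Below is a minimal, self-contained sketch of a MeZO-style SPSA step on a toy objective; all names (`zo_step`, the toy loss) are illustrative, not ZO2's actual API, and a real implementation would perturb model weights in place rather than copy them.

```python
import random

def zo_step(params, loss_fn, eps=1e-3, lr=1e-2, seed=0):
    """One MeZO-style zeroth-order SPSA step (illustrative sketch).

    The gradient is estimated from two forward passes with a symmetric
    Rademacher perturbation z in {-1, +1}^n. Regenerating z from `seed`
    means the perturbation never has to be stored alongside the weights,
    which is the memory trick that makes 175B-scale fine-tuning with a
    small GPU footprint plausible.
    """
    def perturbed(sign):
        rng = random.Random(seed)  # same seed => same z both times
        return [p + sign * eps * (1 if rng.random() < 0.5 else -1)
                for p in params]

    loss_plus = loss_fn(perturbed(+1))
    loss_minus = loss_fn(perturbed(-1))
    grad_scale = (loss_plus - loss_minus) / (2 * eps)

    # Apply the update, regenerating the identical z from the seed.
    rng = random.Random(seed)
    return [p - lr * grad_scale * (1 if rng.random() < 0.5 else -1)
            for p in params]

# Toy usage: minimize a sum of squares with forward passes only.
params = [1.0, -2.0, 3.0]
for step in range(1000):
    params = zo_step(params, lambda p: sum(x * x for x in p), seed=step)
```

On this quadratic the iterates contract toward zero in expectation, despite no gradient ever being computed; ZO2's contribution on top of this estimator is scheduling weights between CPU and GPU so only the active block resides in GPU memory.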
Alternatives and similar repositories for zo2
Users interested in zo2 are comparing it to the repositories listed below.
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆147 · Updated 10 months ago
- ☆112 · Updated 7 months ago
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models (NeurIPS 2025) ☆172 · Updated 3 months ago
- 青稞Talk ☆190 · Updated 3 weeks ago
- "What, How, Where, and How Well? A Survey on Test-Time Scaling in Large Language Models" repository ☆86 · Updated this week
- [ICLR 2026] Unveiling Super Experts in Mixture-of-Experts Large Language Models ☆35 · Updated 4 months ago
- A generalized framework for subspace-tuning methods in parameter-efficient fine-tuning ☆171 · Updated 2 weeks ago
- The official repo of "One RL to See Them All: Visual Triple Unified Reinforcement Learning" ☆331 · Updated 8 months ago
- ☆209 · Updated 3 months ago
- Pre-trained, Scalable, High-Performance Reward Models via Policy Discriminative Learning ☆164 · Updated 4 months ago
- D^2-MoE: Delta Decompression for MoE-based LLM Compression ☆72 · Updated 10 months ago
- [ICML 2025] The official implementation of "C-3PO: Compact Plug-and-Play Proxy Optimization to Achieve Human-like Retrieval-Augmented Gene…" ☆41 · Updated 9 months ago
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra-Long Sequence Generation ☆121 · Updated 8 months ago
- Trinity-RFT is a general-purpose, flexible, and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆520 · Updated this week
- ☆82 · Updated 10 months ago
- DeepSpeed tutorials, annotated examples, and study notes (efficient training of large models) ☆187 · Updated 2 years ago
- Extrapolating RLVR to General Domains without Verifiers ☆200 · Updated 6 months ago
- A toolkit for knowledge distillation of large language models ☆266 · Updated last week
- ☆182 · Updated 9 months ago
- MiroRL is an MCP-first reinforcement learning framework for deep-research agents ☆229 · Updated 5 months ago
- Adapt an LLM to a Mixture-of-Experts model using parameter-efficient fine-tuning (LoRA), injecting the LoRAs into the FFN ☆84 · Updated 3 months ago
- ☆240 · Updated last week
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning ☆255 · Updated 6 months ago
- [ACL 2025] The official PyTorch implementation of the paper "Condor: Enhance LLM Alignment with Knowledge-Driven Data Synthesis and Refinement" ☆40 · Updated 8 months ago
- ☆125 · Updated last year
- A highly capable 2.4B lightweight LLM using only 1T of pre-training data, with all details released ☆223 · Updated 6 months ago
- An easy-to-use, scalable, and high-performance RLHF framework designed for multimodal models ☆155 · Updated last month
- [ICML 2025] Forest-of-Thought: Scaling Test-Time Compute for Enhancing LLM Reasoning ☆52 · Updated 9 months ago
- [ICML 2025 Oral] Mixture of Lookup Experts ☆70 · Updated 2 months ago
- One-Shot Entropy Minimization ☆188 · Updated 7 months ago