step-law / steplaw
☆188 · Updated last month
Alternatives and similar repositories for steplaw
Users interested in steplaw are comparing it to the libraries listed below.
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ☆299 · Updated last month
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆179 · Updated 2 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆127 · Updated last month
- The official repo of One RL to See Them All: Visual Triple Unified Reinforcement Learning ☆230 · Updated this week
- A visualization tool for deeper understanding and easier debugging of RLHF training ☆203 · Updated 3 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆166 · Updated last week
- ☆198 · Updated 7 months ago
- VeOmni: Scaling any Modality Model Training to any Accelerators with PyTorch native Training Framework ☆339 · Updated 3 weeks ago
- ☆201 · Updated 3 months ago
- Related works and background techniques for OpenAI o1 ☆221 · Updated 4 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior ☆239 · Updated last month
- A Comprehensive Survey on Long Context Language Modeling ☆147 · Updated 2 weeks ago
- ☆150 · Updated last month
- ☆269 · Updated last week
- Official Repository of "Learning to Reason under Off-Policy Guidance" ☆205 · Updated this week
- ☆404 · Updated this week
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆100 · Updated this week
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models ☆125 · Updated last month
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆345 · Updated 2 weeks ago
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models ☆126 · Updated last week
- qwen-nsa ☆66 · Updated last month
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details ☆182 · Updated this week
- A flexible and efficient training framework for large-scale alignment tasks ☆364 · Updated this week
- ☆63 · Updated 6 months ago
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆368 · Updated 4 months ago
- ☆208 · Updated last week
- a-m-team's exploration in large language modeling ☆128 · Updated last week
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆213 · Updated 2 weeks ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆506 · Updated last week
- ☆319 · Updated 10 months ago