bbartoldson / TBA
Official implementation of TBA for async LLM post-training.
☆20 · Updated 3 months ago
Alternatives and similar repositories for TBA
Users interested in TBA are comparing it to the repositories listed below.
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆63 · Updated 5 months ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates ☆151 · Updated this week
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆65 · Updated 7 months ago
- GenRM-CoT: Data release for verification rationales ☆65 · Updated 11 months ago
- Long Context Extension and Generalization in LLMs ☆60 · Updated last year
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆30 · Updated last year
- Code for ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆100 · Updated 2 months ago
- Replicating O1 inference-time scaling laws ☆90 · Updated 10 months ago
- ☆39 · Updated 6 months ago
- [COLM 2025] Code for the paper "Learning Adaptive Parallel Reasoning with Language Models" ☆129 · Updated last month
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆172 · Updated 4 months ago
- ☆54 · Updated 3 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆79 · Updated last year
- Kinetics: Rethinking Test-Time Scaling Laws ☆80 · Updated 2 months ago
- The official repository for "SkyLadder: Better and Faster Pretraining via Context Window Scheduling" ☆34 · Updated last month
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆51 · Updated 11 months ago
- ☆123 · Updated 7 months ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- Async pipelined version of Verl ☆117 · Updated 5 months ago
- Implementation of the Quiet-STaR paper (https://arxiv.org/pdf/2403.09629.pdf) ☆54 · Updated last year
- ☆104 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆57 · Updated last year
- CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings ☆53 · Updated 8 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆98 · Updated 3 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆107 · Updated last month
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆75 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆228 · Updated 3 weeks ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- ☆74 · Updated 10 months ago
- ☆72 · Updated last year