sail-sg / oat
🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc.
☆433 · Updated last week
Alternatives and similar repositories for oat
Users interested in oat are comparing it to the libraries listed below.
- Reproducible, flexible LLM evaluations ☆237 · Updated last month
- ☆312 · Updated 2 months ago
- A simple unified framework for evaluating LLMs ☆240 · Updated 4 months ago
- ☆187 · Updated 4 months ago
- Official repository for “Reinforcement Learning for Reasoning in Large Language Models with One Training Example” ☆342 · Updated last week
- RewardBench: the first evaluation tool for reward models. ☆624 · Updated 2 months ago
- Official repo for paper: "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆254 · Updated 3 months ago
- SkyRL: A Modular Full-stack RL Library for LLMs ☆738 · Updated this week
- A version of verl to support tool use ☆333 · Updated this week
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆244 · Updated 4 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆234 · Updated 3 months ago
- Code for the paper: "Learning to Reason without External Rewards" ☆345 · Updated last month
- ☆204 · Updated 4 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆245 · Updated last week
- The HELMET Benchmark ☆165 · Updated last week
- ☆206 · Updated 6 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆248 · Updated 3 months ago
- A project to improve skills of large language models ☆529 · Updated this week
- Repo of paper "Free Process Rewards without Process Labels" ☆161 · Updated 5 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆241 · Updated last year
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆429 · Updated last year
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,068 · Updated 3 weeks ago
- A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning ☆244 · Updated 2 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆344 · Updated 8 months ago
- Research Code for preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆101 · Updated 2 weeks ago
- (ICML 2024) Alphazero-like Tree-Search can guide large language model decoding and training ☆279 · Updated last year
- Code and example data for the paper: Rule Based Rewards for Language Model Safety ☆193 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆219 · Updated 5 months ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆167 · Updated 2 months ago
- Tina: Tiny Reasoning Models via LoRA ☆275 · Updated last week