Eurus (☆321, updated Sep 18, 2024)
Alternatives and similar repositories for Eurus
Users interested in Eurus are comparing it to the repositories listed below.
- Repo of the paper "Free Process Rewards without Process Labels" (☆169, updated Mar 14, 2025)
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs (☆459, updated Apr 18, 2024)
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" (☆75, updated May 20, 2025)
- ☆342 (updated Jun 5, 2025)
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision (☆124, updated Sep 9, 2024)
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] (☆149, updated Oct 27, 2024)
- Implementation of "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" (☆391, updated Jan 19, 2025)
- Scalable RL solution for advanced reasoning of language models (☆1,806, updated Mar 18, 2025)
- A recipe for online RLHF and online iterative DPO (☆539, updated Dec 28, 2024)
- A library for advanced large language model reasoning (☆2,333, updated Jun 10, 2025)
- Code for Quiet-STaR (☆741, updated Aug 21, 2024)
- RewardBench: the first evaluation tool for reward models (☆696, updated Feb 16, 2026)
- Official repository for ORPO (☆471, updated May 31, 2024)
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" (☆231, updated Aug 2, 2024)
- Recipes to train reward models for RLHF (☆1,515, updated Apr 24, 2025)
- Directional Preference Alignment (☆58, updated Sep 23, 2024)
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] (☆588, updated Dec 9, 2024)
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) (☆692, updated Jan 20, 2025)
- ☆331 (updated May 31, 2025)
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward (☆946, updated Feb 16, 2025)
- AllenAI's post-training codebase (☆3,592, updated this week)
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning (☆512, updated Oct 20, 2024)
- MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models (☆454, updated Feb 1, 2024)
- Source code for Self-Evaluation Guided MCTS for online DPO (☆329, updated Jan 29, 2026)
- Reaching LLaMA2 Performance with 0.1M Dollars (☆988, updated Jul 23, 2024)
- ☆552 (updated Jan 2, 2025)
- [ICML 2024] Data and code for the paper "Training-Free Long-Context Scaling of Large Language Models" (☆446, updated Oct 16, 2024)
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models (☆1,833, updated Jan 17, 2025)
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] (☆632, updated Jul 29, 2025)
- Muon is Scalable for LLM Training (☆1,437, updated Aug 3, 2025)
- An easy-to-use, scalable, high-performance agentic RL framework based on Ray (PPO, DAPO, REINFORCE++, TIS, vLLM, async RL) (☆9,037, updated this week)
- Self-playing Adversarial Language Game Enhances LLM Reasoning (NeurIPS 2024) (☆144, updated Feb 24, 2025)
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" (☆202, updated Apr 17, 2025)
- Collection of papers on scalable automated alignment (☆93, updated Oct 22, 2024)
- ☆242 (updated Aug 14, 2024)
- An Open Large Reasoning Model for Real-World Solutions (☆1,532, updated Feb 13, 2026)
- [NeurIPS 2024] Official code for 🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving (☆120, updated Dec 10, 2024)
- GenRM-CoT: Data release for verification rationales (☆68, updated Oct 16, 2024)
- The official implementation of Self-Play Fine-Tuning (SPIN) (☆1,235, updated May 8, 2024)