openai / safety-rbr-code-and-data
Code and example data for the paper: Rule Based Rewards for Language Model Safety
☆193 · Updated last year
Alternatives and similar repositories for safety-rbr-code-and-data
Users interested in safety-rbr-code-and-data are comparing it to the repositories listed below.
- ☆187 · Updated 4 months ago
- Repo of the paper "Free Process Rewards without Process Labels" ☆161 · Updated 5 months ago
- ☆100 · Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆170 · Updated last month
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆141 · Updated 11 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated 11 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆237 · Updated 9 months ago
- ☆91 · Updated 9 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆111 · Updated 7 months ago
- ☆204 · Updated 4 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆234 · Updated 3 months ago
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆170 · Updated 3 months ago
- Self-Alignment with Principle-Following Reward Models ☆163 · Updated 3 months ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆164 · Updated last month
- ☆135 · Updated 9 months ago
- ☆50 · Updated 3 months ago
- ☆312 · Updated 2 months ago
- ☆206 · Updated 6 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆112 · Updated last week
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆106 · Updated 6 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains ☆163 · Updated 2 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆146 · Updated 9 months ago
- Reproducible, flexible LLM evaluations ☆237 · Updated last month
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆114 · Updated last year
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 3 months ago
- Code for paper "Autonomous Evaluation and Refinement of Digital Agents" [COLM 2024] ☆141 · Updated 8 months ago
- ☆200 · Updated 2 months ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆184 · Updated 7 months ago
- A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning ☆244 · Updated 2 months ago
- ☆148 · Updated 9 months ago