openai / safety-rbr-code-and-data
Code and example data for the paper "Rule Based Rewards for Language Model Safety"
☆202 Updated last year
Alternatives and similar repositories for safety-rbr-code-and-data
Users interested in safety-rbr-code-and-data are comparing it to the repositories listed below.
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆178 Updated 4 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆249 Updated 6 months ago
- ☆103 Updated last year
- Repo of the paper "Free Process Rewards without Process Labels" ☆165 Updated 8 months ago
- ☆100 Updated last year
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆111 Updated 3 weeks ago
- ☆215 Updated 7 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆125 Updated last year
- ☆197 Updated 6 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆244 Updated last year
- ☆326 Updated 5 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆176 Updated 5 months ago
- Self-Alignment with Principle-Following Reward Models ☆169 Updated last month
- Reproducible, flexible LLM evaluations ☆264 Updated 2 weeks ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆143 Updated last year
- 🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agent, ACL'24 Best Resource… ☆307 Updated this week
- ☆212 Updated 8 months ago
- Self-playing Adversarial Language Game Enhances LLM Reasoning, NeurIPS 2024 ☆141 Updated 8 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆112 Updated 9 months ago
- Critique-out-Loud Reward Models ☆70 Updated last year
- Code for the paper "Autonomous Evaluation and Refinement of Digital Agents" [COLM 2024] ☆147 Updated 11 months ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆190 Updated 9 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆67 Updated 8 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆219 Updated 5 months ago
- ☆154 Updated 11 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆120 Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆58 Updated last year
- ☆210 Updated 5 months ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆116 Updated 11 months ago
- A brief and partial summary of RLHF algorithms ☆136 Updated 8 months ago