openai / safety-rbr-code-and-data
Code and example data for the paper: Rule Based Rewards for Language Model Safety
☆190 · Updated last year
Alternatives and similar repositories for safety-rbr-code-and-data
Users interested in safety-rbr-code-and-data are comparing it to the repositories listed below.
- ☆99 · Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆169 · Updated 3 weeks ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated 10 months ago
- Repository for the paper "Free Process Rewards without Process Labels" ☆160 · Updated 4 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆166 · Updated 2 months ago
- Official GitHub repository for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆139 · Updated 10 months ago
- ☆187 · Updated 3 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆233 · Updated 9 months ago
- Self-Alignment with Principle-Following Reward Models ☆162 · Updated 2 months ago
- ☆91 · Updated 8 months ago
- ☆114 · Updated 6 months ago
- Reproducible, flexible LLM evaluations ☆226 · Updated 3 weeks ago
- ☆135 · Updated 8 months ago
- ☆309 · Updated 2 months ago
- ☆203 · Updated 4 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆113 · Updated last year
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆159 · Updated last week
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆61 · Updated 7 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆233 · Updated 2 months ago
- A simple unified framework for evaluating LLMs ☆235 · Updated 3 months ago
- Self-playing Adversarial Language Game Enhances LLM Reasoning, NeurIPS 2024 ☆137 · Updated 5 months ago
- ☆186 · Updated 2 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆112 · Updated 2 weeks ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆109 · Updated 6 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆60 · Updated 5 months ago
- Code for the paper "Autonomous Evaluation and Refinement of Digital Agents" [COLM 2024] ☆139 · Updated 8 months ago
- Replicating O1 inference-time scaling laws ☆89 · Updated 8 months ago
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆108 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆56 · Updated 10 months ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 2 months ago