openai / safety-rbr-code-and-data
Code and example data for the paper: Rule Based Rewards for Language Model Safety
☆205 · Updated last year
Alternatives and similar repositories for safety-rbr-code-and-data
Users interested in safety-rbr-code-and-data are comparing it to the repositories listed below.
- ☆99 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆124 · Updated last year
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆260 · Updated 9 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆114 · Updated this week
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆178 · Updated 6 months ago
- ☆224 · Updated 10 months ago
- ☆328 · Updated 8 months ago
- ☆62 · Updated 8 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆245 · Updated last year
- Repo of the paper "Free Process Rewards without Process Labels" ☆168 · Updated 10 months ago
- ☆107 · Updated last year
- ☆117 · Updated last year
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆183 · Updated 8 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆147 · Updated last year
- ☆203 · Updated 9 months ago
- RL Scaling and Test-Time Scaling (ICML 2025) ☆112 · Updated last year
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆65 · Updated last year
- ☆214 · Updated 11 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆116 · Updated last week
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated 4 months ago
- Self-playing Adversarial Language Game Enhances LLM Reasoning (NeurIPS 2024) ☆143 · Updated 11 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆71 · Updated 11 months ago
- ☆160 · Updated last year
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆225 · Updated 7 months ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆191 · Updated last year
- ☆140 · Updated last year
- [NeurIPS 2024] Official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆134 · Updated 10 months ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆182 · Updated 6 months ago
- Critique-out-Loud Reward Models ☆73 · Updated last year
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS 2025] ☆214 · Updated 2 months ago