openai / safety-rbr-code-and-data
Code and example data for the paper: Rule Based Rewards for Language Model Safety
☆203 · Updated last year
Alternatives and similar repositories for safety-rbr-code-and-data
Users interested in safety-rbr-code-and-data are comparing it to the repositories listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users ☆245 · Updated last year
- ☆61 · Updated 8 months ago
- ☆108 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆124 · Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆178 · Updated 6 months ago
- ☆100 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆59 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆147 · Updated last year
- ☆220 · Updated 9 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆181 · Updated 7 months ago
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated 3 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆114 · Updated this week
- ☆202 · Updated 8 months ago
- ☆117 · Updated 11 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆254 · Updated 8 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆124 · Updated last year
- ☆213 · Updated 10 months ago
- ☆329 · Updated 7 months ago
- Repo of paper "Free Process Rewards without Process Labels" ☆168 · Updated 10 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆134 · Updated last year
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆223 · Updated 7 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS 2025] ☆210 · Updated last month
- RL Scaling and Test-Time Scaling (ICML'25) ☆112 · Updated 11 months ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆191 · Updated 11 months ago
- ☆110 · Updated 8 months ago
- ☆160 · Updated last year
- Self-playing Adversarial Language Game Enhances LLM Reasoning, NeurIPS 2024 ☆143 · Updated 10 months ago
- Reproducible, flexible LLM evaluations ☆325 · Updated last month
- A brief and partial summary of RLHF algorithms ☆142 · Updated 10 months ago
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆63 · Updated last year