openai / safety-rbr-code-and-data
Code and example data for the paper "Rule Based Rewards for Language Model Safety"
☆185 · Updated 8 months ago
Alternatives and similar repositories for safety-rbr-code-and-data:
Users interested in safety-rbr-code-and-data are comparing it to the repositories listed below.
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆135 · Updated 2 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆131 · Updated 6 months ago
- ☆96 · Updated 9 months ago
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling ☆101 · Updated 2 months ago
- ☆148 · Updated 4 months ago
- Repo of the paper "Free Process Rewards without Process Labels" ☆140 · Updated last month
- Benchmarking LLMs with Challenging Tasks from Real Users ☆221 · Updated 5 months ago
- ☆151 · Updated 3 weeks ago
- ☆105 · Updated 2 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆120 · Updated 7 months ago
- Self-Alignment with Principle-Following Reward Models ☆158 · Updated last year
- Replicating O1 inference-time scaling laws ☆83 · Updated 4 months ago
- ☆278 · Updated last month
- ☆70 · Updated 5 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆180 · Updated last month
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆174 · Updated last month
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆175 · Updated this week
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆74 · Updated 10 months ago
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆105 · Updated last month
- Self-playing Adversarial Language Game Enhances LLM Reasoning, NeurIPS 2024 ☆124 · Updated last month
- Reproducible, flexible LLM evaluations ☆189 · Updated 3 weeks ago
- ☆165 · Updated last month
- ☆184 · Updated last month
- Conifer: Improving Complex Constrained Instruction-Following Ability of Large Language Models ☆88 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆139 · Updated 5 months ago
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆54 · Updated 6 months ago
- A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning ☆62 · Updated this week
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆114 · Updated 3 weeks ago
- Critique-out-Loud Reward Models ☆57 · Updated 6 months ago
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆185 · Updated 8 months ago