Code and example data for the paper: Rule Based Rewards for Language Model Safety
☆208 · Updated Jul 19, 2024
Alternatives and similar repositories for safety-rbr-code-and-data
Users interested in safety-rbr-code-and-data are comparing it to the repositories listed below.
- ☆16 · Updated Jul 23, 2024
- Independent robustness evaluation of Improving Alignment and Robustness with Short Circuiting ☆18 · Updated Apr 15, 2025
- Azure Command-Line Interface ☆12 · Updated Dec 10, 2023
- ☆20 · Updated Nov 3, 2024
- ☆67 · Updated Oct 4, 2025
- Improving Alignment and Robustness with Circuit Breakers ☆258 · Updated Sep 24, 2024
- ☆160 · Updated Nov 23, 2024
- RewardBench: the first evaluation tool for reward models. ☆705 · Updated Feb 16, 2026
- This repo covers the safety topic, including attacks, defenses, and studies related to reasoning and RL ☆61 · Updated Sep 5, 2025
- Recipes to train reward models for RLHF. ☆1,521 · Updated Apr 24, 2025
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] ☆43 · Updated Apr 28, 2024
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs ☆13 · Updated Jun 20, 2025
- [AAAI'26 Oral] Official Implementation of STAR-1: Safer Alignment of Reasoning LLMs with 1K Data ☆33 · Updated Apr 7, 2025
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,832 · Updated Jun 17, 2025
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆137 · Updated Mar 9, 2024
- ☆27 · Updated Oct 6, 2024
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Updated Apr 20, 2024
- Code for the paper "Executing Arithmetic: Fine-Tuning Large Language Models as Turing Machines" ☆11 · Updated Oct 11, 2024
- ☆44 · Updated Oct 1, 2024
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety. ☆93 · Updated May 9, 2024
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆879 · Updated Aug 16, 2024
- Code for the API, workload execution, and agents underlying the LLMail-Inject Adaptive Prompt Injection Challenge ☆21 · Updated Mar 1, 2026
- ☆26 · Updated Sep 5, 2024
- SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types ☆24 · Updated Nov 29, 2024
- ☆20 · Updated Jun 16, 2025
- Scalable toolkit for efficient model alignment ☆850 · Updated Oct 6, 2025
- ☆36 · Updated May 21, 2025
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,106 · Updated Jun 1, 2023
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆70 · Updated Feb 22, 2024
- ☆578 · Updated Jul 19, 2024
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,591 · Updated Nov 24, 2025
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆695 · Updated Jan 20, 2025
- ☆76 · Updated Apr 9, 2025
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆191 · Updated Jan 16, 2025
- ☆1,113 · Updated Jan 10, 2026
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,231 · Updated this week
- A recipe for online RLHF and online iterative DPO. ☆545 · Updated Dec 28, 2024
- [ICLR 2024] Official implementation for the paper "Beyond Imitation: Leveraging Fine-Grained Quality Signals for Alignment" ☆10 · Updated May 5, 2024
- ☆58 · Updated Sep 2, 2024