Code and example data for the paper: Rule Based Rewards for Language Model Safety
☆208 · Jul 19, 2024 · Updated last year
Alternatives and similar repositories for safety-rbr-code-and-data
Users interested in safety-rbr-code-and-data are comparing it to the repositories listed below.
- ☆16 · Jul 23, 2024 · Updated last year
- Independent robustness evaluation of Improving Alignment and Robustness with Short Circuiting · ☆17 · Apr 15, 2025 · Updated last year
- Azure Command-Line Interface · ☆15 · Mar 26, 2026 · Updated last month
- ☆20 · Nov 3, 2024 · Updated last year
- ☆78 · Oct 4, 2025 · Updated 7 months ago
- Improving Alignment and Robustness with Circuit Breakers · ☆261 · Sep 24, 2024 · Updated last year
- ☆163 · Nov 23, 2024 · Updated last year
- RewardBench: the first evaluation tool for reward models · ☆713 · Feb 16, 2026 · Updated 2 months ago
- This repo is for the safety topic, including attacks, defenses, and studies related to reasoning and RL · ☆65 · Sep 5, 2025 · Updated 8 months ago
- Recipes to train reward models for RLHF · ☆1,531 · Apr 24, 2025 · Updated last year
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] · ☆43 · Apr 28, 2024 · Updated 2 years ago
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs · ☆13 · Jun 20, 2025 · Updated 10 months ago
- [AAAI'26 Oral] Official Implementation of STAR-1: Safer Alignment of Reasoning LLMs with 1K Data · ☆33 · Apr 7, 2025 · Updated last year
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" · ☆1,840 · Jun 17, 2025 · Updated 10 months ago
- ☆27 · Oct 6, 2024 · Updated last year
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training" · ☆143 · Mar 9, 2024 · Updated 2 years ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… · ☆32 · Apr 20, 2024 · Updated 2 years ago
- Code for paper: "Executing Arithmetic: Fine-Tuning Large Language Models as Turing Machines" · ☆11 · Oct 11, 2024 · Updated last year
- ☆44 · Oct 1, 2024 · Updated last year
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety · ☆93 · May 9, 2024 · Updated last year
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal · ☆936 · Aug 16, 2024 · Updated last year
- SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types · ☆25 · Nov 29, 2024 · Updated last year
- ☆21 · Jun 16, 2025 · Updated 10 months ago
- ☆28 · Sep 5, 2024 · Updated last year
- Scalable toolkit for efficient model alignment · ☆853 · Oct 6, 2025 · Updated 7 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives · ☆71 · Feb 22, 2024 · Updated 2 years ago
- Code for the API, workload execution, and agents underlying the LLMail-Inject Adaptive Prompt Injection Challenge · ☆23 · Apr 9, 2026 · Updated 3 weeks ago
- 800,000 step-level correctness labels on LLM solutions to MATH problems · ☆2,126 · Jun 1, 2023 · Updated 2 years ago
- ☆585 · Jul 19, 2024 · Updated last year
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback · ☆1,600 · Nov 24, 2025 · Updated 5 months ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) · ☆702 · Jan 20, 2025 · Updated last year
- ☆39 · May 21, 2025 · Updated 11 months ago
- ☆77 · Apr 9, 2025 · Updated last year
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct · ☆193 · Jan 16, 2025 · Updated last year
- ☆1,135 · Jan 10, 2026 · Updated 3 months ago
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy… · ☆9,441 · Updated this week
- A recipe for online RLHF and online iterative DPO · ☆544 · Dec 28, 2024 · Updated last year
- [ICLR 2024] This is the official implementation for the paper: "Beyond imitation: Leveraging fine-grained quality signals for alignment" · ☆10 · May 5, 2024 · Updated 2 years ago
- ☆59 · Sep 2, 2024 · Updated last year