ethz-spylab / rlhf_trojan_competition
Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024.
☆116 · Jun 13, 2024 · Updated last year
Alternatives and similar repositories for rlhf_trojan_competition
Users interested in rlhf_trojan_competition are comparing it to the repositories listed below.
- Code for paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" · ☆66 · Apr 24, 2024 · Updated last year
- PAL: Proxy-Guided Black-Box Attack on Large Language Models · ☆57 · Aug 17, 2024 · Updated last year
- ☆19 · Feb 25, 2024 · Updated last year
- This is the starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. · ☆90 · May 19, 2024 · Updated last year
- Improving Alignment and Robustness with Circuit Breakers · ☆258 · Sep 24, 2024 · Updated last year
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] · ☆36 · Jul 3, 2021 · Updated 4 years ago
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) · ☆65 · Jan 11, 2025 · Updated last year
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] · ☆377 · Jan 23, 2025 · Updated last year
- ☆60 · Mar 9, 2023 · Updated 2 years ago
- ☆35 · Sep 13, 2023 · Updated 2 years ago
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] · ☆43 · Apr 28, 2024 · Updated last year
- Code for T-MARS data filtering · ☆35 · Aug 23, 2023 · Updated 2 years ago
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" · ☆84 · Jul 24, 2025 · Updated 6 months ago
- This is the official GitHub repo for our paper: "BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Lang…" · ☆21 · Jul 3, 2024 · Updated last year
- ☆70 · Feb 4, 2024 · Updated 2 years ago
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety. · ☆93 · May 9, 2024 · Updated last year
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". · ☆129 · Mar 9, 2024 · Updated last year
- ☆20 · Feb 11, 2024 · Updated 2 years ago
- Official implementation of ICLR'24 paper, "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX…) · ☆88 · Mar 15, 2024 · Updated last year
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" · ☆173 · Feb 20, 2024 · Updated last year
- ☆24 · Jul 25, 2024 · Updated last year
- TAP: An automated jailbreaking method for black-box LLMs · ☆220 · Dec 10, 2024 · Updated last year
- ☆25 · Nov 11, 2025 · Updated 3 months ago
- ☆12 · Jul 12, 2024 · Updated last year
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models · ☆90 · May 2, 2025 · Updated 9 months ago
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] · ☆54 · Feb 6, 2023 · Updated 3 years ago
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal · ☆854 · Aug 16, 2024 · Updated last year
- Representation Engineering: A Top-Down Approach to AI Transparency · ☆946 · Aug 14, 2024 · Updated last year
- ☆30 · Jun 19, 2023 · Updated 2 years ago
- ☆21 · Jun 22, 2025 · Updated 7 months ago
- A Python SDK for LLM fine-tuning and inference on RunPod infrastructure · ☆17 · Updated this week
- ☆193 · Nov 26, 2023 · Updated 2 years ago
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) · ☆84 · Oct 23, 2024 · Updated last year
- ☆51 · Oct 23, 2023 · Updated 2 years ago
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) · ☆25 · Oct 17, 2022 · Updated 3 years ago
- Code for "Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors" · ☆64 · Jan 14, 2020 · Updated 6 years ago
- Decomposing and Editing Predictions by Modeling Model Computation · ☆139 · Jun 12, 2024 · Updated last year
- Tools for understanding how transformer predictions are built layer-by-layer · ☆567 · Aug 7, 2025 · Updated 6 months ago
- ☆15 · Jul 24, 2022 · Updated 3 years ago