Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024.
☆115 · Updated Jun 13, 2024
Alternatives and similar repositories for rlhf_trojan_competition
Users interested in rlhf_trojan_competition are comparing it to the libraries listed below.
- Code for paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" ☆65 · Updated Apr 24, 2024
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆56 · Updated Aug 17, 2024
- ☆19 · Updated Feb 25, 2024
- Official GitHub repo for the paper "BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Lang…" ☆22 · Updated Jul 3, 2024
- Starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition ☆91 · Updated May 19, 2024
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] ☆43 · Updated Apr 28, 2024
- Improving Alignment and Robustness with Circuit Breakers ☆260 · Updated Sep 24, 2024
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] ☆35 · Updated Jul 3, 2021
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆65 · Updated Jan 11, 2025
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆381 · Updated Jan 23, 2025
- A collection of different ways to implement accessing and modifying internal model activations for LLMs ☆22 · Updated Oct 18, 2024
- ☆13 · Updated Jul 12, 2024
- ☆35 · Updated Sep 13, 2023
- ☆59 · Updated Mar 9, 2023
- ☆18 · Updated Aug 15, 2022
- ☆30 · Updated Jun 19, 2023
- A Python SDK for LLM finetuning and inference on Runpod infrastructure ☆25 · Updated this week
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety ☆93 · Updated May 9, 2024
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆83 · Updated Oct 23, 2024
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" ☆176 · Updated Feb 20, 2024
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] ☆54 · Updated Feb 6, 2023
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆924 · Updated Aug 16, 2024
- TAP: An automated jailbreaking method for black-box LLMs ☆227 · Updated Dec 10, 2024
- ☆20 · Updated Feb 11, 2024
- Code for T-MARS data filtering ☆35 · Updated Aug 23, 2023
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆95 · Updated last week
- Code for "Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors" ☆64 · Updated Jan 14, 2020
- ☆70 · Updated Feb 4, 2024
- Code for NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆63 · Updated May 8, 2023
- ☆25 · Updated Nov 11, 2025
- Random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training" ☆141 · Updated Mar 9, 2024
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆21 · Updated Apr 15, 2024
- Code for Voice Jailbreak Attacks Against GPT-4o ☆38 · Updated May 31, 2024
- A benchmark for mechanistic discovery of circuits in Transformers ☆16 · Updated Dec 15, 2024
- ☆15 · Updated Jul 24, 2022
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM ☆39 · Updated Jan 17, 2025
- ☆199 · Updated Nov 26, 2023
- ☆24 · Updated Jul 25, 2024
- Representation Engineering: A Top-Down Approach to AI Transparency ☆983 · Updated Aug 14, 2024