centerforaisafety / Intro_to_ML_Safety
☆67 · Updated last year
Alternatives and similar repositories for Intro_to_ML_Safety:
Users interested in Intro_to_ML_Safety are comparing it to the repositories listed below.
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024. ☆110 · Updated 7 months ago
- Starter kit and data loading code for the Trojan Detection Challenge NeurIPS 2022 competition ☆33 · Updated last year
- Official repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" ☆42 · Updated 3 months ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training". ☆92 · Updated 10 months ago
- This is the starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. ☆82 · Updated 7 months ago
- A curated list of awesome resources for Artificial Intelligence Alignment research ☆69 · Updated last year
- Improving Alignment and Robustness with Circuit Breakers ☆174 · Updated 3 months ago
- ☆51 · Updated last year
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning method. ☆92 · Updated 8 months ago
- we got you bro ☆33 · Updated 5 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆66 · Updated 10 months ago
- Discount jupyter. ☆47 · Updated 2 years ago
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] ☆43 · Updated 8 months ago
- Fluent student-teacher redteaming ☆19 · Updated 5 months ago
- Code to break Llama Guard ☆31 · Updated last year
- ☆201 · Updated 3 months ago
- A fast, effective data attribution method for neural networks in PyTorch ☆187 · Updated last month
- ☆30 · Updated 3 months ago
- ☆41 · Updated this week
- ☆31 · Updated last year
- Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet ☆29 · Updated last year
- Python package for measuring memorization in LLMs. ☆134 · Updated last month
- ☆33 · Updated last year
- Privacy backdoors ☆51 · Updated 8 months ago
- ☆19 · Updated 5 months ago
- 🧠 Starter templates for doing interpretability research ☆64 · Updated last year
- Repo for the research paper "Aligning LLMs to Be Robust Against Prompt Injection" ☆32 · Updated last month
- Tools for studying developmental interpretability in neural networks. ☆82 · Updated 3 weeks ago
- Röttger et al. (2023): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆77 · Updated last year
- A resource repository for representation engineering in large language models ☆90 · Updated 2 months ago