centerforaisafety / Intro_to_ML_Safety
☆66 · Updated last year
Alternatives and similar repositories for Intro_to_ML_Safety:
Users who are interested in Intro_to_ML_Safety are comparing it to the repositories listed below.
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024. ☆109 · Updated 8 months ago
- ☆34 · Updated last year
- [ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" ☆46 · Updated 3 weeks ago
- Starter kit and data loading code for the Trojan Detection Challenge NeurIPS 2022 competition ☆33 · Updated last year
- Starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. ☆84 · Updated 9 months ago
- METR Task Standard ☆142 · Updated 2 weeks ago
- ☆20 · Updated 6 months ago
- Discount jupyter. ☆48 · Updated 2 years ago
- ☆32 · Updated last year
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆66 · Updated 11 months ago
- ☆31 · Updated 4 months ago
- LLM experiments done during SERI MATS - focusing on activation steering / interpreting activation spaces ☆88 · Updated last year
- ☆52 · Updated last year
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning method… ☆101 · Updated 9 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆195 · Updated last week
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] ☆43 · Updated 9 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆94 · Updated this week
- PyTorch code corresponding to my blog series on adversarial examples and (confidence-calibrated) adversarial training. ☆68 · Updated last year
- A fast, effective data attribution method for neural networks in PyTorch ☆192 · Updated 3 months ago
- Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet ☆30 · Updated last year
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆49 · Updated 6 months ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training". ☆93 · Updated 11 months ago
- ☆205 · Updated 4 months ago
- ☆36 · Updated last year
- ☆128 · Updated 3 months ago
- ☆30 · Updated 2 months ago
- Machine Learning for Alignment Bootcamp ☆70 · Updated 2 years ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆84 · Updated last week
- we got you bro ☆35 · Updated 6 months ago
- A resource repository for representation engineering in large language models ☆102 · Updated 3 months ago