centerforaisafety / Intro_to_ML_Safety
☆72 · Updated 2 years ago
Alternatives and similar repositories for Intro_to_ML_Safety
Users interested in Intro_to_ML_Safety are comparing it to the repositories listed below.
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024. ☆113 · Updated last year
- ☆34 · Updated last year
- Starter kit and data loading code for the Trojan Detection Challenge NeurIPS 2022 competition ☆33 · Updated last year
- [ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" ☆59 · Updated last month
- This is the starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. ☆90 · Updated last year
- A fast, effective data attribution method for neural networks in PyTorch ☆212 · Updated 7 months ago
- ☆22 · Updated 11 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆220 · Updated 9 months ago
- ☆40 · Updated 9 months ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆109 · Updated last year
- ☆273 · Updated last year
- A resource repository for representation engineering in large language models ☆127 · Updated 8 months ago
- Independent robustness evaluation of Improving Alignment and Robustness with Short Circuiting ☆18 · Updated 3 months ago
- ☆182 · Updated 3 months ago
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning m… ☆128 · Updated last month
- ☆231 · Updated 9 months ago
- Python package for measuring memorization in LLMs. ☆160 · Updated this week
- ☆39 · Updated 8 months ago
- ☆55 · Updated 2 years ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆207 · Updated this week
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆69 · Updated last year
- Steering vectors for transformer language models in Pytorch / Huggingface ☆115 · Updated 4 months ago
- Fluent student-teacher redteaming ☆22 · Updated 11 months ago
- LLM experiments done during SERI MATS - focusing on activation steering / interpreting activation spaces ☆95 · Updated last year
- This repository collects all relevant resources about interpretability in LLMs ☆363 · Updated 8 months ago
- ☆219 · Updated last year
- Algebraic value editing in pretrained language models ☆65 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆102 · Updated 3 weeks ago
- Aligning AI With Shared Human Values (ICLR 2021) ☆289 · Updated 2 years ago
- A library for mechanistic anomaly detection ☆22 · Updated 6 months ago