centerforaisafety / Intro_to_ML_Safety
☆74 · Updated 2 years ago
Alternatives and similar repositories for Intro_to_ML_Safety
Users interested in Intro_to_ML_Safety are comparing it to the libraries listed below.
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024. ☆114 · Updated last year
- Starter kit and data loading code for the Trojan Detection Challenge NeurIPS 2022 competition. ☆33 · Updated 2 years ago
- This is the starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. ☆89 · Updated last year
- A fast, effective data attribution method for neural networks in PyTorch. ☆217 · Updated 10 months ago
- ☆34 · Updated 2 years ago
- [ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs". ☆61 · Updated 3 months ago
- ☆23 · Updated last year
- ☆43 · Updated 11 months ago
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆93 · Updated this week
- Independent robustness evaluation of "Improving Alignment and Robustness with Short Circuiting". ☆18 · Updated 5 months ago
- ☆57 · Updated 2 years ago
- ☆31 · Updated 2 years ago
- Improving Alignment and Robustness with Circuit Breakers. ☆233 · Updated 11 months ago
- OS-Harm: A Benchmark for Measuring Safety of Computer Use Agents [NeurIPS 2025 Spotlight]. ☆29 · Updated this week
- Python package for measuring memorization in LLMs; a minimal version of a standard memorization test is sketched after this list. ☆166 · Updated 2 months ago
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning method. ☆138 · Updated 3 months ago
- ☆32 · Updated 4 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives. ☆70 · Updated last year
- Fluent student-teacher redteaming. ☆22 · Updated last year
- A resource repository for representation engineering in large language models. ☆135 · Updated 10 months ago
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023]; the random-search idea is sketched after this list. ☆43 · Updated last year
- Contains random samples referenced in the paper "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training". ☆115 · Updated last year
- Code to break Llama Guard. ☆32 · Updated last year
- Code for the paper "The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence". ☆16 · Updated last month
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models". ☆111 · Updated 6 months ago
- ☆107 · Updated 7 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face; a hook-based sketch of the technique appears after this list. ☆124 · Updated 7 months ago
- Official repository for CMU Machine Learning Department's 10732: Robustness and Adaptivity in Shifting Environments. ☆74 · Updated 2 years ago
- Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet. ☆32 · Updated 2 years ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods. ☆128 · Updated 2 months ago
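For the memorization-measurement entry above, here is a minimal sketch of a standard discoverable-memorization test: greedily decode from a prefix of a candidate training sample and exact-match against the true continuation. This assumes GPT-2 via Hugging Face transformers, not the package's actual interface, and the prefix/suffix lengths are illustrative.

```python
# Sketch: discoverable-memorization test; NOT the package's own API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def is_memorized(sample: str, prefix_len: int = 32, suffix_len: int = 32) -> bool:
    """Greedy-decode suffix_len tokens from a prefix_len-token prefix of
    `sample` and check exact match against the sample's true continuation."""
    ids = tok(sample, return_tensors="pt").input_ids[0]
    if ids.numel() < prefix_len + suffix_len:
        return False  # sample too short for this prefix/suffix split
    prefix = ids[:prefix_len].unsqueeze(0)
    target = ids[prefix_len:prefix_len + suffix_len]
    with torch.no_grad():
        gen = model.generate(prefix, max_new_tokens=suffix_len,
                             do_sample=False, pad_token_id=tok.eos_token_id)
    return torch.equal(gen[0, prefix_len:prefix_len + suffix_len], target)

# Repetition is usually completed greedily, so this should register as memorized.
print(is_memorized("The quick brown fox jumps over the lazy dog. " * 10))
```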
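The "Simple Random Search" entry optimizes an adversarial suffix by randomly mutating tokens and keeping any mutation that raises the log-probability of a target response. A toy, self-contained version of that loop on GPT-2 is below; the prompt, target token, suffix length, and iteration budget are all illustrative assumptions, not the paper's setup.

```python
# Toy random-search suffix optimization: mutate one suffix token at a time,
# keep the mutation if it raises the target token's log-probability.
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt_ids = tok("The weather today is", return_tensors="pt").input_ids[0]
target_id = tok(" sunny", add_special_tokens=False).input_ids[0]  # toy target

def score(suffix_ids: torch.Tensor) -> float:
    """Log-probability of the target token following prompt + suffix."""
    ids = torch.cat([prompt_ids, suffix_ids]).unsqueeze(0)
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    return torch.log_softmax(logits, dim=-1)[target_id].item()

suffix = torch.randint(0, tok.vocab_size, (10,))  # 10 random suffix tokens
best = score(suffix)
for _ in range(200):  # iteration budget is arbitrary for this demo
    cand = suffix.clone()
    cand[random.randrange(len(cand))] = random.randrange(tok.vocab_size)
    if (s := score(cand)) > best:
        suffix, best = cand, s
print(best, tok.decode(suffix))
```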
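Finally, for the steering-vectors entry: the core idea is to add a direction, computed as a difference of mean activations between contrastive prompts, into the residual stream at generation time. The sketch below does this with a plain PyTorch forward hook rather than the package's own API; the model, layer index, contrast prompts, and scale are assumptions for illustration.

```python
# Sketch of activation steering via a forward hook; not the package's API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
LAYER = 6  # arbitrary mid-network block for this demo

def mean_hidden(text: str) -> torch.Tensor:
    """Mean residual-stream activation after block LAYER for a prompt."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[LAYER + 1][0].mean(dim=0)  # +1: index 0 is embeddings

# Steering vector = difference of means over a contrastive prompt pair.
steer = mean_hidden("I love this, it is wonderful.") - mean_hidden("I hate this, it is awful.")

def add_steer(module, inputs, output):
    # GPT-2 blocks return a tuple; shift the hidden states along `steer`.
    return (output[0] + 4.0 * steer,) + output[1:]  # 4.0: arbitrary strength

handle = model.transformer.h[LAYER].register_forward_hook(add_steer)
ids = tok("The movie was", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=20, do_sample=False,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0]))
handle.remove()  # detach the hook so later generations are unsteered
```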