google-deepmind / dangerous-capability-evaluations
☆54 · Updated 7 months ago
Alternatives and similar repositories for dangerous-capability-evaluations
Users interested in dangerous-capability-evaluations are comparing it to the libraries listed below.
- Contains random samples referenced in the paper "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training". ☆103 · Updated last year
- ☆74 · Updated 3 weeks ago
- METR Task Standard ☆146 · Updated 3 months ago
- ☆27 · Updated last year
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆92 · Updated this week
- Measuring the situational awareness of language models ☆34 · Updated last year
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆26 · Updated 11 months ago
- ☆22 · Updated this week
- Redwood Research's transformer interpretability tools ☆14 · Updated 3 years ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆69 · Updated 10 months ago
- we got you bro ☆35 · Updated 9 months ago
- ☆129 · Updated last month
- Sparse Autoencoder Training Library ☆49 · Updated 2 weeks ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆73 · Updated 5 months ago
- ControlArena is a suite of realistic settings, mimicking complex deployment environments, for running control evaluations. This is an alp… ☆57 · Updated this week
- ☆92 · Updated last month
- Collection of evals for Inspect AI (a minimal Inspect AI task is sketched after this list) ☆132 · Updated this week
- ☆25 · Updated 2 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆172 · Updated this week
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] ☆43 · Updated last year
- ☆10 · Updated 10 months ago
- ☆132 · Updated 6 months ago
- datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆75 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆79 · Updated last month
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning m… ☆115 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆86 · Updated 7 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆105 · Updated last year
- 🧠 Starter templates for doing interpretability research ☆70 · Updated last year
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆121 · Updated 2 years ago
- Steering vectors for transformer language models in Pytorch / Huggingface (the core technique is sketched after this list) ☆100 · Updated 2 months ago
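
Several entries above build on the `inspect_ai` framework (see "Collection of evals for Inspect AI"), where an eval is a `Task` combining a dataset, a solver, and a scorer. Below is a minimal sketch of such a task; it assumes the `inspect_ai` package is installed and a matching provider API key is configured, and the model name, prompt, and task name are arbitrary examples, not taken from any repo listed here:

```python
from inspect_ai import Task, eval, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import exact
from inspect_ai.solver import generate

@task
def addition() -> Task:
    """Toy eval: one arithmetic question scored by exact match."""
    return Task(
        dataset=[Sample(input="What is 2 + 2? Answer with the number only.",
                        target="4")],
        solver=generate(),  # single-turn: just sample a model completion
        scorer=exact(),     # normalize and compare to the target string
    )

if __name__ == "__main__":
    # Hypothetical model choice; any supported provider/model works.
    eval(addition(), model="openai/gpt-4o-mini")
```

The same file can also be run from the command line, e.g. `inspect eval addition.py --model openai/gpt-4o-mini`.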
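
The steering-vectors entry refers to the activation-steering technique: derive a direction as the difference of hidden activations on contrastive prompts, then add a scaled copy of it back during generation. The sketch below is a hand-rolled version in plain PyTorch/Transformers, not the listed library's API; the layer index, scale, and prompts are arbitrary assumptions for illustration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
layer = model.transformer.h[6]  # a middle GPT-2 block (choice is arbitrary)

def last_token_hidden(text: str) -> torch.Tensor:
    """Hidden state of the final prompt token at `layer`."""
    cache = {}
    def grab(mod, inp, out):
        cache["h"] = out[0]  # GPT-2 blocks return a tuple; [0] is hidden states
    handle = layer.register_forward_hook(grab)
    with torch.no_grad():
        model(**tok(text, return_tensors="pt"))
    handle.remove()
    return cache["h"][0, -1]

# Difference-of-means direction from one contrastive pair
# (real use averages over many such pairs).
steer = last_token_hidden("I love this.") - last_token_hidden("I hate this.")

def add_steer(mod, inp, out):
    return (out[0] + 4.0 * steer,) + out[1:]  # inject the scaled direction

handle = layer.register_forward_hook(add_steer)
ids = tok("The movie was", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=12, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0]))
handle.remove()
```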