UKGovernmentBEIS / inspect_evals
Collection of evals for Inspect AI
☆144 · Updated this week
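For context, inspect_evals tasks are typically run through Inspect AI's Python API (or the `inspect eval` CLI). Below is a minimal sketch, assuming the `gpqa_diamond` task module path and that an OpenAI API key is configured in your environment:

```python
# Minimal sketch of running an inspect_evals task via Inspect AI's Python API.
# Assumes `inspect-ai` and `inspect_evals` are installed and OPENAI_API_KEY is set;
# the task import path (inspect_evals.gpqa) is an assumption, not confirmed here.
from inspect_ai import eval
from inspect_evals.gpqa import gpqa_diamond

# Run the GPQA Diamond eval against a chosen model; logs are written to ./logs by default.
eval(gpqa_diamond, model="openai/gpt-4o", limit=10)  # limit keeps the run small
```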
Alternatives and similar repositories for inspect_evals
Users interested in inspect_evals are comparing it to the libraries listed below.
- METR Task Standard · ☆147 · Updated 4 months ago
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. · ☆94 · Updated this week
- ControlArena is a suite of realistic settings, mimicking complex deployment environments, for running control evaluations. This is an alp… · ☆60 · Updated this week
- A toolkit for describing model features and intervening on those features to steer behavior. · ☆184 · Updated 6 months ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". · ☆105 · Updated last year
- Scale your LLM-as-a-judge. · ☆232 · Updated last week
- ☆86 · Updated 3 weeks ago
- ☆54 · Updated 8 months ago
- Improving Alignment and Robustness with Circuit Breakers · ☆208 · Updated 8 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … · ☆180 · Updated last week
- Steering vectors for transformer language models in PyTorch / Hugging Face · ☆101 · Updated 3 months ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". · ☆223 · Updated 8 months ago
- ☆76 · Updated last month
- ☆131 · Updated 2 months ago
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning m… · ☆121 · Updated this week
- ☆10 · Updated 10 months ago
- Open source interpretability artefacts for R1. · ☆140 · Updated last month
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" · ☆76 · Updated last year
- ☆152 · Updated 2 months ago
- ☆171 · Updated last month
- Inference API for many LLMs and other useful tools for empirical research · ☆47 · Updated last week
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning · ☆169 · Updated this week
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use · ☆142 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). · ☆200 · Updated 5 months ago
- ☆22 · Updated this week
- ☆62 · Updated last week
- Extract full next-token probabilities via language model APIs · ☆248 · Updated last year
- ☆274 · Updated 11 months ago
- ☆132 · Updated 7 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" · ☆107 · Updated last year