UKGovernmentBEIS / inspect_evals
Collection of evals for Inspect AI
☆ 211 · Updated this week
Alternatives and similar repositories for inspect_evals
Users interested in inspect_evals are comparing it to the libraries listed below.
- METR Task Standard ☆ 158 · Updated 6 months ago
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆ 88 · Updated this week
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆ 108 · Updated last week
- ☆ 117 · Updated 2 weeks ago
- ☆ 98 · Updated 4 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆ 227 · Updated 11 months ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆ 112 · Updated last year
- ☆ 195 · Updated 5 months ago
- A toolkit for describing model features and intervening on those features to steer behavior. ☆ 198 · Updated 9 months ago
- Inference-time scaling for LLMs-as-a-judge. ☆ 276 · Updated last month
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆ 257 · Updated 2 months ago
- ☆ 56 · Updated 3 weeks ago
- Open source interpretability artefacts for R1. ☆ 158 · Updated 4 months ago
- Red-Teaming Language Models with DSPy ☆ 208 · Updated 6 months ago
- ☆ 101 · Updated 5 months ago
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning m… ☆ 135 · Updated 2 months ago
- A Comprehensive Assessment of Trustworthiness in GPT Models ☆ 302 · Updated 11 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆ 122 · Updated 6 months ago
- Inference API for many LLMs and other useful tools for empirical research ☆ 66 · Updated last week
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆ 89 · Updated last year
- ☆ 291 · Updated last year
- ☆ 45 · Updated last year
- ☆ 139 · Updated last week
- ☆ 31 · Updated last week
- ☆ 163 · Updated 9 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆ 245 · Updated 2 weeks ago
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use ☆ 158 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆ 212 · Updated 8 months ago
- TapeAgents is a framework that facilitates all stages of the LLM Agent development lifecycle ☆ 294 · Updated this week
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆ 95 · Updated last year