UKGovernmentBEIS / inspect_evals
Collection of evals for Inspect AI
☆297 · Updated this week
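inspect_evals packages its benchmarks as tasks for the Inspect AI framework. Below is a minimal sketch of running one of them from Python, assuming the inspect-ai and inspect_evals packages are installed and an API key is configured; the task name `inspect_evals/gpqa_diamond`, the model string, and the sample limit are illustrative, so check the repository's README for the tasks and options it actually ships.

```python
# Minimal sketch: run one inspect_evals task through Inspect AI's Python API.
# Assumes inspect-ai and the inspect_evals package are installed and an
# OPENAI_API_KEY is set; task name, model string, and limit are illustrative.
from inspect_ai import eval

# Tasks from inspect_evals are addressed by registry name, e.g.
# "inspect_evals/gpqa_diamond" (assumed here for illustration).
logs = eval(
    "inspect_evals/gpqa_diamond",
    model="openai/gpt-4o",
    limit=10,  # evaluate only the first 10 samples while smoke-testing
)

# eval() returns a list of EvalLog objects, one per task run.
for log in logs:
    print(log.eval.task, log.status)
```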
Alternatives and similar repositories for inspect_evals
Users who are interested in inspect_evals are comparing it to the libraries listed below.
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆306 · Updated 5 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆245 · Updated last year
- METR Task Standard ☆168 · Updated 10 months ago
- ☆191 · Updated this week
- A toolkit for describing model features and intervening on those features to steer behavior. ☆216 · Updated last year
- Persona Vectors: Monitoring and Controlling Character Traits in Language Models ☆296 · Updated 4 months ago
- Steering vectors for transformer language models in Pytorch / Huggingface ☆130 · Updated 9 months ago
- Open source interpretability artefacts for R1. ☆164 · Updated 7 months ago
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆132 · Updated this week
- ☆229 · Updated this week
- ☆119 · Updated last month
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆122 · Updated last year
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning method. ☆156 · Updated 6 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆319 · Updated last month
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use ☆172 · Updated last year
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆120 · Updated 3 weeks ago
- ☆66 · Updated 2 months ago
- Inference-time scaling for LLMs-as-a-judge. ☆314 · Updated last month
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆97 · Updated 2 years ago
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆112 · Updated this week
- Automatic evals for LLMs ☆559 · Updated 5 months ago
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆95 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆195 · Updated last year
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆231 · Updated this week
- A Comprehensive Assessment of Trustworthiness in GPT Models ☆309 · Updated last year
- ☆62 · Updated 2 months ago
- LLM experiments done during SERI MATS - focusing on activation steering / interpreting activation spaces ☆100 · Updated 2 years ago
- WorkArena: How Capable are Web Agents at Solving Common Knowledge Work Tasks? ☆220 · Updated last week
- Open source interpretability platform 🧠 ☆515 · Updated this week
- ☆300 · Updated 4 months ago