UKGovernmentBEIS / inspect_ai
Inspect: A framework for large language model evaluations
☆1,360 · Updated last week
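Since the listing gives only Inspect's one-line description, a minimal task sketch helps situate it against the alternatives below. This follows Inspect's documented Task/Sample/solver/scorer API; the sample data, file name, and model are illustrative placeholders, not part of the listing.

```python
# minimal_eval.py -- a minimal Inspect AI task (illustrative sample data)
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import exact
from inspect_ai.solver import generate

@task
def addition():
    # One-sample dataset: the model's completion is scored by
    # exact match against the target string.
    return Task(
        dataset=[Sample(input="What is 2 + 2? Answer with the number only.", target="4")],
        solver=generate(),
        scorer=exact(),
    )
```

Assuming a configured model provider, a task like this would typically be run from the CLI with something like `inspect eval minimal_eval.py --model openai/gpt-4o`.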
Alternatives and similar repositories for inspect_ai
Users interested in inspect_ai are comparing it to the libraries listed below.
- Collection of evals for Inspect AI ☆250 · Updated this week
- A benchmark to evaluate language models on questions I've previously asked them to solve. ☆1,031 · Updated 5 months ago
- METR Task Standard ☆162 · Updated 8 months ago
- A library for making RepE control vectors ☆647 · Updated 2 weeks ago
- Sharing both practical insights and theoretical knowledge about LLM evaluation that we gathered while managing the Open LLM Leaderboard a… ☆1,662 · Updated this week
- A library for generative social simulation ☆1,032 · Updated last week
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆996 · Updated 5 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,987 · Updated this week
- Utilities for decoding deep representations (like sentence embeddings) back to text ☆949 · Updated 2 months ago
- Weave is a toolkit for developing AI-powered applications, built by Weights & Biases. ☆1,000 · Updated last week
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆115 · Updated this week
- End-to-end Generative Optimization for AI Agents ☆658 · Updated last month
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆99 · Updated this week
- Code and Data for Tau-Bench ☆876 · Updated last month
- Inference-time scaling for LLMs-as-a-judge. ☆300 · Updated last week
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆816 · Updated 2 months ago
- Training Sparse Autoencoders on Language Models ☆985 · Updated this week
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆676 · Updated this week
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆815 · Updated last month
- DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. 🤖💤 ☆1,066 · Updated 8 months ago
- A small library of LLM judges ☆288 · Updated 2 months ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,663 · Updated this week
- A tool for evaluating LLMs ☆423 · Updated last year
- Automatically evaluate your LLMs in Google Colab ☆661 · Updated last year
- A library for mechanistic interpretability of GPT-style language models (see the usage sketch after this list) ☆2,642 · Updated this week
- Open source interpretability platform 🧠 ☆442 · Updated this week
- Benchmarks for the Evaluation of LLM Supervision ☆32 · Updated 3 months ago
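The mechanistic-interpretability entry above does not name the repository, but its description matches TransformerLens's tagline, so here is a sketch of that library's cached-activation workflow under the assumption that this is the repo in question; the model name and hook key are illustrative.

```python
# A sketch of the HookedTransformer workflow from transformer_lens,
# assuming the mechanistic-interpretability entry above is TransformerLens.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")  # illustrative model

# run_with_cache returns logits plus a cache of intermediate
# activations, keyed by hook name.
logits, cache = model.run_with_cache("The Eiffel Tower is in")

# Layer-0 attention pattern: shape [batch, head, query_pos, key_pos]
print(cache["blocks.0.attn.hook_pattern"].shape)
```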