UKGovernmentBEIS / inspect_ai
Inspect: A framework for large language model evaluations
☆1,431 · Updated this week
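For orientation, a minimal sketch of what an Inspect AI eval looks like, assuming the documented `inspect_ai` API (`Task`, `Sample`, `generate`, `match`); the question, target, and model name are illustrative:

```python
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.solver import generate
from inspect_ai.scorer import match


@task
def hello_world():
    # One-sample dataset: the model answers the input and the output
    # is scored against the expected target string.
    return Task(
        dataset=[Sample(input="What is the capital of France?", target="Paris")],
        solver=generate(),
        scorer=match(),
    )


if __name__ == "__main__":
    # Any provider/model supported by Inspect can be substituted here.
    eval(hello_world(), model="openai/gpt-4o")
```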
Alternatives and similar repositories for inspect_ai
Users interested in inspect_ai are comparing it to the libraries listed below.
- Collection of evals for Inspect AI · ☆272 · Updated this week
- METR Task Standard · ☆163 · Updated 8 months ago
- A benchmark to evaluate language models on questions I've previously asked them to solve. · ☆1,033 · Updated 6 months ago
- A library for making RepE control vectors · ☆655 · Updated last month
- utilities for decoding deep representations (like sentence embeddings) back to text · ☆961 · Updated 2 months ago
- Evaluate your LLM's response with Prometheus and GPT4 💯 · ☆1,006 · Updated 6 months ago
- Inference-time scaling for LLMs-as-a-judge. · ☆304 · Updated last month
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends · ☆2,044 · Updated this week
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. · ☆115 · Updated this week
- Sharing both practical insights and theoretical knowledge about LLM evaluation that we gathered while managing the Open LLM Leaderboard a… · ☆1,732 · Updated 3 weeks ago
- open source interpretability platform 🧠 · ☆466 · Updated this week
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. · ☆118 · Updated last week
- LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side-by-side, developed by the PAIR t… · ☆490 · Updated 8 months ago
- DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. 🤖💤 · ☆1,073 · Updated 8 months ago
- ☆764 · Updated last month
- End-to-end Generative Optimization for AI Agents · ☆667 · Updated 2 months ago
- Stanford NLP Python library for understanding and improving PyTorch models via interventions · ☆821 · Updated 2 weeks ago
- Code and Data for Tau-Bench · ☆915 · Updated 2 months ago
- Weave is a toolkit for developing AI-powered applications, built by Weights & Biases. · ☆1,012 · Updated this week
- Automatically evaluate your LLMs in Google Colab · ☆664 · Updated last year
- Training Sparse Autoencoders on Language Models · ☆1,015 · Updated this week
- A tool for evaluating LLMs · ☆425 · Updated last year
- A small library of LLM judges · ☆296 · Updated 3 months ago
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … · ☆733 · Updated this week
- Benchmarks for the Evaluation of LLM Supervision · ☆32 · Updated 3 weeks ago
- Sparsify transformers with SAEs and transcoders · ☆647 · Updated this week
- Fast lexical search implementing BM25 in Python using Numpy, Numba and Scipy · ☆1,368 · Updated last month
- The nnsight package enables interpreting and manipulating the internals of deep learned models. · ☆685 · Updated last week
- AIDE: AI-Driven Exploration in the Space of Code. The machine learning engineering agent that automates AI R&D. · ☆1,064 · Updated last month
- ☆35 · Updated last month