UKGovernmentBEIS / inspect_ai
Inspect: A framework for large language model evaluations
☆1,554 · Updated last week
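For context, a minimal Inspect eval follows the hello-world pattern from the project's documentation; the sketch below assumes the published `inspect_ai` Python API, and the task name and scorer choice are illustrative:

```python
# Minimal sketch of an Inspect AI eval task (hello-world pattern).
# Task name and scorer choice are illustrative, not prescriptive.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import exact
from inspect_ai.solver import generate

@task
def hello_world():
    return Task(
        dataset=[Sample(input="Just reply with 'Hello World'", target="Hello World")],
        solver=[generate()],  # single model generation step
        scorer=exact(),       # exact-match scoring against the target
    )
```

A task like this is typically run from the CLI, e.g. `inspect eval hello_world.py --model openai/gpt-4o` (the model name here is only an example).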
Alternatives and similar repositories for inspect_ai
Users interested in inspect_ai are comparing it to the libraries listed below.
- Collection of evals for Inspect AI ☆297 · Updated last week
- A benchmark to evaluate language models on questions I've previously asked them to solve. ☆1,034 · Updated 7 months ago
- Sharing both practical insights and theoretical knowledge about LLM evaluation that we gathered while managing the Open LLM Leaderboard a… ☆1,984 · Updated last week
- A library for making RepE control vectors ☆670 · Updated 2 months ago
- METR Task Standard ☆168 · Updated 10 months ago
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆1,018 · Updated 7 months ago
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆134 · Updated this week
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆829 · Updated 4 months ago
- ☆37 · Updated last week
- open source interpretability platform 🧠 ☆515 · Updated last week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,185 · Updated this week
- Weave is a toolkit for developing AI-powered applications, built by Weights & Biases. ☆1,025 · Updated last week
- Inference-time scaling for LLMs-as-a-judge. ☆316 · Updated last month
- utilities for decoding deep representations (like sentence embeddings) back to text ☆1,020 · Updated 4 months ago
- Training Sparse Autoencoders on Language Models ☆1,104 · Updated last week
- Synthetic data curation for post-training and structured data extraction ☆1,572 · Updated 4 months ago
- LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side-by-side, developed by the PAIR t… ☆502 · Updated 10 months ago
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆836 · Updated last month
- A small library of LLM judges ☆306 · Updated 4 months ago
- A tool for evaluating LLMs ☆428 · Updated last year
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆418 · Updated this week
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆716 · Updated last week
- Automatically evaluate your LLMs in Google Colab ☆675 · Updated last year
- Code and Data for Tau-Bench ☆987 · Updated 3 months ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,757 · Updated 2 weeks ago
- ☆835 · Updated last month
- End-to-end Generative Optimization for AI Agents ☆682 · Updated this week
- AIDE: AI-Driven Exploration in the Space of Code. The machine learning engineering agent that automates AI R&D. ☆1,084 · Updated last month
- Guide for fine-tuning Llama/Mistral/CodeLlama models and more ☆638 · Updated last month
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards ☆1,262 · Updated last month