UKGovernmentBEIS / inspect_ai
Inspect: A framework for large language model evaluations
☆1,727 · Updated Feb 7, 2026
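For context, here is a minimal sketch of what an Inspect evaluation looks like, based on the framework's documented Python API. The task name, file name, sample contents, and model are illustrative placeholders, not part of this listing:

```python
# A minimal Inspect AI task: one sample, a plain generate() solver,
# and a match() scorer that checks the target against the model output.
# Run (assuming this file is saved as hello.py) with:
#   inspect eval hello.py --model openai/gpt-4o
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def hello() -> Task:
    return Task(
        dataset=[Sample(input="Reply with exactly: hello", target="hello")],
        solver=generate(),
        scorer=match(),
    )
```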
Alternatives and similar repositories for inspect_ai
Users interested in inspect_ai are comparing it to the libraries listed below.
- Collection of evals for Inspect AI · ☆361 · Updated this week
- ControlArena is a collection of settings, model organisms and protocols for running control experiments. · ☆153 · Updated Feb 4, 2026
- METR Task Standard · ☆173 · Updated Feb 3, 2025
- ☆54 · Updated May 28, 2024
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. · ☆134 · Updated this week
- DSPy: The framework for programming—not prompting—language models · ☆32,156 · Updated this week
- A library for mechanistic interpretability of GPT-style language models · ☆3,073 · Updated this week
- Keeping language models honest by directly eliciting knowledge encoded in their activations. · ☆217 · Updated Jan 26, 2026
- Structured outputs for LLMs · ☆12,357 · Updated this week
- ☆118 · Updated Jan 19, 2026
- The LLM Evaluation Framework · ☆13,613 · Updated this week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends · ☆2,293 · Updated Jan 21, 2026
- A framework for few-shot evaluation of language models. · ☆11,393 · Updated this week
- ☆41 · Updated this week
- AI Observability & Evaluation · ☆8,530 · Updated this week
- Structured Outputs · ☆13,403 · Updated Feb 6, 2026
- Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… · ☆3,852 · Updated May 17, 2025
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… · ☆3,084 · Updated Jan 26, 2026
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models. · ☆1,594 · Updated Dec 20, 2025
- Go ahead and axolotl questions · ☆11,289 · Updated this week
- Test your prompts, agents, and RAGs. AI red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude,… · ☆10,339 · Updated this week
- The nnsight package enables interpreting and manipulating the internals of deep learning models. · ☆811 · Updated this week
- ☆223 · Updated Feb 6, 2026
- ☆929 · Updated Feb 4, 2026
- ☆133 · Updated Oct 16, 2025
- Adding guardrails to large language models. · ☆6,399 · Updated this week
- A Kubernetes sandbox environment for use with inspect_ai · ☆26 · Updated Feb 6, 2026
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. · ☆2,885 · Updated this week
- Training Sparse Autoencoders on Language Models · ☆1,201 · Updated this week
- 🐢 Open-Source Evaluation & Testing library for LLM Agents · ☆5,111 · Updated Feb 6, 2026
- ☆171 · Updated Jun 3, 2024
- ☆79 · Updated May 27, 2024
- Argilla is a collaboration tool for AI engineers and domain experts to build high-quality datasets · ☆4,852 · Updated this week
- A guidance language for controlling large language models. · ☆21,270 · Updated Feb 6, 2026
- Machine Learning for Alignment Bootcamp · ☆81 · Updated Apr 27, 2022
- ☆65 · Updated this week
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models … · ☆2,667 · Updated this week
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. · ☆829 · Updated Jul 29, 2025
- Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks. · ☆17,663 · Updated Nov 3, 2025