Inspect: A framework for large language model evaluations
☆1,800 · Updated Mar 5, 2026
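Inspect is a Python framework for LLM evaluations. For orientation before the comparison list below, here is a minimal sketch of an Inspect eval built with the `inspect_ai` package; the task name, sample contents, and model string are illustrative assumptions, not taken from this page:

```python
# Minimal inspect_ai eval sketch (illustrative assumption, not from this page).
# A Task bundles a dataset, a solver (how the model is called), and a scorer.
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def arithmetic_check():
    return Task(
        # One hand-written sample; real evals typically load datasets from files.
        dataset=[Sample(input="What is 2 + 2? Answer with just the number.", target="4")],
        # generate() asks the model for a completion.
        solver=generate(),
        # match() scores the completion against the target string.
        scorer=match(),
    )

if __name__ == "__main__":
    # Model identifier is an assumption; any provider/model Inspect supports works.
    eval(arithmetic_check(), model="openai/gpt-4o")
```

Tasks defined this way can also be run from the command line with `inspect eval`.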
Alternatives and similar repositories for inspect_ai
Users interested in inspect_ai are comparing it to the libraries listed below.
- Collection of evals for Inspect AI ☆393 · Updated this week
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆158 · Updated Feb 27, 2026
- METR Task Standard ☆177 · Updated Feb 3, 2025
- ☆54 · Updated May 28, 2024
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆134 · Updated Feb 15, 2026
- DSPy: The framework for programming—not prompting—language models ☆32,519 · Updated this week
- A library for mechanistic interpretability of GPT-style language models ☆3,133 · Updated this week
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆217 · Updated this week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,314 · Updated Feb 20, 2026
- Structured outputs for LLMs ☆12,468 · Updated Feb 25, 2026
- The LLM Evaluation Framework ☆13,904 · Updated this week
- AI Observability & Evaluation ☆8,746 · Updated this week
- ☆120 · Updated Jan 19, 2026
- A framework for few-shot evaluation of language models. ☆11,540 · Updated this week
- ☆45 · Updated Feb 13, 2026
- Easily use and train state-of-the-art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ☆3,868 · Updated May 17, 2025
- Structured Outputs ☆13,488 · Updated this week
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models. ☆1,602 · Updated Dec 20, 2025
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,108 · Updated Feb 23, 2026
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆836 · Updated this week
- Test your prompts, agents, and RAGs. AI red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude,… ☆10,821 · Updated this week
- Go ahead and axolotl questions ☆11,395 · Updated this week
- Adding guardrails to large language models. ☆6,492 · Updated this week
- ☆960 · Updated this week
- ☆237 · Updated this week
- ☆133 · Updated Oct 16, 2025
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,915 · Updated this week
- 🐢 Open-Source Evaluation & Testing library for LLM Agents ☆5,141 · Updated Feb 27, 2026
- ☆171 · Updated Jun 3, 2024
- Training Sparse Autoencoders on Language Models ☆1,233 · Updated Feb 27, 2026
- A Kubernetes sandbox environment for use with inspect_ai ☆27 · Updated Feb 26, 2026
- Argilla is a collaboration tool for AI engineers and domain experts to build high-quality datasets ☆4,884 · Updated this week
- ☆79 · Updated May 27, 2024
- A guidance language for controlling large language models. ☆21,327 · Updated Feb 13, 2026
- Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks. ☆17,929 · Updated Nov 3, 2025
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models … ☆2,693 · Updated this week
- Machine Learning for Alignment Bootcamp ☆82 · Updated Apr 27, 2022
- ☆65 · Updated Feb 20, 2026
- Sharing both practical insights and theoretical knowledge about LLM evaluation that we gathered while managing the Open LLM Leaderboard a… ☆2,065 · Updated Dec 3, 2025