UKGovernmentBEIS / inspect_ai
Inspect: A framework for large language model evaluations
☆1,274 · Updated this week
Alternatives and similar repositories for inspect_ai
Users interested in inspect_ai are comparing it to the libraries listed below.
- A benchmark to evaluate language models on questions I've previously asked them to solve. ☆1,029 · Updated 4 months ago
- Collection of evals for Inspect AI ☆211 · Updated this week
- A library for making RepE control vectors ☆626 · Updated 7 months ago
- METR Task Standard ☆159 · Updated 6 months ago
- Utilities for decoding deep representations (like sentence embeddings) back to text ☆932 · Updated 3 weeks ago
- Sharing both practical insights and theoretical knowledge about LLM evaluation that we gathered while managing the Open LLM Leaderboard a… ☆1,545 · Updated 7 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,851 · Updated this week
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆981 · Updated 4 months ago
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆813 · Updated last month
- System 2 Reasoning Link Collection ☆853 · Updated 5 months ago
- DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. 🤖💤 ☆1,053 · Updated 6 months ago
- A small library of LLM judges ☆271 · Updated last month
- Inference-time scaling for LLMs-as-a-judge ☆283 · Updated last month
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆799 · Updated last week
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆88 · Updated this week
- AIDE: AI-Driven Exploration in the Space of Code. The machine learning engineering agent that automates AI R&D. ☆1,003 · Updated 3 weeks ago
- Official pre-processing library for Mistral models ☆784 · Updated this week
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆651 · Updated this week
- Code and Data for Tau-Bench ☆791 · Updated last month
- End-to-end Generative Optimization for AI Agents ☆642 · Updated 2 weeks ago
- Training Sparse Autoencoders on Language Models ☆935 · Updated this week
- ☆31 · Updated last week
- Automatically evaluate your LLMs in Google Colab ☆655 · Updated last year
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,558 · Updated 2 weeks ago
- ☆685 · Updated this week
- A library for prompt engineering and optimization (SAMMO = Structure-aware Multi-Objective Metaprompt Optimization) ☆725 · Updated 2 months ago
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,510 · Updated 6 months ago
- Sparsify transformers with SAEs and transcoders ☆609 · Updated last week
- A tool for evaluating LLMs ☆424 · Updated last year
- Open-source interpretability platform 🧠 ☆356 · Updated last week