A tool for evaluating LLMs
☆428 · Updated 2 weeks ago (Mar 15, 2026)
Alternatives and similar repositories for bench
Users interested in bench are comparing it to the libraries listed below.
- Python SDK for running evaluations on LLM-generated responses · ☆297 · Updated 9 months ago (Jun 6, 2025)
- Supercharge Your LLM Application Evaluations 🚀 · ☆13,106 · Updated last month (Feb 24, 2026)
- Fiddler Auditor is a tool to evaluate language models. · ☆189 · Updated 2 years ago (Mar 11, 2024)
- The LLM Evaluation Framework · ☆14,227 · Updated last week (Mar 20, 2026)
- AI Observability & Evaluation · ☆9,020 · Updated this week
- Continuous Integration for LLM-powered applications · ☆255 · Updated 2 years ago (Aug 11, 2023)
- An open-source visual programming environment for battle-testing prompts to LLMs. · ☆2,964 · Updated 2 months ago (Jan 2, 2026)
- Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chro… · ☆3,029 · Updated last month (Feb 11, 2026)
- Adding guardrails to large language models. · ☆6,585 · Updated this week
- Hosted embedding platform to discover, evaluate, and retrieve embeddings · ☆73 · Updated 2 years ago (Sep 21, 2023)
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… · ☆3,143 · Updated this week
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. · ☆5,854 · Updated this week
- Automated Evaluation of RAG Systems · ☆699 · Updated last year (Mar 28, 2025)
- [ICLR 2025 Spotlight] An open-source LLM judge for evaluating LLM-generated answers. · ☆424 · Updated last year (Feb 11, 2025)
- DSPy: The framework for programming—not prompting—language models · ☆33,038 · Updated last week (Mar 22, 2026)
- Go ahead and axolotl questions · ☆11,508 · Updated this week
- Retrieval Augmented Generation (RAG) chatbot powered by Weaviate · ☆7,617 · Updated 8 months ago (Jul 14, 2025)
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… · ☆289 · Updated last week (Mar 18, 2026)
- Evaluation and Tracking for LLM Experiments and AI Agents · ☆3,211 · Updated this week
- A guidance language for controlling large language models. · ☆21,362 · Updated last week (Mar 18, 2026)
- Promptimize is a prompt engineering evaluation and testing toolkit. · ☆494 · Updated last week (Mar 16, 2026)
- Evaluate your LLM's response with Prometheus and GPT4 💯 · ☆1,060 · Updated 11 months ago (Apr 25, 2025)
- Python client library for improving your LLM app accuracy · ☆96 · Updated last year (Feb 11, 2025)
- A framework for few-shot evaluation of language models. · ☆11,802 · Updated last week (Mar 18, 2026)
- The papers are organized according to our survey: Evaluating Large Language Models: A Comprehensive Survey. · ☆795 · Updated last year (May 8, 2024)
- Sample notebooks and prompts for LLM evaluation · ☆161 · Updated 4 months ago (Nov 2, 2025)
- Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks. · ☆18,094 · Updated this week
- LLM Prompt Injection Detector · ☆1,451 · Updated last year (Aug 7, 2024)
- 🐢 Open-Source Evaluation & Testing library for LLM Agents · ☆5,205 · Updated this week
- Structured outputs for LLMs · ☆12,589 · Updated this week
- Structured Outputs · ☆13,588 · Updated last week (Mar 21, 2026)
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models … · ☆2,722 · Updated this week
- Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls) · ☆12,786 · Updated this week
- A Python command-line tool to download & manage MLX AI models from Hugging Face. · ☆19 · Updated last year (Aug 26, 2024)
- 🪢 Open-source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with Open… · ☆23,868 · Updated this week
- VectorFlow is a high-volume vector embedding pipeline that ingests raw data, transforms it into vectors and writes it to a vector DB of y… · ☆700 · Updated last year (May 16, 2024)
- Data-Driven Evaluation for LLM-Powered Applications · ☆515 · Updated last year (Jan 22, 2025)
- Scalable Meta-Evaluation of LLMs as Evaluators · ☆43 · Updated 2 years ago (Feb 15, 2024)
- An Open-source Framework for Data-centric, Self-evolving Autonomous Language Agents · ☆5,891 · Updated last year (Sep 26, 2024)