AutoEvals is a tool for quickly and easily evaluating AI model outputs using best practices.
☆833 · Mar 9, 2026 · Updated last week
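For context, a minimal usage sketch based on the quick-start in the autoevals README (the `Factuality` scorer is LLM-as-a-judge, so it assumes an `OPENAI_API_KEY` environment variable is set):

```ts
import { Factuality } from "autoevals";

// Score how factually consistent the model output is with the expected answer.
const result = await Factuality({
  input: "Which country has the highest population?",
  output: "People's Republic of China",
  expected: "China",
});

// result.score is a number between 0 and 1.
console.log(`Factuality score: ${result.score}`);
```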
Alternatives and similar repositories for autoevals
Users interested in autoevals are comparing it to the libraries listed below.
- Evaluate your LLM-powered apps with TypeScript ☆1,410 · Feb 20, 2026 · Updated last month
- JavaScript Tracing & Evals library for Braintrust ☆10 · Updated this week
- The TypeScript LLM Evaluation Library ☆155 · Nov 11, 2025 · Updated 4 months ago
- ☆387 · Mar 14, 2026 · Updated last week
- Prompt design using JSX. ☆2,775 · Oct 15, 2025 · Updated 5 months ago
- A vitest extension for running evals. ☆135 · Updated this week
- Structured outputs for LLMs ☆12,551 · Updated this week
- Test your prompts, agents, and RAGs. Red teaming/pentesting/vulnerability scanning for AI. Compare performance of GPT, Claude, Gemini, Ll… ☆17,709 · Updated this week
- The LLM Evaluation Framework ☆14,115 · Mar 13, 2026 · Updated last week
- Evals meant to evaluate language models' ability to reason over long contexts. ☆10 · Sep 12, 2024 · Updated last year
- Python SDK for running evaluations on LLM-generated responses ☆298 · Jun 6, 2025 · Updated 9 months ago
- 🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓 ☆5,270 · Updated this week
- OTEL ingestion running on Cloudflare Workers ☆49 · Apr 8, 2025 · Updated 11 months ago
- The pretty much "official" DSPy framework for TypeScript ☆2,474 · Updated this week
- A lightweight React Hook intended mainly for AI chat applications, for smoothly sticking to the bottom of messages ☆698 · Feb 6, 2026 · Updated last month
- DSPy: The framework for programming—not prompting—language models ☆32,853 · Updated this week
- Laminar - open-source observability platform purpose-built for AI agents. YC S24. ☆2,678 · Updated this week
- 🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with Open… ☆23,441 · Updated this week
- AI Observability & Evaluation ☆8,904 · Updated this week
- A Workers AI provider for the Vercel AI SDK ☆115 · Mar 18, 2025 · Updated last year
- The AI framework that adds the engineering to prompt engineering (Python/TS/Ruby/Java/C#/Rust/Go compatible) ☆7,758 · Mar 14, 2026 · Updated last week
- From the team behind Gatsby, Mastra is a framework for building AI-powered applications and agents with a modern TypeScript stack. ☆22,064 · Updated this week
- Structured Outputs ☆13,564 · Mar 9, 2026 · Updated last week
- Supercharge Your LLM Application Evaluations 🚀 ☆13,008 · Feb 24, 2026 · Updated 3 weeks ago
- Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks. ☆18,014 · Nov 3, 2025 · Updated 4 months ago
- The AI Browser Automation Framework ☆21,583 · Updated this week
- The platform for LLM evaluations and AI agent testing ☆3,141 · Updated this week
- ☆80 · Jun 5, 2024 · Updated last year
- Wow! ☆12 · Oct 25, 2024 · Updated last year
- The leading workflow orchestration platform. Run stateful step functions and AI workflows on serverless, servers, or the edge. ☆5,058 · Updated this week
- Developer toolkit that makes it simple to build with the Workers AI platform. ☆181 · Oct 1, 2024 · Updated last year
- Supercharge your local development ☆367 · Oct 8, 2025 · Updated 5 months ago
- An ambient intelligence library ☆6,100 · Mar 14, 2026 · Updated last week
- AI Hero's open-source examples and course material. Learn AI Engineering with a single repo. ☆1,377 · Jul 22, 2025 · Updated 7 months ago
- Using various instructor clients to evaluate the quality and capabilities of extractions and reasoning. ☆51 · Sep 29, 2024 · Updated last year
- Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing a… ☆39,597 · Updated this week
- The AI Toolkit for TypeScript. From the creators of Next.js, the AI SDK is a free open-source library for building AI-powered application… ☆22,639 · Updated this week
- Readymade evaluators for agent trajectories ☆505 · Updated this week
- PartyKit, for Workers ☆1,021 · Mar 5, 2026 · Updated 2 weeks ago