confident-ai / deepeval
The LLM Evaluation Framework
☆10,885 · Updated this week
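To illustrate what an LLM evaluation framework like deepeval automates, here is a minimal, dependency-free sketch of scoring model outputs against expected answers with a lexical metric and a pass/fail threshold. The `token_f1` and `evaluate` functions below are simplified stand-ins for illustration, not deepeval's actual API:

```python
def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1, a common lexical metric for LLM answers."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    common = len(set(pred) & set(ref))
    if common == 0:
        return 0.0
    precision = common / len(pred)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)


def evaluate(cases, metric, threshold=0.5):
    """Run a metric over (output, expected) pairs and report pass/fail."""
    return [
        {"score": (s := metric(output, expected)), "passed": s >= threshold}
        for output, expected in cases
    ]


cases = [
    ("Paris is the capital of France", "The capital of France is Paris"),
    ("I don't know", "The capital of France is Paris"),
]
results = evaluate(cases, token_f1)
```

Real frameworks replace the lexical metric with LLM-as-judge scoring (relevancy, faithfulness, hallucination) and add tracing and dataset management, but the core loop of metric over test cases with a threshold is the same.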
Alternatives and similar repositories for deepeval
Users interested in deepeval are comparing it to the libraries listed below.
- Supercharge Your LLM Application Evaluations 🚀 ☆10,746 · Updated last week
- Test your prompts, agents, and RAGs. AI red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude, … ☆8,397 · Updated this week
- 🪢 Open-source LLM engineering platform: LLM observability, metrics, evals, prompt management, playground, datasets. Integrates with Open… ☆16,196 · Updated this week
- Evaluation and Tracking for LLM Experiments and AI Agents ☆2,787 · Updated this week
- AI Observability & Evaluation ☆7,021 · Updated this week
- Adding guardrails to large language models. ☆5,680 · Updated this week
- Structured outputs for LLMs ☆11,458 · Updated this week
- GenAI Agent Framework, the Pydantic way ☆12,608 · Updated this week
- Agent Reinforcement Trainer: train multi-step agents for real-world tasks using GRPO. Give your agents on-the-job training. Reinforcement… ☆7,309 · Updated this week
- AdalFlow: The library to build & auto-optimize LLM applications. ☆3,725 · Updated this week
- Harness LLMs with Multi-Agent Programming ☆3,706 · Updated 2 weeks ago
- Convert documents to structured data effortlessly. Unstructured is an open-source ETL solution for transforming complex documents into clean… ☆12,683 · Updated this week
- Superfast AI decision making and intelligent processing of multi-modal data. ☆2,796 · Updated last month
- Build resilient language agents as graphs. ☆18,892 · Updated this week
- Build effective agents using Model Context Protocol and simple workflow patterns ☆7,317 · Updated this week
- The AI framework that adds the engineering to prompt engineering (Python/TS/Ruby/Java/C#/Rust/Go compatible) ☆5,873 · Updated this week
- DSPy: The framework for programming—not prompting—language models ☆28,412 · Updated this week
- Go ahead and axolotl questions ☆10,458 · Updated this week
- 🧊 Open-source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓 ☆4,519 · Updated this week
- Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sag… ☆29,111 · Updated this week
- SoTA production-ready AI retrieval system. Agentic Retrieval-Augmented Generation (RAG) with a RESTful API. ☆7,302 · Updated last month
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,423 · Updated 6 months ago
- A framework for serving and evaluating LLM routers - save LLM costs without compromising quality ☆4,280 · Updated last year
- Memory for AI Agents in 5 lines of code ☆7,072 · Updated this week
- Build Real-Time Knowledge Graphs for AI Agents ☆18,228 · Updated last week
- The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM observability all in one place. ☆3,169 · Updated this week
- Easily use and train state-of-the-art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ☆3,682 · Updated 4 months ago
- RAG (Retrieval-Augmented Generation) framework for building modular, open-source applications for production, by TrueFoundry ☆4,246 · Updated 3 weeks ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,888 · Updated last week
- Building AI agents, atomically ☆4,990 · Updated this week