Arize-ai / phoenix
AI Observability & Evaluation
☆8,049 · Updated this week
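For orientation before the comparison list: phoenix runs as a local observability app that your LLM application sends traces and evaluations to. A minimal sketch, assuming the `arize-phoenix` PyPI package and its documented `launch_app()` entry point (details may vary by version):

```python
# Minimal local Phoenix session (assumes: pip install arize-phoenix).
import phoenix as px

# Start the Phoenix UI in the background of the current process;
# instrumented LLM calls and evals sent to it show up in this session.
session = px.launch_app()
print(session.url)  # open this URL in a browser to inspect traces
```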
Alternatives and similar repositories for phoenix
Users interested in phoenix are comparing it to the libraries listed below:
- Adding guardrails to large language models. ☆6,198 · Updated last week
- Evaluation and Tracking for LLM Experiments and AI Agents ☆2,993 · Updated this week
- Supercharge Your LLM Application Evaluations 🚀 ☆11,964 · Updated this week
- Structured outputs for LLMs ☆12,045 · Updated last week
- Harness LLMs with Multi-Agent Programming ☆3,816 · Updated this week
- Superfast AI decision making and intelligent processing of multi-modal data. ☆3,122 · Updated last month
- Test your prompts, agents, and RAGs. AI Red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude,… ☆9,630 · Updated this week
- AdalFlow: The library to build & auto-optimize LLM applications. ☆3,945 · Updated last week
- The LLM Evaluation Framework ☆12,733 · Updated this week
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. ☆5,448 · Updated this week
- An awesome & curated list of best LLMOps tools for developers ☆5,517 · Updated 2 weeks ago
- LangServe 🦜️🏓 ☆2,229 · Updated 2 months ago
- 🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with Open… ☆19,644 · Updated last week
- Structured Outputs ☆13,161 · Updated 2 weeks ago
- Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ☆3,803 · Updated 7 months ago
- Zep | Examples, Integrations, & More ☆3,887 · Updated this week
- Python SDK for AI agent monitoring, LLM cost tracking, benchmarking, and more. Integrates with most LLMs and agent frameworks including C… ☆5,156 · Updated last month
- Developer-friendly OSS embedded retrieval library for multimodal AI. Search More; Manage Less. ☆8,334 · Updated this week
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,570 · Updated 7 months ago
- 🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓 ☆4,862 · Updated this week
- An open-source visual programming environment for battle-testing prompts to LLMs. ☆2,901 · Updated 2 weeks ago
- Deploy your agentic workflows to production ☆2,065 · Updated 2 weeks ago
- The open LLM Ops platform - Traces, Analytics, Evaluations, Datasets and Prompt Optimization ✨ ☆2,698 · Updated last week
- Modular Python framework for AI agents and workflows with chain-of-thought reasoning, tools, and memory. ☆2,452 · Updated last week
- DSPy: The framework for programming—not prompting—language models ☆31,066 · Updated this week
- A framework for serving and evaluating LLM routers - save LLM costs without compromising quality ☆4,489 · Updated last year
- Open source platform for AI Engineering: OpenTelemetry-native LLM Observability, GPU Monitoring, Guardrails, Evaluations, Prompt Manageme… ☆2,109 · Updated last week
- A language for constraint-guided and efficient LLM programming. ☆4,110 · Updated 7 months ago
- Build Conversational AI in minutes ⚡️ ☆11,238 · Updated this week
- The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM observability all in one place. ☆3,557 · Updated this week