Arize-ai / phoenix
AI Observability & Evaluation
★6,962 · Updated this week
Alternatives and similar repositories for phoenix
Users interested in phoenix are comparing it to the libraries listed below.
- Evaluation and Tracking for LLM Experiments and AI Agents · ★2,787 · Updated this week
- Supercharge Your LLM Application Evaluations 🚀 · ★10,746 · Updated this week
- Superfast AI decision making and intelligent processing of multi-modal data. · ★2,780 · Updated last month
- Test your prompts, agents, and RAGs. AI Red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude,… · ★8,397 · Updated this week
- Adding guardrails to large language models. · ★5,647 · Updated last week
- AdalFlow: The library to build & auto-optimize LLM applications. · ★3,701 · Updated this week
- The LLM Evaluation Framework · ★10,742 · Updated this week
- Harness LLMs with Multi-Agent Programming · ★3,695 · Updated 2 weeks ago
- Knowledge Agents and Management in the Cloud · ★4,139 · Updated this week
- 🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with Open… · ★16,196 · Updated this week
- Convert documents to structured data effortlessly. Unstructured is an open-source ETL solution for transforming complex documents into clean… · ★12,683 · Updated this week
- A framework for serving and evaluating LLM routers - save LLM costs without compromising quality · ★4,274 · Updated last year
- Deploy your agentic workflows to production · ★2,055 · Updated 2 weeks ago
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… · ★5,423 · Updated 6 months ago
- LangServe 🦜️🏓 · ★2,160 · Updated 2 months ago
- Python SDK for AI agent monitoring, LLM cost tracking, benchmarking, and more. Integrates with most LLMs and agent frameworks including C… · ★4,884 · Updated last week
- Desktop app for prototyping and debugging LangGraph applications locally. · ★3,215 · Updated 2 months ago
- 🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓 · ★4,503 · Updated this week
- 🐢 Open-Source Evaluation & Testing library for LLM Agents · ★4,878 · Updated last week
- structured outputs for llms · ★11,421 · Updated last week
- Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… · ★3,662 · Updated 4 months ago
- Build applications that make decisions (chatbots, agents, simulations, etc...). Monitor, trace, persist, and execute on your own infrastr… · ★1,792 · Updated this week
- Modular Python framework for AI agents and workflows with chain-of-thought reasoning, tools, and memory. · ★2,371 · Updated last week
- Build Conversational AI in minutes ⚡️ · ★10,639 · Updated this week
- Langtrace 🔍 is an open-source, Open Telemetry based end-to-end observability tool for LLM applications, providing real-time tracing, ev… · ★1,024 · Updated 4 months ago
- Fast, Accurate, Lightweight Python library to make State of the Art Embedding · ★2,371 · Updated 3 weeks ago
- Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chro… · ★2,932 · Updated last year
- The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM observability all in one place. · ★3,154 · Updated this week
- The AI framework that adds the engineering to prompt engineering (Python/TS/Ruby/Java/C#/Rust/Go compatible) · ★5,785 · Updated last week
- The open LLM Ops platform - Traces, Analytics, Evaluations, Datasets and Prompt Optimization ✨ · ★2,466 · Updated this week