AlmogBaku / pytest-evals
A pytest plugin for running and analyzing LLM evaluation tests.
☆144 · Updated 9 months ago
Alternatives and similar repositories for pytest-evals
Users interested in pytest-evals are comparing it to the libraries listed below.
- Pydantic extension for annotating autocorrecting fields. ☆221 · Updated last year
- Python library that allows you to get structured responses in the form of Pydantic models and Python types from Anthropic, Google Vertex … ☆78 · Updated last month
- Calculate prices for calling LLM inference APIs. ☆137 · Updated last week
- Python browser sandbox. ☆181 · Updated 7 months ago
- Convert an AI Agent into an A2A server! ✨ ☆134 · Updated 3 weeks ago
- ☆77 · Updated 7 months ago
- Promptimize is a prompt engineering evaluation and testing toolkit. ☆480 · Updated last month
- The Logfire MCP Server is here! ☆121 · Updated last month
- Python SDK for Inngest: durable functions and workflows in Python, hosted anywhere ☆144 · Updated this week
- LLM prompt language based on Jinja. Banks provides tools and functions to build prompt text and chat messages from generic blueprints. I… ☆116 · Updated 3 months ago
- Work with OpenAI's streaming API with ease using Python generators ☆122 · Updated last year
- OpenTelemetry Instrumentation for AI Observability ☆700 · Updated this week
- Quickstart for Hatchet using the Python SDK with examples for common frameworks ☆43 · Updated this week
- RAG orchestration framework ⛵️ ☆201 · Updated 3 months ago
- Jambo - JSON Schema to Pydantic Converter ☆65 · Updated last month
- LLM abstractions that aren't obstructions ☆1,289 · Updated this week
- A unit test framework for prompts. ☆11 · Updated 2 years ago
- Additional packages (components, document stores, and the like) to extend the capabilities of Haystack ☆168 · Updated this week
- An AI extension for IPython that makes it work like Cursor ☆69 · Updated 10 months ago
- Library-friendly Agents