IBM / eval-assist
EvalAssist is an open-source project that simplifies using large language models as evaluators (LLM-as-a-Judge) of other models' outputs by helping users iteratively refine evaluation criteria in a web-based interface.
☆92 · Updated last month
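EvalAssist is built around the LLM-as-a-Judge pattern: one model rates another model's output against user-defined criteria, and the criteria are refined iteratively until the verdicts match human judgment. The sketch below is a generic, minimal illustration of that pattern, not EvalAssist's actual API; the `call_llm` stub and the criterion wording are placeholders you would replace with a real model client and your own refined criteria.

```python
# Generic LLM-as-a-Judge sketch (illustrative only; not the EvalAssist API).

def call_llm(prompt: str) -> str:
    # Placeholder: replace with a real call to your model provider.
    # Returns a fixed verdict here just so the sketch runs end to end.
    return "yes"

def judge(output: str, criterion: str, options: list[str]) -> str:
    """Ask the judge model to rate `output` against a single criterion."""
    prompt = (
        "You are an impartial evaluator.\n"
        f"Criterion: {criterion}\n"
        f"Allowed verdicts: {', '.join(options)}\n"
        f"Text to evaluate:\n{output}\n"
        "Reply with exactly one of the allowed verdicts."
    )
    verdict = call_llm(prompt).strip()
    return verdict if verdict in options else "invalid"

# Iterate on the criterion wording until the verdicts match your expectations.
criterion = "The answer is faithful to the provided context and contains no unsupported claims."
print(judge("The capital of France is Paris.", criterion, ["yes", "no"]))
```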
Alternatives and similar repositories for eval-assist
Users interested in eval-assist are comparing it to the libraries listed below
- A framework for fine-tuning retrieval-augmented generation (RAG) systems. ☆137 · Updated 2 weeks ago
- This repository stems from our paper, “Cataloguing LLM Evaluations”, and serves as a living, collaborative catalogue of LLM evaluation fr… ☆19 · Updated 2 years ago
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆164 · Updated 2 weeks ago
- LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments. ☆248 · Updated 3 weeks ago
- ☆147 · Updated last year
- SynthGenAI: a package for generating synthetic datasets using LLMs. ☆54 · Updated last month
- Low-latency, high-accuracy custom query routers for humans and agents. Built by Prithivi Da. ☆119 · Updated 9 months ago
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents using an Elo ranker. ☆125 · Updated 2 months ago
- Granite Snack Cookbook -- easily consumable recipes (Python notebooks) that showcase the capabilities of the Granite models. ☆335 · Updated 3 weeks ago
- The Granite Guardian models are designed to detect risks in prompts and responses. ☆126 · Updated 3 months ago
- 🦄 Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the world's largest catalog of tools and data … ☆212 · Updated this week
- ☆104 · Updated 9 months ago
- ☆25 · Updated 8 months ago
- ☆20 · Updated last year
- The Agent Lifecycle Toolkit (ALTK) is a library of components to help agent builders improve their agents with minimal integration effort … ☆103 · Updated this week
- A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings. ☆173 · Updated last week
- Research repository on interfacing LLMs with Weaviate APIs. Inspired by the Berkeley Gorilla LLM. ☆140 · Updated 4 months ago
- A collection of LlamaIndex Workflows-powered agents that convert natural language to Cypher queries designed to retrieve information from… ☆96 · Updated 10 months ago
- Simple UI for debugging correlations of text embeddings. ☆306 · Updated 7 months ago
- 🤗 Benchmark Large Language Models Reliably On Your Data. ☆423 · Updated last week
- A method for steering LLMs to better follow instructions. ☆74 · Updated 5 months ago
- Official implementation of "Affordable AI Assistants with Knowledge Graph of Thoughts". ☆205 · Updated 2 weeks ago
- ☆39 · Updated last year
- ARAGOG (Advanced RAG Output Grading): exploring and comparing various Retrieval-Augmented Generation (RAG) techniques on AI research paper… ☆113 · Updated last year
- A practical RAG app where you can download and chat with a GitHub repo. ☆95 · Updated 11 months ago
- Benchmark various LLM structured-output frameworks (Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc.) on task… ☆183 · Updated last year
- A small library of LLM judges. ☆311 · Updated 5 months ago
- A Lightweight Library for AI Observability. ☆253 · Updated 10 months ago
- CUGA is an open-source generalist agent for the enterprise, supporting complex task execution on web and APIs, OpenAPI/MCP integrations, … ☆623 · Updated 3 weeks ago
- An open-source compliance-centered evaluation framework for Generative AI models. ☆178 · Updated 2 weeks ago