IBM / eval-assist
EvalAssist is an open-source project that simplifies using large language models as evaluators (LLM-as-a-Judge) of other large language models' output, supporting users in iteratively refining evaluation criteria through a web-based user experience.
☆92 · Updated last week
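As a quick illustration of the LLM-as-a-Judge pattern that EvalAssist supports: a judge model is given an evaluation criterion and a candidate answer, and returns a score that is then parsed programmatically. The sketch below is illustrative only; the prompt template and helper names are hypothetical, not EvalAssist's actual API.

```python
# Minimal LLM-as-a-Judge sketch. The template and functions here are
# hypothetical stand-ins; a real setup would send the prompt to a judge LLM.

JUDGE_PROMPT = """You are an evaluator. Criterion: {criterion}
Candidate answer: {answer}
Reply with a score from 1 (poor) to 5 (excellent) and a short rationale."""


def build_judge_prompt(criterion: str, answer: str) -> str:
    """Fill the evaluation template that would be sent to the judge model."""
    return JUDGE_PROMPT.format(criterion=criterion, answer=answer)


def parse_score(judge_reply: str) -> int:
    """Extract the first integer score from the judge model's free-text reply."""
    for token in judge_reply.split():
        if token.strip(".,:").isdigit():
            return int(token.strip(".,:"))
    raise ValueError("no score found in judge reply")


# Build a prompt for one criterion/answer pair, then parse a (simulated) reply.
prompt = build_judge_prompt(
    "Answer is concise and factual",
    "Paris is the capital of France.",
)
score = parse_score("Score: 4. The answer is correct and concise.")
```

In practice the criterion string is exactly what a tool like EvalAssist helps you iterate on: small wording changes in the rubric can shift the judge model's scores considerably.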
Alternatives and similar repositories for eval-assist
Users interested in eval-assist are comparing it to the libraries listed below.
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use…☆154 · Updated this week
- LangFair is a Python library for conducting use-case level LLM bias and fairness assessments☆242 · Updated last week
- Granite Snack Cookbook -- easily consumable recipes (Python notebooks) that showcase the capabilities of the Granite models☆317 · Updated this week
- A framework for fine-tuning retrieval-augmented generation (RAG) systems.☆136 · Updated this week
- 🦄 Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the world's largest catalog of tools and data …☆211 · Updated last week
- Official Implementation of "Affordable AI Assistants with Knowledge Graph of Thoughts"☆195 · Updated last month
- ARAGOG: Advanced RAG Output Grading. Exploring and comparing various Retrieval-Augmented Generation (RAG) techniques on AI research paper…☆114 · Updated last year
- Low-latency, high-accuracy custom query routers for humans and agents. Built by Prithivi Da☆117 · Updated 8 months ago
- Benchmark various LLM Structured Output frameworks: Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc. on task…☆179 · Updated last year
- 🧠🔗 From idea to production in just a few lines: Graph-Based Programmable Neuro-Symbolic LM Framework - a production-first LM framework bu…☆359 · Updated 2 weeks ago
- ☆146 · Updated last year
- SynthGenAI - Package for Generating Synthetic Datasets using LLMs.☆50 · Updated this week
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents using an Elo ranker☆123 · Updated 3 weeks ago
- All code examples from the blog posts☆21 · Updated 10 months ago
- Simple UI for debugging correlations of text embeddings☆301 · Updated 6 months ago
- A small library of LLM judges☆302 · Updated 3 months ago
- This project bootstraps and scaffolds projects for specific semantic search and RAG applications, along with regular boilerplate c…☆92 · Updated 11 months ago
- Synthetic Text Dataset Generation for LLM projects☆47 · Updated last week
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without a custom rubric, reference answer, absolute…☆51 · Updated last year
- A practical RAG app where you can download and chat with a GitHub repo☆92 · Updated 9 months ago
- The central repo for all RAG evaluation reference material and partner workshops☆76 · Updated 7 months ago
- Semantic Chunker is a lightweight Python package for semantically-aware chunking and clustering of text.☆280 · Updated 7 months ago
- This repository stems from our paper, "Cataloguing LLM Evaluations", and serves as a living, collaborative catalogue of LLM evaluation fr…☆18 · Updated 2 years ago
- Sample notebooks and prompts for LLM evaluation☆156 · Updated 3 weeks ago
- ☆20 · Updated last year
- 📝 Automatically annotate papers using LLMs☆361 · Updated 7 months ago
- 🧪 Experimental features for Haystack☆57 · Updated this week
- Generalist and Lightweight Model for Text Classification☆165 · Updated 5 months ago
- Unified Schema-Based Information Extraction☆223 · Updated 3 weeks ago
- The Granite Guardian models are designed to detect risks in prompts and responses.☆121 · Updated last month