IBM / eval-assist
EvalAssist is an open-source project that simplifies using large language models as evaluators (LLM-as-a-Judge) of other models' output by helping users iteratively refine evaluation criteria in a web-based user experience.
☆94 · Updated 2 months ago
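EvalAssist itself is web-based, but the pattern it streamlines is easy to sketch in code. The snippet below is a minimal, generic illustration of LLM-as-a-Judge using the `openai` Python client; the criterion text, model name, and prompt wording are illustrative assumptions, not EvalAssist's API.

```python
# Minimal generic sketch of the LLM-as-a-Judge pattern (not EvalAssist's API).
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# the judge model and criterion are placeholders.
from openai import OpenAI

client = OpenAI()

def judge(criterion: str, output: str) -> str:
    """Ask a judge model whether `output` satisfies `criterion`."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable judge model
        messages=[
            {"role": "system",
             "content": "You are an evaluator. Answer 'yes' or 'no', "
                        "then give a one-sentence reason."},
            {"role": "user",
             "content": f"Criterion: {criterion}\n\nOutput to evaluate:\n{output}"},
        ],
    )
    return response.choices[0].message.content

print(judge("The answer is concise and factually grounded.",
            "Paris is the capital of France."))
```

In practice the hard part is not the API call but wording the criterion so the judge applies it consistently; iterating on that wording is the workflow EvalAssist supports.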
Alternatives and similar repositories for eval-assist
Users interested in eval-assist are comparing it to the libraries listed below.
- LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments ☆252 · Updated last month
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆176 · Updated 2 weeks ago
- A small library of LLM judges ☆321 · Updated 6 months ago
- Granite Snack Cookbook -- easily consumable recipes (Python notebooks) that showcase the capabilities of the Granite models ☆343 · Updated last week
- A framework for fine-tuning retrieval-augmented generation (RAG) systems. ☆139 · Updated 3 weeks ago
- Low-latency, high-accuracy custom query routers for humans and agents. Built by Prithivi Da ☆119 · Updated 10 months ago
- The Agent Lifecycle Toolkit (ALTK) is a library of components to help agent builders improve their agent with minimal integration effort … ☆109 · Updated last week
- SynthGenAI - A package for generating synthetic datasets using LLMs. ☆54 · Updated 2 months ago
- Benchmark various LLM structured-output frameworks: Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc., on task… (see the Instructor sketch after this list) ☆184 · Updated last year
- 🧪 Experimental features for Haystack ☆59 · Updated 2 weeks ago
- ☆147 · Updated last year
- ARAGOG - Advanced RAG Output Grading. Exploring and comparing various Retrieval-Augmented Generation (RAG) techniques on AI research paper… ☆113 · Updated last year
- Collection of resources for RL and Reasoning ☆27 · Updated last year
- Research repository on interfacing LLMs with Weaviate APIs. Inspired by the Berkeley Gorilla LLM. ☆140 · Updated 5 months ago
- ☆22 · Updated last year
- ☆107 · Updated 10 months ago
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆426 · Updated last month
- Simple UI for debugging correlations of text embeddings ☆305 · Updated 8 months ago
- ☆20 · Updated last year
- A method for steering LLMs to better follow instructions ☆78 · Updated 6 months ago
- All code examples in the blog posts ☆21 · Updated last year
- 🦄 Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the world's largest catalog of tools and data … (see the Unitxt sketch after this list) ☆212 · Updated 2 weeks ago
- A reimplementation of LangGraph's customer support example in Rasa's CALM paradigm and a quantitative evaluation of the two approaches ☆81 · Updated 10 months ago
- Efficiently find the best-suited language model (LM) for your NLP task ☆134 · Updated 6 months ago
- The central repository for all RAG evaluation reference material and the partner workshop ☆80 · Updated 9 months ago
- Chunk your text more accurately using gpt-4o-mini ☆44 · Updated last year
- A blueprint for AI development, focusing on applied examples of RAG, information extraction, analysis and fine-tuning in the age of LLMs … ☆61 · Updated last year
- A Lightweight Library for AI Observability ☆255 · Updated 11 months ago
- The Granite Guardian models are designed to detect risks in prompts and responses. ☆130 · Updated 4 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without a custom rubric, reference answer, absolute… ☆51 · Updated last year
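For the structured-output benchmark entry above, here is a minimal sketch of one of the named frameworks, Instructor, which patches an OpenAI client so responses are parsed and validated against a Pydantic schema. The model name and schema are illustrative assumptions, not details taken from the benchmark itself.

```python
# Hedged sketch of structured output via Instructor; schema and model
# are illustrative placeholders, not from the benchmark repo.
import instructor
from openai import OpenAI
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

# Patch the OpenAI client so completions return validated Pydantic objects.
client = instructor.from_openai(OpenAI())

person = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=Person,  # Instructor validates (and retries) against this schema
    messages=[{"role": "user", "content": "Extract: Ada Lovelace, 36 years old."}],
)
print(person)  # Person(name='Ada Lovelace', age=36)
```

The other listed frameworks (Mirascope, Outlines, Marvin, etc.) target the same goal with different mechanisms, which is what the benchmark compares.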
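Likewise, the Unitxt entry above follows a load-and-evaluate flow that can be outlined briefly. The card and template identifiers below are assumptions (real names live in Unitxt's catalog), so treat this as a sketch of the shape of the API rather than a verified recipe.

```python
# Hedged sketch of a typical Unitxt flow; the card/template names are assumed
# placeholders. Consult the Unitxt catalog for real identifiers.
from unitxt import load_dataset, evaluate

dataset = load_dataset(
    card="cards.wnli",  # assumed catalog card
    template="templates.classification.multi_class.relation.default",  # assumed
    split="test",
)
predictions = ["entailment" for _ in dataset]  # stand-in model outputs
results = evaluate(predictions=predictions, data=dataset)
print(results)  # aggregate and per-instance scores
```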