IBM / eval-assist
EvalAssist is an open-source project that simplifies using large language models as evaluators (LLM-as-a-Judge) of the output of other large language models by supporting users in iteratively refining evaluation criteria in a web-based user experience.
☆80 · Updated this week
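For orientation, the snippet below sketches the basic LLM-as-a-Judge loop that EvalAssist's criteria-refinement workflow builds on: define a criterion, ask a judge model for a verdict on a response, inspect the verdicts, and refine the criterion text. This is a minimal, generic sketch and not EvalAssist's actual API; `Criterion`, `judge`, and `call_judge_llm` are hypothetical names standing in for whatever LLM client you already use.

```python
# Generic LLM-as-a-Judge sketch (NOT EvalAssist's API).
# `call_judge_llm` is a hypothetical stand-in for a real LLM client call.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Criterion:
    name: str
    description: str  # refined iteratively as you inspect judge verdicts
    options: List[str] = field(default_factory=lambda: ["Yes", "No"])


def judge(response: str, criterion: Criterion,
          call_judge_llm: Callable[[str], str]) -> str:
    """Ask the judge model to rate one response against one criterion."""
    prompt = (
        "You are evaluating a model response.\n"
        f"Criterion: {criterion.name} - {criterion.description}\n"
        f"Allowed verdicts: {', '.join(criterion.options)}\n\n"
        f"Response to evaluate:\n{response}\n\n"
        "Answer with exactly one of the allowed verdicts."
    )
    return call_judge_llm(prompt).strip()


if __name__ == "__main__":
    faithfulness = Criterion(
        name="Faithfulness",
        description="The answer only contains claims supported by the provided context.",
    )
    fake_judge = lambda prompt: "Yes"  # replace with a real LLM call
    print(judge("The capital of France is Paris.", faithfulness, fake_judge))
```

In practice, the refinement step amounts to editing `description` (and the verdict `options`) whenever the judge's verdicts disagree with your own reading of the responses, then re-running the evaluation.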
Alternatives and similar repositories for eval-assist
Users interested in eval-assist are comparing it to the libraries listed below.
- A framework for fine-tuning retrieval-augmented generation (RAG) systems. ☆130 · Updated this week
- LangFair is a Python library for conducting use-case level LLM bias and fairness assessments ☆232 · Updated 2 weeks ago
- Low latency, High Accuracy, Custom Query routers for Humans and Agents. Built by Prithivi Da ☆116 · Updated 5 months ago
- Granite Snack Cookbook -- easily consumable recipes (Python notebooks) that showcase the capabilities of the Granite models ☆263 · Updated this week
- SynthGenAI - Package for Generating Synthetic Datasets using LLMs. ☆47 · Updated this week
- 🦄 Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the world's largest catalog of tools and data … ☆209 · Updated this week
- Synthetic Text Dataset Generation for LLM projects ☆41 · Updated last week
- ☆146 · Updated last year
- ☆95 · Updated 6 months ago
- 🧠🔗 From idea to production in just a few lines: Graph-Based Programmable Neuro-Symbolic LM Framework - a production-first LM framework bu… ☆321 · Updated this week
- This repository stems from our paper, “Cataloguing LLM Evaluations”, and serves as a living, collaborative catalogue of LLM evaluation fr… ☆18 · Updated last year
- All code examples in the blog posts ☆21 · Updated 8 months ago
- ☆206 · Updated 3 months ago
- Official repo for the paper “PHUDGE: Phi-3 as Scalable Judge”. Evaluate your LLMs with or without a custom rubric, reference answer, absolute… ☆49 · Updated last year
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆140 · Updated this week
- ☆73 · Updated 11 months ago
- A blueprint for AI development, focusing on applied examples of RAG, information extraction, analysis and fine-tuning in the age of LLMs … ☆59 · Updated 7 months ago
- Efficiently find the best-suited language model (LM) for your NLP task ☆128 · Updated 2 months ago
- Benchmark various LLM structured-output frameworks: Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc. on task… ☆179 · Updated last year
- The central repo for all RAG evaluation reference material and the partner workshop ☆76 · Updated 5 months ago
- An open-source compliance-centered evaluation framework for Generative AI models ☆164 · Updated this week
- Simple UI for debugging correlations of text embeddings ☆291 · Updated 3 months ago
- Named Entity Recognition using Claude Citations ☆79 · Updated 3 months ago
- 🧪 Experimental features for Haystack ☆51 · Updated this week
- Research repository on interfacing LLMs with Weaviate APIs. Inspired by the Berkeley Gorilla LLM. ☆135 · Updated last month
- A practical RAG app where you can download and chat with a GitHub repo ☆89 · Updated 7 months ago
- ☆80 · Updated last year
- A small library of LLM judges ☆285 · Updated last month
- A method for steering LLMs to better follow instructions ☆53 · Updated last month
- Chunk your text more accurately using gpt-4o-mini ☆44 · Updated last year