IBM / eval-assist
EvalAssist is an open-source project that simplifies using large language models as evaluators (LLM-as-a-Judge) of the output of other large language models. It supports users in iteratively refining evaluation criteria through a web-based user experience.
☆66 Updated this week
Alternatives and similar repositories for eval-assist
Users who are interested in eval-assist are comparing it to the libraries listed below.
- Chunk your text more accurately using gpt-4o-mini ☆44 Updated last year
- Synthetic Text Dataset Generation for LLM projects ☆35 Updated last week
- Generalist and Lightweight Model for Text Classification ☆153 Updated 2 months ago
- Low-latency, high-accuracy, custom query routers for humans and agents. Built by Prithivi Da ☆113 Updated 4 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without a custom rubric, reference answer, absolute… ☆49 Updated last year
- A framework for fine-tuning retrieval-augmented generation (RAG) systems. ☆125 Updated last week
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents using an Elo ranker ☆114 Updated last month
- 🦄 Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the world's largest catalog of tools and data… ☆206 Updated this week
- ☆80 Updated last year
- A curated list of materials on AI guardrails ☆40 Updated 2 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆79 Updated 10 months ago
- A method for steering LLMs to better follow instructions ☆49 Updated last week
- Official Repo for CRMArena and CRMArena-Pro ☆104 Updated last month
- Lightweight wrapper for the independent implementation of SPLADE++ models for search & retrieval pipelines. Models and Library created by… ☆32 Updated 11 months ago
- Source code of "How to Correctly do Semantic Backpropagation on Language-based Agentic Systems" 🤖 ☆73 Updated 8 months ago
- Versatile framework designed to streamline the integration of your models, as well as those sourced from Hugging Face, into complex progr… ☆32 Updated 4 months ago
- LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments ☆224 Updated 2 weeks ago
- ☆145 Updated last year
- Python library to use Pleias-RAG models ☆61 Updated 3 months ago
- GLiNER model in a FastAPI microservice. ☆45 Updated 8 months ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆63 Updated 2 months ago
- Plug-and-play document processing pipelines with zero-shot models. ☆86 Updated 2 weeks ago
- All code examples in the blog posts ☆21 Updated 6 months ago
- A RAG that can scale 🧑🏻‍💻 ☆11 Updated last year
- Leveraging Base Language Models for Few-Shot Synthetic Data Generation ☆33 Updated 2 weeks ago
- Source code for the collaborative reasoner research project at Meta FAIR. ☆100 Updated 3 months ago
- ARAGOG: Advanced RAG Output Grading. Exploring and comparing various retrieval-augmented generation (RAG) techniques on AI research paper… ☆107 Updated last year
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆132 Updated last week
- The Granite Guardian models are designed to detect risks in prompts and responses. ☆98 Updated last week
- Writing Blog Posts with Generative Feedback Loops! ☆50 Updated last year