aiverify-foundation / LLM-Evals-Catalogue
This repository stems from our paper, “Cataloguing LLM Evaluations”, and serves as a living, collaborative catalogue of LLM evaluation frameworks, benchmarks and papers.
☆19 · Updated 2 years ago
Alternatives and similar repositories for LLM-Evals-Catalogue
Users interested in LLM-Evals-Catalogue are comparing it to the repositories listed below.
- Sample notebooks and prompts for LLM evaluation ☆159 · Updated 2 months ago
- ☆76 · Updated last year
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆164 · Updated 2 weeks ago
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate. ☆116 · Updated 5 months ago
- EvalAssist is an open-source project that simplifies using large language models as evaluators (LLM-as-a-Judge) of the output of other la… ☆92 · Updated last month
- LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments. ☆248 · Updated 3 weeks ago
- ☆147 · Updated last year
- Research repository on interfacing LLMs with Weaviate APIs. Inspired by the Berkeley Gorilla LLM. ☆140 · Updated 4 months ago
- An index of all of our weekly concepts + code events for aspiring AI Engineers and Business Leaders! ☆95 · Updated 2 weeks ago
- ☆20 · Updated last year
- ARAGOG (Advanced RAG Output Grading): exploring and comparing various Retrieval-Augmented Generation (RAG) techniques on AI research paper… ☆113 · Updated last year
- Benchmark various LLM structured-output frameworks (Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc.) on task… ☆183 · Updated last year
- What, Why and How of LLMs. ☆75 · Updated 3 months ago
- a1facts - the precision layer for AI agents. ☆64 · Updated 3 months ago
- LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side-by-side, developed by the PAIR t… ☆507 · Updated 10 months ago
- ☆39 · Updated last year
- Fiddler Auditor is a tool to evaluate language models. ☆188 · Updated last year
- Low-latency, high-accuracy custom query routers for humans and agents. Built by Prithivi Da. ☆119 · Updated 9 months ago
- The Rule-based Retrieval package is a Python package that enables you to create and manage Retrieval-Augmented Generation (RAG) applicati… ☆246 · Updated last year
- This software contains an agent based on LangGraph & LangChain for solving general requests in the WhatsApp channel of this medical clini… ☆212 · Updated last year
- A curated list of awesome synthetic data tools (open source and commercial). ☆231 · Updated 2 years ago
- Mistral + Haystack: build RAG pipelines that rock 🤘 ☆106 · Updated last year
- A framework for fine-tuning retrieval-augmented generation (RAG) systems. ☆137 · Updated 2 weeks ago
- Tutorial for building an LLM router. ☆241 · Updated last year
- This repository implements the Chain-of-Verification paper by Meta AI. ☆188 · Updated 2 years ago
- A lightweight library for AI observability. ☆253 · Updated 10 months ago
- A reimplementation of LangGraph's customer support example in Rasa's CALM paradigm, with a quantitative evaluation of the two approaches. ☆80 · Updated 9 months ago
- ☆66 · Updated last year
- Moonshot - a simple and modular tool to evaluate and red-team any LLM application. ☆295 · Updated this week
- ☆89 · Updated 8 months ago