aiverify-foundation / LLM-Evals-Catalogue
This repository stems from our paper, “Cataloguing LLM Evaluations”, and serves as a living, collaborative catalogue of LLM evaluation frameworks, benchmarks and papers.
☆18 · Updated 2 years ago
Alternatives and similar repositories for LLM-Evals-Catalogue
Users that are interested in LLM-Evals-Catalogue are comparing it to the libraries listed below
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆162 · Updated 2 weeks ago
- Sample notebooks and prompts for LLM evaluation ☆156 · Updated last month
- ARAGOG (Advanced RAG Output Grading): exploring and comparing various Retrieval-Augmented Generation (RAG) techniques on AI research paper… ☆114 · Updated last year
- EvalAssist is an open-source project that simplifies using large language models as evaluators (LLM-as-a-Judge) of the output of other la… ☆92 · Updated 3 weeks ago
- ☆20 · Updated last year
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate. ☆116 · Updated 4 months ago
- ☆148 · Updated last year
- LangFair is a Python library for conducting use-case level LLM bias and fairness assessments ☆243 · Updated this week
- What, Why and How of LLMs. ☆75 · Updated 2 months ago
- ☆125 · Updated 9 months ago
- ☆38 · Updated last year
- A framework for fine-tuning retrieval-augmented generation (RAG) systems. ☆137 · Updated this week
- Lean implementation of various multi-agent LLM methods, including Iteration of Thought (IoT) ☆124 · Updated 10 months ago
- Repository to demonstrate Chain of Table reasoning with multiple tables powered by LangGraph ☆148 · Updated last year
- Benchmark various LLM Structured Output frameworks: Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc. on task… ☆180 · Updated last year
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents by using an Elo ranker ☆125 · Updated last month
- LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side-by-side, developed by the PAIR t… ☆503 · Updated 10 months ago
- 🦜💯 Flex those feathers! ☆255 · Updated last year
- Build datasets using natural language ☆552 · Updated 3 months ago
- Ranking LLMs on agentic tasks ☆204 · Updated last month
- ☆79 · Updated 2 months ago
- ☆74 · Updated last year
- DSPy in action with open-source LLMs. ☆102 · Updated last year
- Research repository on interfacing LLMs with Weaviate APIs. Inspired by the Berkeley Gorilla LLM. ☆139 · Updated 3 months ago
- A lightweight library for AI observability ☆252 · Updated 10 months ago
- RAGArch is a Streamlit-based application that empowers users to experiment with various components and parameters of Retrieval-Augmented… ☆87 · Updated last year
- Testing and evaluation framework for voice agents ☆160 · Updated 6 months ago
- ☆36 · Updated 7 months ago
- Official Implementation of "Multi-Head RAG: Solving Multi-Aspect Problems with LLMs" ☆235 · Updated 2 months ago
- Automated knowledge graph creation SDK ☆122 · Updated last year