aiverify-foundation / LLM-Evals-Catalogue
This repository stems from our paper, “Cataloguing LLM Evaluations”, and serves as a living, collaborative catalogue of LLM evaluation frameworks, benchmarks and papers.
☆14 · Updated last year
Alternatives and similar repositories for LLM-Evals-Catalogue:
Users interested in LLM-Evals-Catalogue are comparing it to the libraries listed below.
- Sample notebooks and prompts for LLM evaluation ☆119 · Updated 2 months ago
- An index of all of our weekly concepts + code events for aspiring AI Engineers and Business Leaders! ☆58 · Updated last week
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆87 · Updated this week
- ARAGOG (Advanced RAG Output Grading): exploring and comparing various Retrieval-Augmented Generation (RAG) techniques on AI research paper… ☆101 · Updated 9 months ago
- This is the reproduction repository for my 🤗 Hugging Face blog post on synthetic data ☆63 · Updated 11 months ago
- Notebooks and articles related to LLMs ☆25 · Updated last year
- A notebook-based tutorial series on building an LLM from scratch ☆24 · Updated 4 months ago
- ☆138 · Updated 6 months ago
- LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments ☆145 · Updated this week
- Build Enterprise RAG (Retrieval-Augmented Generation) pipelines to tackle various Generative AI use cases with LLMs by simply plugging co… ☆109 · Updated 6 months ago
- Building a chatbot powered by a RAG pipeline to read, summarize and quote the most relevant papers related to the user query. ☆165 · Updated 9 months ago
- A collection of fine-tuning notebooks! ☆26 · Updated last year
- ☆18 · Updated 9 months ago
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate. ☆105 · Updated 4 months ago
- ☆71 · Updated 2 weeks ago
- Fiddler Auditor is a tool to evaluate language models. ☆174 · Updated 10 months ago
- ☆76 · Updated 7 months ago
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents by using an Elo ranker ☆106 · Updated last month
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without a custom rubric, reference answer, absolute… ☆48 · Updated 6 months ago
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆98 · Updated 4 months ago
- Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning ☆44 · Updated last year
- Low-latency, high-accuracy custom query routers for humans and agents. Built by Prithivi Da ☆91 · Updated last month
- Granite Snack Cookbook -- easily consumable recipes (Python notebooks) that showcase the capabilities of the Granite models ☆104 · Updated this week
- ☆85 · Updated 5 months ago
- Model, Code & Data for the EMNLP'23 paper "Making Large Language Models Better Data Creators" ☆123 · Updated last year
- This package, developed as part of our research detailed in the Chroma Technical Report, provides tools for text chunking and evaluation.… ☆213 · Updated 4 months ago
- TalkToModel gives anyone the powers of XAI through natural language conversations 💬! ☆117 · Updated last year
- Mistral + Haystack: build RAG pipelines that rock 🤘 ☆100 · Updated 11 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆74 · Updated 4 months ago
- Toolkit for attaching, training, saving and loading new heads for transformer models ☆260 · Updated this week