philschmid / evaluate-llms
Includes examples on how to evaluate LLMs
☆23 · Updated last year
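As a rough illustration of what "evaluating LLMs" looks like in practice, the sketch below scores candidate answers against references with an LLM-as-a-judge loop. This is a minimal sketch only, assuming the `openai` Python client and an `OPENAI_API_KEY` in the environment; the judge prompt, the `gpt-4o-mini` model choice, and the 1-5 scale are illustrative assumptions, not code taken from the evaluate-llms repo.

```python
# Minimal LLM-as-a-judge sketch (illustrative only; not code from evaluate-llms).
# Assumes the `openai` client is installed and OPENAI_API_KEY is set; the judge
# prompt and the 1-5 scale are arbitrary choices for this example.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are a strict grader. Given a question, a reference answer,
and a candidate answer, reply with a single integer from 1 (wrong) to 5 (perfect).

Question: {question}
Reference answer: {reference}
Candidate answer: {candidate}
Score:"""


def judge(question: str, reference: str, candidate: str) -> int:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed judge model; any chat model works here
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(
                question=question, reference=reference, candidate=candidate
            ),
        }],
        temperature=0,
    )
    # A real harness would parse more defensively; the judge may add extra text.
    return int(response.choices[0].message.content.strip())


# Example: average the judge's scores over a tiny evaluation set.
eval_set = [
    {"question": "What is 2 + 2?", "reference": "4", "candidate": "4"},
]
scores = [judge(**row) for row in eval_set]
print(sum(scores) / len(scores))
```

Averaging the judge's scores over an evaluation set gives one comparable number per model, which is the pattern most of the evaluation repositories listed below build on.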
Alternatives and similar repositories for evaluate-llms
Users interested in evaluate-llms are comparing it to the libraries listed below.
- Sample notebooks and prompts for LLM evaluation ☆156 · Updated last month
- ☆80 · Updated last year
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆51 · Updated last year
- ☆148 · Updated last year
- Fine-tune an LLM to perform batch inference and online serving. ☆115 · Updated 6 months ago
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆162 · Updated 2 weeks ago
- A semantic research engine to get relevant papers based on a user query. Application frontend with Chainlit Copilot. Observability with L… ☆82 · Updated last year
- Low-latency, high-accuracy custom query routers for humans and agents. Built by Prithivi Da ☆119 · Updated 8 months ago
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate. ☆116 · Updated 4 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated last year
- A framework for fine-tuning retrieval-augmented generation (RAG) systems. ☆137 · Updated this week
- An index of all of our weekly concepts + code events for aspiring AI Engineers and Business Leaders! ☆94 · Updated this week
- Building a chatbot powered by a RAG pipeline to read, summarize, and quote the most relevant papers related to the user query. ☆166 · Updated last year
- Data extraction with LLMs on CPU ☆112 · Updated last year
- Optimized Large Language Models for Financial Applications – Efficient, Scalable, and Domain-Specific AI for Finance. ☆50 · Updated 5 months ago
- Mistral + Haystack: build RAG pipelines that rock 🤘 ☆106 · Updated last year
- Recipes and resources for building, deploying, and fine-tuning generative AI with Fireworks. ☆130 · Updated this week
- ☆103 · Updated 8 months ago
- Examples of using Evidently to evaluate, test and monitor ML models. ☆43 · Updated last week
- Using open source LLMs to build synthetic datasets for direct preference optimization (a minimal sketch of this recipe follows the list). ☆71 · Updated last year
- ARAGOG (Advanced RAG Output Grading). Exploring and comparing various Retrieval-Augmented Generation (RAG) techniques on AI research paper… ☆114 · Updated last year
- ☆125 · Updated 9 months ago
- Experimental code for StructuredRAG: JSON Response Formatting with Large Language Models ☆115 · Updated 8 months ago
- The central repository for RAG evaluation reference material and partner workshops ☆77 · Updated 7 months ago
- Code for Medium blog posts ☆101 · Updated last month
- Scripts, notebooks, and articles about data science in general. ☆53 · Updated 2 years ago
- ☆15 · Updated 2 years ago
- This repository contains a pipeline for fine-tuning Large Language Models (LLMs) for Text-to-SQL conversion using General Reward Proximal… ☆39 · Updated 8 months ago
- A collection of hands-on notebooks for LLM practitioners ☆51 · Updated 11 months ago
- ☆89 · Updated 2 years ago
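For the synthetic direct preference optimization datasets mentioned in the list, a minimal sketch of the usual recipe is shown below, under stated assumptions: sample two completions per prompt with the `transformers` text-generation pipeline, pick a chosen/rejected pair, and store `(prompt, chosen, rejected)` triples. The model name and the `score` heuristic are placeholders; a real pipeline would use a reward model or an LLM judge instead, and this is not code from any repository listed above.

```python
# Sketch of building DPO preference pairs with an open model (illustrative only).
# Assumes the `transformers` library is installed; the model name, sampling
# settings, and the scoring heuristic are placeholder assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-135M-Instruct")


def score(prompt: str, completion: str) -> float:
    # Hypothetical quality heuristic; swap in a reward model or LLM judge here.
    return float(len(completion.split()))


def make_pair(prompt: str) -> dict:
    # Sample two candidate completions for the same prompt.
    outputs = generator(
        prompt,
        num_return_sequences=2,
        do_sample=True,
        max_new_tokens=128,
        return_full_text=False,
    )
    a, b = outputs[0]["generated_text"], outputs[1]["generated_text"]
    chosen, rejected = (a, b) if score(prompt, a) >= score(prompt, b) else (b, a)
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}


dataset = [make_pair(p) for p in ["Explain direct preference optimization in one sentence."]]
print(dataset[0]["chosen"][:200])
```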