PAIR-code / llm-comparator
LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side-by-side, developed by the PAIR team.
☆474 · Updated 6 months ago
Alternatives and similar repositories for llm-comparator
Users interested in llm-comparator are comparing it to the libraries listed below.
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆136 · Updated last week
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆981 · Updated 4 months ago
- Automated Evaluation of RAG Systems ☆647 · Updated 5 months ago
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆276 · Updated last year
- A small library of LLM judges ☆271 · Updated 3 weeks ago
- TapeAgents is a framework that facilitates all stages of the LLM agent development lifecycle ☆294 · Updated last week
- awesome synthetic (text) datasets ☆295 · Updated last month
- [ACL'25] Official Code for LlamaDuo: LLMOps Pipeline for Seamless Migration from Service LLMs to Small-Scale Local LLMs ☆314 · Updated last month
- Tutorial for building an LLM router ☆224 · Updated last year
- A tool for evaluating LLMs ☆424 · Updated last year
- Automatically evaluate your LLMs in Google Colab ☆656 · Updated last year
- This package, developed as part of our research detailed in the Chroma Technical Report, provides tools for text chunking and evaluation.… ☆395 · Updated 5 months ago
- Benchmarking long-form factuality in large language models. Original code for our paper "Long-form factuality in large language models". ☆639 · Updated 3 weeks ago
- A Lightweight Library for AI Observability ☆250 · Updated 6 months ago
- An open-source tool for general prompt optimization. ☆611 · Updated last week
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆274 · Updated 10 months ago
- Framework for enhancing LLMs for RAG tasks using fine-tuning. ☆747 · Updated 3 months ago
- In-Context Learning for eXtreme Multi-Label Classification (XMC) using only a handful of examples. ☆435 · Updated last year
- Benchmark various LLM structured-output frameworks: Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc. on task… ☆174 · Updated 11 months ago
- A library for prompt engineering and optimization (SAMMO = Structure-aware Multi-Objective Metaprompt Optimization) ☆725 · Updated 2 months ago
- Build datasets using natural language ☆518 · Updated 3 months ago
- An agent benchmark with tasks in a simulated software company. ☆534 · Updated this week
- Easily embed, cluster, and semantically label text datasets ☆566 · Updated last year
- wandbot is a technical support bot for Weights & Biases' AI developer tools that can run in Discord, Slack, ChatGPT, and Zendesk ☆306 · Updated last week
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆387 · Updated this week
- DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. 🤖💤 ☆1,053 · Updated 6 months ago
- Repository to demonstrate Chain of Table reasoning with multiple tables, powered by LangGraph ☆147 · Updated last year
- Code for Husky, an open-source language agent that solves complex, multi-step reasoning tasks. Husky v1 addresses numerical, tabular and … ☆345 · Updated last year
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate. ☆114 · Updated last month
- ☆230 · Updated last month