PAIR-code / llm-comparator
LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side by side, developed by the People + AI Research (PAIR) team at Google.
☆518 · Updated 11 months ago
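The tool renders paired model responses for side-by-side inspection, typically loaded from a JSON file. As a minimal sketch, the Python below assembles such a file for two models; the field names (`input_text`, `output_text_a`, `output_text_b`, `score`) and the sign convention of the score are illustrative assumptions, not a verified copy of the tool's input schema.

```python
# Minimal sketch: build a side-by-side comparison file for two models.
# NOTE: field names and the score convention are assumptions for
# illustration, not LLM Comparator's verified input schema.
import json

def build_comparison(prompts, responses_a, responses_b, scores,
                     model_a="model-a", model_b="model-b"):
    """Pack paired responses into one JSON-serializable document.

    Assumed convention: score > 0 favors model A, score < 0 favors model B.
    """
    return {
        "models": [{"name": model_a}, {"name": model_b}],
        "examples": [
            {"input_text": p, "output_text_a": a,
             "output_text_b": b, "score": s}
            for p, a, b, s in zip(prompts, responses_a, responses_b, scores)
        ],
    }

doc = build_comparison(
    ["Summarize Hamlet in one sentence."],
    ["A Danish prince avenges his father's murder at great cost."],
    ["Hamlet hesitates to act, and nearly everyone dies."],
    [0.5],
)
with open("comparison.json", "w") as f:
    json.dump(doc, f, indent=2)
```

Each record pairs one prompt with both models' outputs plus a preference score, which is the unit a side-by-side evaluation view renders.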
Alternatives and similar repositories for llm-comparator
Users interested in llm-comparator often compare it to the libraries listed below.
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆176 · Updated 2 weeks ago
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆426 · Updated last month
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆1,038 · Updated 9 months ago
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆277 · Updated last year
- Tutorial for building an LLM router ☆244 · Updated last year
- Automated Evaluation of RAG Systems ☆687 · Updated 10 months ago
- A small library of LLM judges ☆319 · Updated 6 months ago
- awesome synthetic (text) datasets ☆321 · Updated 3 weeks ago
- TapeAgents is a framework that facilitates all stages of the LLM Agent development lifecycle ☆302 · Updated last month
- ☆237 · Updated 2 months ago
- A Lightweight Library for AI Observability ☆255 · Updated 11 months ago
- Automatically evaluate your LLMs in Google Colab ☆685 · Updated last year
- 👩🏻‍🍳 A collection of example notebooks using Haystack ☆523 · Updated last week
- In-Context Learning for eXtreme Multi-Label Classification (XMC) using only a handful of examples. ☆446 · Updated last year
- An open-source tool for LLM prompt optimization. ☆759 · Updated last week
- Sample notebooks and prompts for LLM evaluation ☆159 · Updated 3 months ago
- ☆250 · Updated last year
- Benchmarking long-form factuality in large language models. Original code for our paper "Long-form factuality in large language models". ☆665 · Updated 2 weeks ago
- Ranking LLMs on agentic tasks ☆210 · Updated 2 months ago
- wandbot is a technical support bot for Weights & Biases' AI developer tools that can run in Discord, Slack, ChatGPT, and Zendesk ☆309 · Updated 3 months ago
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆319 · Updated last year
- A tool for evaluating LLMs ☆428 · Updated last year
- MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents [EMNLP 2024] ☆194 · Updated 5 months ago
- Task-based Agentic Framework using StrictJSON as the core ☆460 · Updated 2 months ago
- ☆147 · Updated last year
- Benchmark various LLM Structured Output frameworks: Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc. on task… ☆184 · Updated last year
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆598 · Updated last year
- An awesome list of curated DSPy resources. ☆511 · Updated last month
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate. ☆116 · Updated 6 months ago
- [ACL'25] Official code for LlamaDuo: LLMOps Pipeline for Seamless Migration from Service LLMs to Small-Scale Local LLMs ☆314 · Updated 6 months ago