PAIR-code / llm-comparator
LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side-by-side, developed by the PAIR team.
☆494 Updated 8 months ago
Alternatives and similar repositories for llm-comparator
Users interested in llm-comparator are comparing it to the repositories listed below.
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆147 Updated 3 weeks ago
- Automatically evaluate your LLMs in Google Colab ☆664 Updated last year
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆1,007 Updated 6 months ago
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆275 Updated last year
- Awesome synthetic (text) datasets ☆305 Updated 4 months ago
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆296 Updated last year
- A Lightweight Library for AI Observability ☆251 Updated 8 months ago
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆411 Updated last month
- Automated Evaluation of RAG Systems ☆667 Updated 7 months ago
- Tutorial for building an LLM router ☆233 Updated last year
- A small library of LLM judges ☆299 Updated 3 months ago
- Build datasets using natural language ☆543 Updated last month
- End-to-end Generative Optimization for AI Agents ☆670 Updated 2 months ago
- [ACL'25] Official Code for LlamaDuo: LLMOps Pipeline for Seamless Migration from Service LLMs to Small-Scale Local LLMs ☆314 Updated 3 months ago
- Training Model Behavior in Agentic Systems ☆651 Updated this week
- Ranking LLMs on agentic tasks ☆198 Updated last month
- Sample notebooks and prompts for LLM evaluation ☆153 Updated last week
- An open-source tool for LLM prompt optimization. ☆698 Updated this week
- TapeAgents is a framework that facilitates all stages of the LLM agent development lifecycle ☆299 Updated this week
- 📝 Automatically annotate papers using LLMs ☆359 Updated 6 months ago
- Benchmarking long-form factuality in large language models. Original code for our paper "Long-form factuality in large language models". ☆649 Updated 3 months ago
- In-Context Learning for eXtreme Multi-Label Classification (XMC) using only a handful of examples. ☆443 Updated last year
- A tool for evaluating LLMs ☆425 Updated last year
- MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents [EMNLP 2024] ☆187 Updated 2 months ago
- wandbot is a technical support bot for Weights & Biases' AI developer tools that can run in Discord, Slack, ChatGPT, and Zendesk ☆310 Updated 2 weeks ago
- Building a chatbot powered by a RAG pipeline to read, summarize, and quote the most relevant papers related to the user's query. ☆168 Updated last year
- Benchmark various LLM Structured Output frameworks: Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc. on task… ☆179 Updated last year
- ☆231 Updated 4 months ago
- An Awesome list of curated DSPy resources. ☆464 Updated last month
- Repository to demonstrate Chain of Table reasoning with multiple tables powered by LangGraph ☆146 Updated last year