microsoft / promptbench
A unified evaluation framework for large language models
☆2,731 · Updated last week
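Since promptbench is positioned as a unified evaluation framework, a minimal usage sketch is included below. It follows the general pattern of the project's quickstart, but the exact names (DatasetLoader, LLMModel, InputProcess, OutputProcess, Eval) and the label-projection step are assumptions that may differ across promptbench versions; treat it as illustrative rather than authoritative.

```python
# Minimal sketch of an evaluation loop in the style of promptbench's quickstart.
# Class/method names (DatasetLoader, LLMModel, InputProcess, OutputProcess, Eval)
# are assumed from the documented API and may vary between versions.
import promptbench as pb

# Load a benchmark dataset and a model wrapper.
dataset = pb.DatasetLoader.load_dataset("sst2")
model = pb.LLMModel(model="google/flan-t5-large", max_new_tokens=10, temperature=0.0001)

prompt = (
    "Classify the sentence as positive or negative. "
    "Answer with exactly one word, 'positive' or 'negative': {content}"
)

preds, labels = [], []
for data in dataset:
    # Fill the prompt template with the current example.
    input_text = pb.InputProcess.basic_format(prompt, data)
    raw_pred = model(input_text)
    # Project the raw text output onto class ids (0 = negative, 1 = positive).
    pred = pb.OutputProcess.cls(
        raw_pred, lambda text: {"negative": 0, "positive": 1}.get(text, -1)
    )
    preds.append(pred)
    labels.append(data["label"])

print("accuracy:", pb.Eval.compute_cls_accuracy(preds, labels))
```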
Alternatives and similar repositories for promptbench
Users interested in promptbench are comparing it to the libraries listed below.
- A framework for prompt tuning using Intent-based Prompt Calibration ☆2,805 · Updated 6 months ago
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models". ☆1,574 · Updated 4 months ago
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆2,864 · Updated last week
- This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through self-reflection by Akari Asai,… ☆2,213 · Updated last year
- prompt2model - Generate Deployable Models from Natural Language Instructions ☆2,008 · Updated 9 months ago
- Official Implementation of "Graph of Thoughts: Solving Elaborate Problems with Large Language Models" ☆2,505 · Updated 10 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,877 · Updated 2 months ago
- ☆2,091 · Updated last year
- General technology for enabling AI capabilities w/ LLMs and MLLMs ☆4,151 · Updated 3 months ago
- ☆1,316 · Updated last year
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,748 · Updated last year
- A curated list of Large Language Model (LLM) Interpretability resources. ☆1,428 · Updated 4 months ago
- [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling ☆1,771 · Updated last year
- AgentTuning: Enabling Generalized Agent Abilities for LLMs ☆1,463 · Updated last year
- A comprehensive guide to building RAG-based LLM applications for production. ☆1,835 · Updated last year
- Official implementation for "Automatic Chain of Thought Prompting in Large Language Models" (stay tuned & more will be updated) ☆1,956 · Updated last year
- [EMNLP'23, ACL'24] To speed up LLMs' inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,499 · Updated 7 months ago
- Robust recipes to align language models with human and AI preferences ☆5,398 · Updated last month
- [ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs. ☆2,591 · Updated last week
- The papers are organized according to our survey: Evaluating Large Language Models: A Comprehensive Survey. ☆785 · Updated last year
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models … ☆2,519 · Updated this week
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆2,056 · Updated last year
- [ICLR'24 spotlight] An open platform for training, serving, and evaluating large language models for tool learning. ☆5,286 · Updated 5 months ago
- FacTool: Factuality Detection in Generative AI ☆891 · Updated last year
- ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting wit… ☆1,098 · Updated last year
- ☆2,853 · Updated 8 months ago
- MTEB: Massive Text Embedding Benchmark ☆2,898 · Updated this week
- An Open-source Toolkit for LLM Development ☆2,789 · Updated 9 months ago
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆1,005 · Updated 5 months ago
- ☆1,042 · Updated 2 years ago