microsoft / promptbench
A unified evaluation framework for large language models
☆2,532 · Updated last week
Alternatives and similar repositories for promptbench:
Users who are interested in promptbench are comparing it to the libraries listed below.
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆4,879 · Updated 3 weeks ago
- This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through self-reflection by Akari Asai,… ☆1,976 · Updated 8 months ago
- Official Implementation of "Graph of Thoughts: Solving Elaborate Problems with Large Language Models" ☆2,282 · Updated 2 months ago
- A curated list of Large Language Model (LLM) Interpretability resources. ☆1,239 · Updated last month
- [ICLR'24 spotlight] An open platform for training, serving, and evaluating large language models for tool learning. ☆4,889 · Updated 3 months ago
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆2,374 · Updated 3 weeks ago
- A framework for prompt tuning using Intent-based Prompt Calibration ☆2,365 · Updated 2 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,654 · Updated last month
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models". ☆1,485 · Updated 8 months ago
- A comprehensive guide to building RAG-based LLM applications for production. ☆1,762 · Updated 6 months ago
- Official implementation for "Automatic Chain of Thought Prompting in Large Language Models" (stay tuned & more will be updated) ☆1,720 · Updated 11 months ago
- Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ☆3,266 · Updated last week
- [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling ☆1,609 · Updated 7 months ago
- [ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models ☆2,266 · Updated last year
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,668 · Updated 6 months ago
- Supercharge Your LLM Application Evaluations 🚀 ☆8,214 · Updated this week
- MTEB: Massive Text Embedding Benchmark ☆2,193 · Updated this week
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆1,712 · Updated 6 months ago
- Tools for merging pretrained large language models. ☆5,260 · Updated last week
- [IJCAI 2024] Generate different roles for GPTs to form a collaborative entity for complex tasks. ☆1,283 · Updated 10 months ago
- All things prompt engineering ☆5,534 · Updated 8 months ago
- AgentTuning: Enabling Generalized Agent Abilities for LLMs ☆1,386 · Updated last year
- General technology for enabling AI capabilities w/ LLMs and MLLMs ☆3,845 · Updated last month
- [COLM 2024] OpenAgents: An Open Platform for Language Agents in the Wild ☆4,137 · Updated 3 months ago
- Robust recipes to align language models with human and AI preferences ☆5,001 · Updated 2 months ago
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,305 · Updated last year