The papers are organized according to our survey: Evaluating Large Language Models: A Comprehensive Survey.
☆799 · May 8, 2024 · Updated last year
Alternatives and similar repositories for Awesome-LLMs-Evaluation-Papers
Users interested in Awesome-LLMs-Evaluation-Papers are comparing it to the libraries listed below.
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models" ☆1,595 · Jun 3, 2025 · Updated 10 months ago
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs and models, mainly for evaluation of LLMs… ☆631 · Nov 24, 2025 · Updated 4 months ago
- A framework for few-shot evaluation of language models ☆12,138 · Apr 8, 2026 · Updated last week
- [ACL 2023] Reasoning with Language Model Prompting: A Survey ☆1,005 · May 21, 2025 · Updated 10 months ago
- A unified evaluation framework for large language models ☆2,798 · Feb 20, 2026 · Updated last month
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast ☆1,966 · Aug 9, 2025 · Updated 8 months ago
- [ICML 2024] TrustLLM: Trustworthiness in Large Language Models ☆623 · Jun 24, 2025 · Updated 9 months ago
- The official GitHub page for the survey paper "A Survey of Large Language Models" ☆12,139 · Mar 11, 2025 · Updated last year
- The paper list of the 86-page SCIS cover paper "The Rise and Potential of Large Language Model Based Agents: A Survey" by Zhiheng Xi et a… ☆8,105 · Sep 12, 2025 · Updated 7 months ago
- ☆2,895 · Feb 20, 2025 · Updated last year
- ✨✨Latest papers and benchmarks in reasoning with foundation models ☆655 · Jun 16, 2025 · Updated 10 months ago
- A comprehensive benchmark to evaluate LLMs as agents (ICLR'24) ☆3,334 · Feb 8, 2026 · Updated 2 months ago
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆425 · Oct 25, 2025 · Updated 5 months ago
- [ACL 2024] A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future ☆497 · Jan 16, 2025 · Updated last year
- The repository for the survey paper "Survey on Large Language Models Factuality: Knowledge, Retrieval and Domain-Specificity" ☆341 · Mar 28, 2026 · Updated 3 weeks ago
- AI Alignment: A Comprehensive Survey ☆137 · Nov 2, 2023 · Updated 2 years ago
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,770 · Aug 4, 2024 · Updated last year
- Robust recipes to align language models with human and AI preferences ☆5,558 · Apr 8, 2026 · Updated last week
- From Chain-of-Thought prompting to OpenAI o1 and DeepSeek-R1 🍓 ☆3,591 · May 7, 2025 · Updated 11 months ago
- OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMa2, Qwen, GLM, Claude, … ☆6,866 · Updated this week
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large … ☆1,080 · Sep 27, 2025 · Updated 6 months ago
- Papers and datasets on instruction tuning and following ✨✨✨ ☆509 · Apr 4, 2024 · Updated 2 years ago
- Holistic Evaluation of Language Models (HELM) is an open-source Python framework created by the Center for Research on Foundation Models … ☆2,741 · Apr 10, 2026 · Updated last week
- RewardBench: the first evaluation tool for reward models ☆707 · Feb 16, 2026 · Updated 2 months ago
- A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 🍓 and reasoning techniques ☆6,906 · Dec 17, 2025 · Updated 4 months ago
- Aligning Large Language Models with Human: A Survey ☆742 · Sep 11, 2023 · Updated 2 years ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,837 · Jun 17, 2025 · Updated 10 months ago
- Chinese generation evaluation ☆13 · Aug 14, 2023 · Updated 2 years ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,374 · Apr 7, 2026 · Updated last week
- Benchmarking LLMs with challenging tasks from real users ☆248 · Nov 3, 2024 · Updated last year
- ☆2,128 · May 8, 2024 · Updated last year
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy… ☆9,340 · Updated this week
- FlagEval is an evaluation toolkit for large AI foundation models ☆337 · Apr 24, 2025 · Updated 11 months ago
- Summaries of existing representative LLM text datasets ☆1,453 · Mar 11, 2026 · Updated last month
- Awesome LLM for NLG evaluation papers ☆26 · Jan 23, 2024 · Updated 2 years ago
- Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks ☆18,227 · Updated this week
- Awesome things about LLM-powered agents: papers / repos / blogs / … ☆2,220 · Apr 30, 2025 · Updated 11 months ago
- Latest advances on multimodal large language models ☆17,624 · Apr 9, 2026 · Updated last week
- A curated list of practical guide resources for LLMs (LLMs tree, examples, papers) ☆10,157 · Apr 8, 2026 · Updated last week