The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".
☆1,589 · Jun 3, 2025 · Updated 8 months ago
Alternatives and similar repositories for LLM-eval-survey
Users that are interested in LLM-eval-survey are comparing it to the libraries listed below
- The papers are organized according to our survey: "Evaluating Large Language Models: A Comprehensive Survey". ☆793 · May 8, 2024 · Updated last year
- ☆922 · May 22, 2024 · Updated last year
- The official GitHub page for the survey paper "A Survey of Large Language Models". ☆12,094 · Mar 11, 2025 · Updated 11 months ago
- A unified evaluation framework for large language models. ☆2,776 · Feb 20, 2026 · Updated last week
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting. ☆2,766 · Aug 4, 2024 · Updated last year
- A framework for few-shot evaluation of language models. ☆11,478 · Feb 15, 2026 · Updated 2 weeks ago
- Aligning Large Language Models with Human: A Survey. ☆741 · Sep 11, 2023 · Updated 2 years ago
- The paper list of the 86-page SCIS cover paper "The Rise and Potential of Large Language Model Based Agents: A Survey" by Zhiheng Xi et a… ☆8,067 · Sep 12, 2025 · Updated 5 months ago
- OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMa2, Qwen, GLM, Claude, … ☆6,688 · Updated this week
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24). ☆3,187 · Feb 8, 2026 · Updated 3 weeks ago
- ☆2,882 · Feb 20, 2025 · Updated last year
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large … ☆1,076 · Sep 27, 2025 · Updated 5 months ago
- ☆772 · Jun 13, 2024 · Updated last year
- Aligning pretrained language models with instruction data generated by themselves. ☆4,580 · Mar 27, 2023 · Updated 2 years ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,953 · Aug 9, 2025 · Updated 6 months ago
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs and models, mainly for Evaluation on LLMs… ☆612 · Nov 24, 2025 · Updated 3 months ago
- Instruction Tuning with GPT-4. ☆4,342 · Jun 11, 2023 · Updated 2 years ago
- Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023]. ☆1,815 · Jul 27, 2025 · Updated 7 months ago
- Latest Advances on Multimodal Large Language Models. ☆17,355 · Feb 23, 2026 · Updated last week
- Holistic Evaluation of Language Models (HELM) is an open-source Python framework created by the Center for Research on Foundation Models … ☆2,693 · Updated this week
- Secrets of RLHF in Large Language Models Part I: PPO. ☆1,416 · Mar 3, 2024 · Updated 2 years ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,678 · Updated this week
- [ICLR'24 spotlight] An open platform for training, serving, and evaluating large language models for tool learning. ☆5,536 · May 21, 2025 · Updated 9 months ago
- A trend starts from "Chain of Thought Prompting Elicits Reasoning in Large Language Models". ☆2,100 · Oct 5, 2023 · Updated 2 years ago
- Robust recipes to align language models with human and AI preferences. ☆5,506 · Sep 8, 2025 · Updated 5 months ago
- A curated list of practical guide resources for LLMs (LLMs Tree, Examples, Papers). ☆10,160 · May 31, 2024 · Updated last year
- Train transformer language models with reinforcement learning. ☆17,460 · Updated this week
- ⚡LLM Zoo is a project that provides data, models, and evaluation benchmarks for large language models.⚡ ☆2,949 · Nov 26, 2023 · Updated 2 years ago
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,414 · Jun 2, 2025 · Updated 9 months ago
- [ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs. ☆2,723 · Feb 9, 2026 · Updated 3 weeks ago
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL). ☆9,037 · Feb 21, 2026 · Updated last week
- Resource, Evaluation and Detection Papers for ChatGPT. ☆456 · Mar 21, 2024 · Updated last year
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback. ☆1,585 · Nov 24, 2025 · Updated 3 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters. ☆5,936 · Mar 14, 2024 · Updated last year
- Research on the evaluation and alignment of values for Chinese large language models. ☆554 · Jul 20, 2023 · Updated 2 years ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF). ☆4,741 · Jan 8, 2024 · Updated 2 years ago
- Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks. ☆17,929 · Nov 3, 2025 · Updated 3 months ago
- FacTool: Factuality Detection in Generative AI. ☆913 · Aug 19, 2024 · Updated last year
- A curated list of reinforcement learning with human feedback resources (continually updated). ☆4,306 · Dec 9, 2025 · Updated 2 months ago