onejune2018 / Awesome-LLM-Eval
Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs, and models, mainly for the evaluation of LLMs, aiming to explore the technical boundaries of generative AI.
☆578 · Updated 2 months ago
Alternatives and similar repositories for Awesome-LLM-Eval
Users interested in Awesome-LLM-Eval are comparing it to the repositories listed below:
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆418 · Updated 3 weeks ago
- An intro to retrieval-augmented large language models ☆304 · Updated 2 years ago
- ☆910 · Updated last year
- A live reading list for LLM data synthesis (updated to July 2025) ☆408 · Updated 2 months ago
- An awesome collection of LLM surveys ☆378 · Updated 5 months ago
- ☆922 · Updated last year
- A curated collection of open-source SFT datasets, continuously updated ☆557 · Updated 2 years ago
- LongBench v2 and LongBench (ACL '25 & '24) ☆1,010 · Updated 10 months ago
- Awesome LLM benchmarks to evaluate LLMs across text, code, image, audio, video, and more ☆153 · Updated last year
- Collection of training data management explorations for large language models ☆335 · Updated last year
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆285 · Updated 2 years ago
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a… ☆346 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆400 · Updated 4 months ago
- Evaluation and alignment research on the values of Chinese large language models ☆544 · Updated 2 years ago
- FlagEval is an evaluation toolkit for large AI foundation models ☆339 · Updated 6 months ago
- ☆330 · Updated last year
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models ☆253 · Updated last year
- ☆967 · Updated 9 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆575 · Updated 11 months ago
- CRUD-RAG: A Comprehensive Chinese Benchmark for Retrieval-Augmented Generation of Large Language Models ☆343 · Updated 5 months ago
- This is the repository for the Tool Learning survey. ☆455 · Updated 3 months ago
- Unified efficient fine-tuning of RAG retrieval, including embedding models, ColBERT, and rerankers ☆1,054 · Updated 4 months ago
- GAOKAO-Bench is an evaluation framework that uses GAOKAO (Chinese college entrance exam) questions as a dataset to evaluate large language models ☆692 · Updated 10 months ago
- Welcome to the "LLM-travel" repository! Explore the mysteries of large language models (LLMs) 🚀 — dedicated to deeply understanding, discussing, and implementing techniques, principles, and applications related to LLMs ☆351 · Updated last year
- ☆346 · Updated last year
- A reading list on hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large … ☆1,060 · Updated last month
- ☆321 · Updated last year
- ☆767 · Updated last year
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆297 · Updated last year
- ☆548 · Updated 10 months ago