onejune2018 / Awesome-LLM-Eval
Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs, and models, mainly for the evaluation of LLMs, aiming to explore the technical frontiers of generative AI.
☆552 · Updated 8 months ago
Alternatives and similar repositories for Awesome-LLM-Eval
Users interested in Awesome-LLM-Eval are comparing it to the libraries listed below.
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆398 · Updated 11 months ago
- An intro to retrieval-augmented large language models ☆297 · Updated last year
- FlagEval is an evaluation toolkit for large AI foundation models. ☆338 · Updated 2 months ago
- Collection of training-data management explorations for large language models ☆327 · Updated 11 months ago
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a… ☆349 · Updated last year
- ☆324 · Updated last year
- ☆905 · Updated 11 months ago
- A live reading list for LLM synthetic data. ☆308 · Updated last week
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models. ☆244 · Updated 8 months ago
- LongBench v2 and LongBench (ACL 2025 & 2024) ☆926 · Updated 6 months ago
- Awesome LLM Benchmarks to evaluate LLMs across text, code, image, audio, video and more. ☆143 · Updated last year
- Research on value evaluation and alignment for Chinese large language models ☆525 · Updated last year
- ☆918 · Updated last year
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆281 · Updated last year
- An Awesome Collection for LLM Survey ☆371 · Updated last month
- A curated collection of open-source SFT datasets, updated continuously ☆526 · Updated 2 years ago
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆379 · Updated 3 weeks ago
- ☆946 · Updated 5 months ago
- OpenLLMWiki: docs of OpenLLMAI. Survey, reproduction, and domain/task adaptation of open-source ChatGPT alternatives/implementations. PiXi… ☆260 · Updated 7 months ago
- GAOKAO-Bench is an evaluation framework that uses GAOKAO (Chinese college entrance exam) questions as a dataset to evaluate large language models. ☆669 · Updated 6 months ago
- CRUD-RAG: A Comprehensive Chinese Benchmark for Retrieval-Augmented Generation of Large Language Models ☆321 · Updated last month
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆558 · Updated 7 months ago
- Welcome to the "LLM-travel" repository! Explore the inner workings of large language models (LLMs) 🚀, dedicated to deeply understanding, discussing, and implementing techniques, principles, and applications related to large models. ☆329 · Updated 11 months ago
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆280 · Updated last year
- The repository for the Tool Learning survey. ☆404 · Updated last month
- ☆543 · Updated 6 months ago
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large … ☆1,026 · Updated 7 months ago
- ☆324 · Updated last year
- ☆172 · Updated last year
- An analysis of the Chinese cognitive abilities of language models ☆236 · Updated last year