onejune2018 / Awesome-LLM-Eval
Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs, and models, mainly for the evaluation of foundation LLMs, aiming to explore the technical boundaries of generative AI.
☆568 Updated 2 weeks ago
Alternatives and similar repositories for Awesome-LLM-Eval
Users interested in Awesome-LLM-Eval are comparing it to the repositories listed below.
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆411 Updated last year
- An introduction to retrieval-augmented large language models ☆301 Updated 2 years ago
- ☆908 Updated last year
- A live reading list for LLM data synthesis (updated to July 2025) ☆374 Updated 3 weeks ago
- An Awesome Collection for LLM Survey ☆378 Updated 3 months ago
- Collection of training data management explorations for large language models ☆334 Updated last year
- Awesome LLM Benchmarks to evaluate LLMs across text, code, image, audio, video, and more ☆148 Updated last year
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models ☆247 Updated 10 months ago
- A curated collection of open-source SFT datasets, updated continuously ☆540 Updated 2 years ago
- Research on evaluating and aligning the values of Chinese large language models ☆535 Updated 2 years ago
- ☆922 Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆393 Updated 2 months ago
- CRUD-RAG: A Comprehensive Chinese Benchmark for Retrieval-Augmented Generation of Large Language Models ☆330 Updated 3 months ago
- ☆325 Updated last year
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆283 Updated 2 years ago
- FlagEval is an evaluation toolkit for AI large foundation models ☆337 Updated 4 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆566 Updated 9 months ago
- ☆962 Updated 7 months ago
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a… ☆347 Updated last year
- LongBench v2 and LongBench (ACL '25 & '24) ☆963 Updated 8 months ago
- Unify efficient fine-tuning of RAG retrieval, including Embedding, ColBERT, and ReRanker ☆1,029 Updated 2 months ago
- GAOKAO-Bench is an evaluation framework that uses GAOKAO (Chinese college entrance exam) questions as a dataset to evaluate large language models ☆679 Updated 8 months ago
- ☆322 Updated last year
- ☆547 Updated 8 months ago
- Chinese LLM fine-tuning (LLM-SFT) with the math instruction dataset MWP-Instruct; supports models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B), methods (LoRA, QLoRA, DeepSpeed, UI, TensorboardX), and (微… ☆210 Updated last year
- This is the repository for the Tool Learning survey ☆431 Updated last month
- ☆175 Updated last year
- OpenLLMWiki: docs of OpenLLMAI. Survey, reproduction, and domain/task adaptation of open-source ChatGPT alternatives/implementations. PiXi… ☆261 Updated 9 months ago
- Official repository for the SIGIR 2024 demo paper "An Integrated Data Processing Framework for Pretraining Foundation Models" ☆82 Updated last year
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆266 Updated last year