onejune2018 / Awesome-LLM-Eval
Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs, and models, mainly for evaluation of LLMs, aiming to explore the technical boundaries of generative AI.
☆475 Updated 3 months ago
Alternatives and similar repositories for Awesome-LLM-Eval:
Users interested in Awesome-LLM-Eval are comparing it to the repositories listed below
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆357 Updated 6 months ago
- LongBench v2 and LongBench (ACL 2024) ☆777 Updated last month
- Collection of training-data management explorations for large language models ☆307 Updated 6 months ago
- ☆904 Updated 8 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models. ☆232 Updated 3 months ago
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆336 Updated 5 months ago
- FlagEval is an evaluation toolkit for large AI foundation models. ☆319 Updated 7 months ago
- CMMLU: Measuring massive multitask language understanding in Chinese ☆723 Updated 2 months ago
- ☆318 Updated 7 months ago
- A streamlined and customizable framework for efficient large-model evaluation and performance benchmarking ☆413 Updated this week
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆533 Updated 2 months ago
- An introduction to retrieval-augmented large language models ☆274 Updated last year
- Awesome LLM benchmarks for evaluating LLMs across text, code, image, audio, video, and more. ☆131 Updated last year
- A curated collection of open-source SFT datasets, continuously updated ☆481 Updated last year
- ☆889 Updated 6 months ago
- Research on value evaluation and alignment for Chinese large language models ☆491 Updated last year
- Reading list on hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large …" ☆977 Updated 2 months ago
- A repository collecting the literature on long-context large language models, including methodologies and evaluation benchmarks ☆255 Updated 6 months ago
- An awesome collection of LLM surveys ☆326 Updated 5 months ago
- ☆300 Updated 8 months ago
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆276 Updated last year
- Related works and background techniques for OpenAI o1 ☆208 Updated last month
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a… ☆349 Updated 10 months ago
- GAOKAO-Bench is an evaluation framework that uses GAOKAO questions as a dataset to evaluate large language models. ☆592 Updated last month
- Real-time updated, fine-grained reading list on LLM synthetic data. 🔥 ☆209 Updated 3 weeks ago
- Collaborative Training of Large Language Models in an Efficient Way ☆411 Updated 5 months ago
- ☆163 Updated last year
- ☆727 Updated 8 months ago
- OpenLLMWiki: docs of OpenLLMAI. Survey, reproduction, and domain/task adaptation of open-source ChatGPT alternatives/implementations. PiXi… ☆256 Updated 2 months ago
- ☆307 Updated 7 months ago