onejune2018 / Awesome-LLM-Eval
Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs, and models, mainly for evaluation of foundation LLMs, aiming to probe the technical boundaries of generative AI.
☆497 · Updated 4 months ago
Alternatives and similar repositories for Awesome-LLM-Eval:
Users interested in Awesome-LLM-Eval are comparing it to the libraries listed below
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024)☆367 · Updated 7 months ago
- LongBench v2 and LongBench (ACL 2024)☆805 · Updated 2 months ago
- A curated collection of open-source SFT datasets, continuously updated☆500 · Updated last year
- An Awesome Collection for LLM Survey☆331 · Updated 6 months ago
- ☆894 · Updated 7 months ago
- An intro to retrieval-augmented large language models☆286 · Updated last year
- ☆318 · Updated 8 months ago
- ☆305 · Updated 10 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open-source framework for evaluating foundation models.☆234 · Updated 4 months ago
- Real-time updated, fine-grained reading list on LLM synthetic data.🔥☆238 · Updated last month
- FlagEval is an evaluation toolkit for AI large foundation models.☆326 · Updated 8 months ago
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo…☆346 · Updated 6 months ago
- Unify Efficient Fine-tuning of RAG Retrieval, including Embedding, ColBERT, ReRanker.☆753 · Updated this week
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l…☆277 · Updated last year
- Research on evaluating and aligning the values of Chinese large language models☆498 · Updated last year
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024]☆540 · Updated 3 months ago
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large …☆994 · Updated 3 months ago
- ☆311 · Updated 8 months ago
- Collection of training data management explorations for large language models☆314 · Updated 7 months ago
- This is the repository for the Tool Learning survey.☆326 · Updated 2 weeks ago
- ☆930 · Updated last month
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a…☆350 · Updated 11 months ago
- Awesome LLM Benchmarks to evaluate LLMs across text, code, image, audio, video, and more.☆134 · Updated last year
- CRUD-RAG: A Comprehensive Chinese Benchmark for Retrieval-Augmented Generation of Large Language Models☆285 · Updated 4 months ago
- Chinese LLM fine-tuning (LLM-SFT) with the math instruction dataset MWP-Instruct; supports models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B) and methods (LoRA, QLoRA, DeepSpeed, UI, TensorboardX); supports (微…☆187 · Updated 10 months ago
- ☆497 · Updated 2 months ago
- ☆116 · Updated last year
- A paper & resource list of large language models, including courses, papers, demos, and figures☆198 · Updated last year
- [ACL 2024] An Easy-to-use Instruction Processing Framework for LLMs.☆398 · Updated 2 months ago
- CMMLU: Measuring massive multitask language understanding in Chinese☆748 · Updated 3 months ago