flageval-baai / FlagEval
FlagEval is an evaluation toolkit for large AI foundation models.
☆337 · Updated last month
Alternatives and similar repositories for FlagEval
Users interested in FlagEval are comparing it to the libraries listed below.
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆392 · Updated 10 months ago
- Analysis of the Chinese cognitive abilities of language models ☆236 · Updated last year
- ☆224 · Updated last year
- Research on value evaluation and alignment for Chinese large language models ☆522 · Updated last year
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models. ☆442 · Updated 8 months ago
- ☆308 · Updated 2 years ago
- GAOKAO-Bench is an evaluation framework that uses GAOKAO questions as a dataset to evaluate large language models. ☆659 · Updated 5 months ago
- A curated collection of open-source SFT datasets, continuously updated ☆522 · Updated 2 years ago
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆372 · Updated 9 months ago
- ☆338 · Updated last year
- ☆322 · Updated 11 months ago
- A line-by-line walkthrough of the Baichuan2 code, suitable for beginners ☆214 · Updated last year
- CMMLU: Measuring massive multitask language understanding in Chinese ☆765 · Updated 6 months ago
- Firefly Chinese LLaMA-2 large language model, supporting incremental pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models ☆411 · Updated last year
- SuperCLUE-Agent: a benchmark for evaluating core agent capabilities on native Chinese tasks ☆89 · Updated last year
- ☆319 · Updated 11 months ago
- Chinese large language model base generated through incremental pre-training on Chinese datasets ☆236 · Updated 2 years ago
- ☆169 · Updated last year
- OpenLLMWiki: Docs of OpenLLMAI. Survey, reproduction and domain/task adaptation of open source chatgpt alternatives/implementations. PiXi… ☆261 · Updated 6 months ago
- ChatGLM-6B instruction learning | instruction data | Instruct ☆654 · Updated 2 years ago
- ☆281 · Updated last year
- BiLLa: A Bilingual LLaMA with Enhanced Reasoning Ability ☆418 · Updated 2 years ago
- pCLUE: a multi-task prompt-learning dataset with 1,000,000+ examples ☆495 · Updated 2 years ago
- Multi-GPU ChatGLM with DeepSpeed and… ☆409 · Updated 11 months ago
- A Chinese Open-Domain Dialogue System ☆321 · Updated last year
- ☆128 · Updated 2 years ago
- ☆459 · Updated last year
- Chinese instruction fine-tuning dataset for Alpaca ☆392 · Updated 2 years ago
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a… ☆349 · Updated last year
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmark, demos, leaderboard, papers, docs and models, mainly for Evaluation on LLMs… ☆541 · Updated 7 months ago