flageval-baai / FlagEval
FlagEval is an evaluation toolkit for large AI foundation models.
☆339 · Updated 8 months ago
Alternatives and similar repositories for FlagEval
Users interested in FlagEval are comparing it to the libraries listed below.
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆421 · Updated 2 months ago
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models. ☆445 · Updated last year
- ☆333 · Updated last year
- Research on evaluating and aligning the values of Chinese large language models ☆551 · Updated 2 years ago
- GAOKAO-Bench is an evaluation framework that utilizes GAOKAO questions as a dataset to evaluate large language models. ☆698 · Updated last year
- ☆231 · Updated 2 years ago
- CMMLU: Measuring massive multitask language understanding in Chinese ☆799 · Updated last year
- A Chinese large language model base built through incremental pre-training on Chinese datasets ☆239 · Updated 2 years ago
- SuperCLUE-Agent: a benchmark for evaluating the core capabilities of agents on native Chinese tasks ☆94 · Updated 2 years ago
- ☆235 · Updated last year
- An analysis of the Chinese cognitive capabilities of language models ☆236 · Updated 2 years ago
- ☆183 · Updated 2 years ago
- ☆360 · Updated last year
- OpenLLMWiki: Docs of OpenLLMAI. Survey, reproduction, and domain/task adaptation of open-source ChatGPT alternatives/implementations. PiXi… ☆262 · Updated last year
- The official code for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆266 · Updated last year
- ☆314 · Updated 2 years ago
- ☆459 · Updated last year
- Firefly Chinese LLaMA-2 large model; supports incremental pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models ☆414 · Updated 2 years ago
- A Chinese Open-Domain Dialogue System ☆327 · Updated 2 years ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models. ☆253 · Updated last year
- A curated collection of open-source SFT datasets, continuously updated ☆566 · Updated 2 years ago
- ☆282 · Updated last year
- SOTA open-source math LLM ☆333 · Updated 2 years ago
- ☆129 · Updated 2 years ago
- ☆181 · Updated last week
- Easy and efficient fine-tuning of LLMs (supports LLaMA, LLaMA2, LLaMA3, Qwen, Baichuan, GLM, Falcon); efficient quantized training and deployment of large models. ☆619 · Updated 11 months ago
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs, and models, mainly for evaluation of LLMs… ☆597 · Updated last month
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆302 · Updated last year
- Huozi (活字), a general-purpose large language model ☆390 · Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆284 · Updated 2 years ago