flageval-baai / FlagEval
FlagEval is an evaluation toolkit for large AI foundation models.
☆339 · Updated 4 months ago
Alternatives and similar repositories for FlagEval
Users that are interested in FlagEval are comparing it to the libraries listed below
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆411 · Updated last year
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models. ☆445 · Updated 11 months ago
- ☆325 · Updated last year
- Evaluation and alignment research on the values of Chinese large language models ☆535 · Updated 2 years ago
- GAOKAO-Bench is an evaluation framework that utilizes GAOKAO questions as a dataset to evaluate large language models. ☆678 · Updated 8 months ago
- A Chinese large language model base built through incremental pre-training on Chinese datasets ☆238 · Updated 2 years ago
- OpenLLMWiki: Docs of OpenLLMAI. Survey, reproduction and domain/task adaptation of open-source ChatGPT alternatives/implementations. PiXi… ☆261 · Updated 9 months ago
- ☆309 · Updated 2 years ago
- ☆353 · Updated last year
- Analysis of the Chinese cognitive capabilities of language models ☆237 · Updated 2 years ago
- CMMLU: Measuring massive multitask language understanding in Chinese ☆781 · Updated 9 months ago
- A Chinese Open-Domain Dialogue System ☆323 · Updated 2 years ago
- ☆460 · Updated last year
- ☆230 · Updated 2 years ago
- SuperCLUE-Agent: a benchmark for evaluating the core capabilities of agents on native Chinese tasks ☆90 · Updated last year
- Easy and efficient fine-tuning of LLMs (supports LLaMA, LLaMA2, LLaMA3, Qwen, Baichuan, GLM, Falcon). Efficient quantized training and deployment of large models. ☆612 · Updated 7 months ago
- ☆176 · Updated last year
- A curated collection of open-source SFT datasets, updated continuously ☆540 · Updated 2 years ago
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆606 · Updated 2 weeks ago
- ☆281 · Updated last year
- ☆231 · Updated last year
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a… ☆347 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆389 · Updated 2 months ago
- The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆266 · Updated last year
- Collaborative Training of Large Language Models in an Efficient Way ☆416 · Updated last year
- ☆128 · Updated 2 years ago
- Naive Bayes-based Context Extension ☆326 · Updated 9 months ago
- ☆173 · Updated this week
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆247 · Updated 10 months ago
- Firefly Chinese LLaMA-2 large model, with support for incremental pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models ☆413 · Updated last year