CLUEbenchmark / SuperCLUE-Agent
SuperCLUE-Agent: a benchmark for evaluating the core capabilities of AI agents on Chinese-native tasks
☆94 · Updated 2 years ago
Alternatives and similar repositories for SuperCLUE-Agent
Users interested in SuperCLUE-Agent are comparing it to the libraries listed below.
- ☆164 · Updated 2 years ago
- ☆147 · Updated last year
- A multi-dimensional Chinese alignment benchmark for large language models (ACL 2024) ☆421 · Updated 3 months ago
- ☆98 · Updated last year
- ☆334 · Updated last year
- ☆184 · Updated 2 years ago
- ☆237 · Updated last year
- A Chinese-native benchmark for evaluating retrieval-augmented generation ☆124 · Updated last year
- ☆129 · Updated 2 years ago
- How to train an LLM tokenizer ☆153 · Updated 2 years ago
- Chinese large language model evaluation, round 2 ☆71 · Updated 2 years ago
- ☆99 · Updated 2 years ago
- A lightweight local website for displaying the performance of different chat models ☆87 · Updated 2 years ago
- 1st-place solution for the Conversational Multi-Doc QA Workshop & International Challenge @ WSDM'24 - Xiaohongshu Inc. ☆161 · Updated 6 months ago
- Measuring Massive Multitask Chinese Understanding ☆89 · Updated last year
- An instruction-tuning tool for large language models (supports FlashAttention) ☆177 · Updated 2 years ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆136 · Updated last year
- An analysis of the Chinese cognitive abilities of language models ☆235 · Updated 2 years ago
- Imitate OpenAI with local models ☆90 · Updated last year
- A large-scale language model for the scientific domain, trained on the RedPajama arXiv split ☆137 · Updated last year
- ☆106 · Updated 2 years ago
- An open-domain multi-turn evaluation benchmark for general-purpose Chinese foundation models ☆81 · Updated 2 years ago
- FlagEval is an evaluation toolkit for large AI foundation models ☆339 · Updated 9 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models ☆256 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆48 · Updated last year
- ☆313 · Updated 2 years ago
- ☆173 · Updated 2 years ago
- Chinese large language model evaluation, round 1 ☆110 · Updated 2 years ago
- ☆283 · Updated 8 months ago
- InsTag: a tool for data analysis in LLM supervised fine-tuning ☆284 · Updated 2 years ago