CLUEbenchmark / SuperCLUE-Agent
SuperCLUE-Agent: A benchmark for evaluating the core capabilities of agents on native Chinese tasks
☆93 · Updated last year
Alternatives and similar repositories for SuperCLUE-Agent
Users interested in SuperCLUE-Agent are comparing it to the repositories listed below.
- ☆163 · Updated 2 years ago
- ☆147 · Updated last year
- ☆330 · Updated last year
- ☆233 · Updated last year
- ☆179 · Updated last year
- ☆98 · Updated last year
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆417 · Updated last week
- ☆128 · Updated 2 years ago
- How to train an LLM tokenizer ☆153 · Updated 2 years ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆45 · Updated last year
- Chinese large language model evaluation, phase 2 ☆71 · Updated 2 years ago
- Text deduplication ☆76 · Updated last year
- Instruction tuning toolkit for large language models (supports FlashAttention) ☆178 · Updated last year
- ☆97 · Updated last year
- Imitate OpenAI with Local Models ☆88 · Updated last year
- Open-domain multi-turn evaluation benchmark for general-purpose Chinese foundation models | An Open Domain Benchmark for Foundation Models in Chinese ☆80 · Updated 2 years ago
- Complete training code for the open-source, high-performance Llama model, covering the full pipeline from pre-training to RLHF. ☆67 · Updated 2 years ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆251 · Updated last year
- Chinese large language model evaluation, phase 1 ☆110 · Updated 2 years ago
- Native Chinese retrieval-augmented generation evaluation benchmark ☆123 · Updated last year
- Measuring Massive Multitask Chinese Understanding ☆89 · Updated last year
- Chinese instruction tuning datasets ☆137 · Updated last year
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark ☆102 · Updated 2 years ago
- Lightweight local website for displaying the performance of different chat models. ☆87 · Updated last year
- FlagEval is an evaluation toolkit for AI large foundation models. ☆338 · Updated 6 months ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆135 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆138 · Updated 10 months ago
- ☆172 · Updated 2 years ago
- ☆68 · Updated 2 years ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated 2 years ago