CLUEbenchmark / SuperCLUE-Agent
SuperCLUE-Agent: a benchmark for evaluating the core capabilities of agents on Chinese-native tasks
☆91 · Updated last year
Alternatives and similar repositories for SuperCLUE-Agent
Users interested in SuperCLUE-Agent are comparing it to the repositories listed below.
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆412 · Updated last year
- A Chinese-native retrieval-augmented generation (RAG) evaluation benchmark ☆122 · Updated last year
- A lightweight local website for displaying the performance of different chat models. ☆87 · Updated last year
- Chinese large language model evaluation, round 2 ☆71 · Updated last year
- How to train an LLM tokenizer ☆152 · Updated 2 years ago
- An instruction-tuning toolkit for large language models (with FlashAttention support) ☆178 · Updated last year
- 1st-place solution for the Conversational Multi-Doc QA Workshop & International Challenge @ WSDM'24, by Xiaohongshu Inc. ☆161 · Updated last month
- Chinese large language model evaluation, round 1 ☆110 · Updated last year
- Measuring Massive Multitask Chinese Understanding ☆88 · Updated last year
- Imitate OpenAI with local models ☆88 · Updated last year
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models. ☆248 · Updated 10 months ago
- FlagEval is an evaluation toolkit for large AI foundation models. ☆337 · Updated 4 months ago
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark ☆101 · Updated 2 years ago
- A state-of-the-art open-source math LLM ☆333 · Updated last year
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆134 · Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆272 · Updated 2 years ago
- Complete training code for the open-source, high-performance Llama model, covering the full pipeline from pre-training to RLHF. ☆67 · Updated 2 years ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆42 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Updated last year