CLUEbenchmark / SuperCLUE-Agent
SuperCLUE-Agent: a benchmark for evaluating the core capabilities of LLM agents on native Chinese tasks
☆89 · Updated last year
Alternatives and similar repositories for SuperCLUE-Agent
Users interested in SuperCLUE-Agent are comparing it to the libraries listed below.
- ☆145 · Updated last year
- ☆325 · Updated last year
- ☆163 · Updated 2 years ago
- ☆175 · Updated last year
- ☆98 · Updated last year
- ☆233 · Updated last year
- Chinese large language model evaluation, round 2 ☆71 · Updated last year
- ☆128 · Updated 2 years ago
- A multi-dimensional Chinese alignment evaluation benchmark for large models (ACL 2024) ☆409 · Updated last year
- ☆96 · Updated last year
- A native Chinese evaluation benchmark for retrieval-augmented generation ☆121 · Updated last year
- Measuring Massive Multitask Chinese Understanding ☆88 · Updated last year
- Chinese large language model evaluation, round 1 ☆110 · Updated last year
- How to train an LLM tokenizer ☆152 · Updated 2 years ago
- ☆260 · Updated 3 months ago
- ☆145 · Updated last year
- Analysis of the Chinese cognitive abilities of language models ☆237 · Updated last year
- A lightweight local website for displaying the performance of different chat models ☆87 · Updated last year
- Imitate OpenAI with local models ☆89 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆42 · Updated last year
- An open-domain multi-turn evaluation benchmark for general-purpose foundation models in Chinese ☆79 · Updated 2 years ago
- FlagEval is an evaluation toolkit for large AI foundation models ☆339 · Updated 4 months ago
- Complete training code for an open-source, high-performance Llama model, covering the full pipeline from pre-training to RLHF ☆67 · Updated 2 years ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆132 · Updated last year
- 1st-place solution for the Conversational Multi-Doc QA Workshop & International Challenge @ WSDM'24, by Xiaohongshu Inc. ☆161 · Updated last month
- A massive multi-level, multi-subject knowledge evaluation benchmark ☆102 · Updated 2 years ago
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆114 · Updated 2 years ago
- An instruction-tuning toolkit for large language models (with FlashAttention support) ☆178 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Updated last year
- ☆67 · Updated 2 years ago