codefuse-ai / codefuse-evaluation
Industrial-level evaluation benchmarks for coding LLMs across the full life-cycle of AI-native software development. An enterprise-grade evaluation system for code LLMs, released incrementally.
☆86 · Updated last year
Alternatives and similar repositories for codefuse-evaluation:
Users interested in codefuse-evaluation are comparing it to the libraries listed below.
- A collection of practical code generation tasks and tests in open source projects. Complementary to HumanEval by OpenAI. ☆131 · Updated last month
- Data processing for code LLM pre-training, fine-tuning, and DPO; an industry state-of-the-art processing pipeline. ☆33 · Updated 6 months ago
- CodeRAG-Bench: Can Retrieval Augment Code Generation? ☆109 · Updated 3 months ago
- A collection of practical code generation tasks and tests from open source projects. Complementary to HumanEval by OpenAI. ☆24 · Updated 2 years ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models. ☆233 · Updated 3 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models". ☆62 · Updated 7 months ago
- Inference code for Lingma SWE-GPT. ☆188 · Updated 2 months ago
- SuperCLUE-Agent: a benchmark for evaluating the core capabilities of AI agents on native Chinese tasks. ☆81 · Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023). ☆130 · Updated 6 months ago
- A multi-dimensional Chinese alignment evaluation benchmark for LLMs (ACL 2024). ☆359 · Updated 6 months ago
- A high-accuracy, high-efficiency multi-task fine-tuning framework for code LLMs; accepted at KDD 2024. ☆664 · Updated last month
- How to train an LLM tokenizer. ☆140 · Updated last year
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories. ☆50 · Updated 6 months ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models". ☆118 · Updated 8 months ago
- Large language model training in three stages, plus deployment. ☆47 · Updated last year
- CodeGPT: A Code-Related Dialogue Dataset Generated by GPT and for GPT. ☆112 · Updated last year
- NaturalCodeBench (Findings of ACL 2024). ☆62 · Updated 4 months ago
- FlagEval is an evaluation toolkit for large AI foundation models. ☆319 · Updated 7 months ago
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". ☆233 · Updated 3 months ago
- CRUD-RAG: A Comprehensive Chinese Benchmark for Retrieval-Augmented Generation of Large Language Models. ☆273 · Updated 3 months ago