codefuse-ai / MFTCoder
A high-accuracy and high-efficiency multi-task fine-tuning framework for Code LLMs. This work has been accepted by KDD 2024.
☆664 · Updated last month
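MFTCoder's central idea is fine-tuning one model on several code tasks at once while keeping the per-task losses balanced. Below is a minimal, generic sketch of convex loss weighting in PyTorch; the function and task names are illustrative assumptions, not MFTCoder's actual API, and the published method (per the KDD 2024 paper) adapts task weights dynamically rather than fixing them.

```python
import torch

def weighted_multitask_loss(per_task_losses: dict[str, torch.Tensor],
                            weights: dict[str, float]) -> torch.Tensor:
    """Combine per-task losses with fixed convex weights.

    Illustrative sketch only: MFTCoder's published scheme balances
    task weights dynamically during training instead of fixing them.
    """
    total = sum(weights.values())
    return sum(weights[name] * loss for name, loss in per_task_losses.items()) / total

# Hypothetical usage with two code tasks:
losses = {"code_completion": torch.tensor(1.7), "unit_test_gen": torch.tensor(2.3)}
print(weighted_multitask_loss(losses, {"code_completion": 0.5, "unit_test_gen": 0.5}))
```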
Alternatives and similar repositories for MFTCoder:
Users interested in MFTCoder are comparing it to the libraries listed below.
- High-performance LLM inference based on our optimized version of FasterTransformer ☆124 · Updated last year
- Industrial-grade evaluation benchmarks for code LLMs across the full life cycle of AI-native software development; an enterprise-grade evaluation suite for code LLMs, continuously being released ☆86 · Updated last year
- Index of the CodeFuse Repositories ☆136 · Updated 5 months ago
- FlagEval is an evaluation toolkit for large AI foundation models. ☆319 · Updated 7 months ago
- ☆307 · Updated 7 months ago
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models. ☆440 · Updated 4 months ago
- LongBench v2 and LongBench (ACL 2024) ☆782 · Updated last month
- ☆476 · Updated last month
- AgentTuning: Enabling Generalized Agent Abilities for LLMs ☆1,388 · Updated last year
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆233 · Updated 3 months ago
- A streamlined and customizable framework for efficient large model evaluation and performance benchmarking ☆442 · Updated this week
- CMMLU: Measuring massive multitask language understanding in Chinese ☆726 · Updated 2 months ago
- 🩹 Editing large language models within 10 seconds ⚡ ☆1,310 · Updated last year
- Run evaluations on LLMs using the HumanEval benchmark; see the sketch after this list. ☆395 · Updated last year
- ☆903 · Updated 9 months ago
- A collection of practical code generation tasks and tests in open source projects. Complementary to HumanEval by OpenAI. ☆131 · Updated last month
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆359 · Updated 6 months ago
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs, and models, mainly for evaluation of LLMs… ☆480 · Updated 3 months ago
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆577 · Updated 7 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆534 · Updated 2 months ago
- Yuan 2.0 Large Language Model ☆683 · Updated 7 months ago
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆259 · Updated 10 months ago
- ☆153 · Updated this week
- ☆889 · Updated 6 months ago
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a… ☆349 · Updated 10 months ago
- ☆209 · Updated 9 months ago
- ☆299 · Updated 8 months ago
- Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023] ☆1,691 · Updated last year
- ☆304 · Updated 5 months ago
- GAOKAO-Bench is an evaluation framework that utilizes GAOKAO questions as a dataset to evaluate large language models. ☆604 · Updated last month
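For the HumanEval entry above: a minimal sketch of the usual evaluation loop with OpenAI's human-eval harness. read_problems, write_jsonl, and the evaluate_functional_correctness command come from that package; generate_one_completion is a placeholder you would replace with your own model call.

```python
from human_eval.data import read_problems, write_jsonl

def generate_one_completion(prompt: str) -> str:
    # Placeholder: call your model here and return only the completed code body.
    return "    pass\n"

problems = read_problems()  # maps task_id -> {"prompt": ..., "test": ..., ...}
samples = [
    {"task_id": task_id, "completion": generate_one_completion(problems[task_id]["prompt"])}
    for task_id in problems
]
write_jsonl("samples.jsonl", samples)

# Score pass@k from the shell (executes model-written code; run it sandboxed):
#   evaluate_functional_correctness samples.jsonl
```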