codefuse-ai / MFTCoder
High-accuracy and efficient multi-task fine-tuning framework for Code LLMs. This work has been accepted by KDD 2024.
☆705 · Updated 8 months ago
Alternatives and similar repositories for MFTCoder
Users interested in MFTCoder are comparing it to the libraries listed below.
- Industrial-level evaluation benchmarks for Coding LLMs across the full life-cycle of AI-native software development. An enterprise-grade code LLM evaluation suite, continuously expanding. ☆101 · Updated 4 months ago
- High-performance LLM inference based on our optimized version of FastTransformer ☆124 · Updated last year
- FlagEval is an evaluation toolkit for large AI foundation models. ☆337 · Updated 4 months ago
- ☆923 · Updated last year
- 🩹 Editing large language models within 10 seconds ⚡ ☆1,345 · Updated 2 years ago
- Yuan 2.0 Large Language Model ☆689 · Updated last year
- AgentTuning: Enabling Generalized Agent Abilities for LLMs ☆1,463 · Updated last year
- Index of the CodeFuse Repositories ☆138 · Updated last year
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs and models, mainly for evaluation of LLMs… ☆571 · Updated 3 weeks ago
- ☆763 · Updated last year
- CMMLU: Measuring massive multitask language understanding in Chinese ☆783 · Updated 9 months ago
- ☆325 · Updated last year
- Easy and efficient finetuning of LLMs (supports LLaMA, LLaMA2, LLaMA3, Qwen, Baichuan, GLM, Falcon). Efficient quantized training and deployment of large models. ☆613 · Updated 7 months ago
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models. ☆445 · Updated 11 months ago
- Run evaluation on LLMs using the HumanEval benchmark ☆419 · Updated 2 years ago
- Accelerate inference without tears ☆324 · Updated 6 months ago
- ☆174 · Updated this week
- LongBench v2 and LongBench (ACL '25 & '24) ☆970 · Updated 8 months ago
- A generalized information-seeking agent system with Large Language Models (LLMs). ☆1,185 · Updated last year
- ☆473 · Updated last year
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a… ☆347 · Updated last year
- Research on evaluating and aligning the values of Chinese large language models ☆536 · Updated 2 years ago
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆412 · Updated last year
- XVERSE-13B: A multilingual large language model developed by XVERSE Technology Inc. ☆645 · Updated last year
- State-of-the-art open-source math LLM ☆333 · Updated last year
- A framework for the evaluation of autoregressive code generation language models. ☆979 · Updated 2 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆569 · Updated 9 months ago
- GAOKAO-Bench is an evaluation framework that utilizes GAOKAO questions as a dataset to evaluate large language models. ☆680 · Updated 8 months ago
- SuperCLUE-Agent: a benchmark for evaluating core agent capabilities on native Chinese tasks ☆91 · Updated last year
- ☆908 · Updated last year