codefuse-ai / MFTCoder
A high-accuracy, efficient multi-task fine-tuning framework for Code LLMs. This work was accepted at KDD 2024.
☆704Updated 11 months ago
Alternatives and similar repositories for MFTCoder
Users interested in MFTCoder are comparing it to the libraries listed below.
- Industrial-level evaluation benchmarks for Coding LLMs across the full life-cycle of AI-native software development. An enterprise-grade evaluation system for code LLMs, with benchmarks released on an ongoing basis.☆104Updated 7 months ago
- Index of the CodeFuse Repositories☆137Updated last year
- High-performance LLM inference based on our optimized version of FastTransformer☆122Updated 2 years ago
- FlagEval is an evaluation toolkit for AI large foundation models.☆339Updated 7 months ago
- Yuan 2.0 Large Language Model☆689Updated last year
- A generalized information-seeking agent system with Large Language Models (LLMs).☆1,196Updated last year
- CMMLU: Measuring massive multitask language understanding in Chinese☆795Updated last year
- AgentTuning: Enabling Generalized Agent Abilities for LLMs☆1,470Updated 2 years ago
- ☆330Updated last year
- ☆923Updated last year
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmark, demos, leaderboard, papers, docs and models, mainly for Evaluation on LLMs…☆582Updated 2 weeks ago
- XVERSE-13B: A multilingual large language model developed by XVERSE Technology Inc.☆645Updated last year
- The official repo of Aquila2 series proposed by BAAI, including pretrained & chat large language models.☆445Updated last year
- LongBench v2 and LongBench (ACL 25'&24')☆1,038Updated 10 months ago
- 🩹Editing large language models within 10 seconds⚡☆1,356Updated 2 years ago
- ☆180Updated last week
- ☆232Updated last year
- Awesome LLM Benchmarks to evaluate the LLMs across text, code, image, audio, video and more.☆157Updated last year
- ☆768Updated last year
- ☆482Updated last year
- Inference code of Lingma SWE-GPT☆252Updated last year
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models.☆253Updated last year
- A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI☆773Updated last year
- ☆914Updated last year
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024)☆423Updated last month
- Easy and efficient fine-tuning for LLMs (supports LLaMA, LLaMA2, LLaMA3, Qwen, Baichuan, GLM, Falcon). Efficient quantized training and deployment of large models.☆619Updated 10 months ago
- Official github repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023]☆1,789Updated 4 months ago
- SOTA open-source math LLM☆332Updated 2 years ago
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a…☆346Updated last year
- A collection of practical code generation tasks and tests in open source projects. Complementary to HumanEval by OpenAI.☆155Updated 11 months ago