codefuse-ai / MFTCoder
A high-accuracy and efficient multi-task fine-tuning framework for Code LLMs. This work has been accepted by KDD 2024.
☆702 · Updated 9 months ago
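MFTCoder's headline technique is multi-task fine-tuning: losses from several code-related tasks are balanced so that no single task dominates the gradient. Below is a minimal sketch of that general idea in PyTorch; this is not MFTCoder's actual API, and the task names and uniform weights are illustrative assumptions only.

```python
import torch

def multitask_loss(per_token_losses: dict[str, torch.Tensor],
                   weights: dict[str, float]) -> torch.Tensor:
    """Combine per-task mean losses with explicit weights so a large task
    cannot dominate the gradient simply by contributing more tokens."""
    total = torch.zeros(())
    for task, losses in per_token_losses.items():
        total = total + weights[task] * losses.mean()
    return total

# Hypothetical task names, toy per-token losses, and uniform weights,
# purely for illustration of the balancing mechanism.
loss = multitask_loss(
    {"code_completion": torch.rand(128), "unit_test_generation": torch.rand(64)},
    {"code_completion": 0.5, "unit_test_generation": 0.5},
)
print(loss)
```

Fixed uniform weights are just the simplest instance; the MFTCoder paper studies more sophisticated loss-balancing schemes on top of this basic structure.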
Alternatives and similar repositories for MFTCoder
Users interested in MFTCoder are comparing it to the libraries listed below.
- Industrial-grade evaluation benchmarks for coding LLMs across the full life-cycle of AI-native software development; an enterprise-grade code LLM evaluation system, still being opened up. ☆101 · Updated 5 months ago
- AgentTuning: Enabling Generalized Agent Abilities for LLMs. ☆1,462 · Updated last year
- High-performance LLM inference based on our optimized version of FastTransformer. ☆123 · Updated last year
- A generalized information-seeking agent system with Large Language Models (LLMs). ☆1,188 · Updated last year
- Yuan 2.0 Large Language Model. ☆688 · Updated last year
- Index of the CodeFuse Repositories. ☆137 · Updated last year
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs and models, mainly for evaluation of LLMs… ☆571 · Updated last month
- 🩹 Editing large language models within 10 seconds ⚡ ☆1,348 · Updated 2 years ago
- CMMLU: Measuring massive multitask language understanding in Chinese. ☆784 · Updated 10 months ago
- ☆922 · Updated last year
- FlagEval is an evaluation toolkit for large AI foundation models. ☆338 · Updated 5 months ago
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models. ☆445 · Updated last year
- Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023]. ☆1,772 · Updated 2 months ago
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024). ☆413 · Updated last year
- Easy and efficient fine-tuning of LLMs (supports LLaMA, LLaMA2, LLaMA3, Qwen, Baichuan, GLM, Falcon); efficient quantized training and deployment of large models. ☆614 · Updated 8 months ago
- ☆764 · Updated last year
- XVERSE-13B: A multilingual large language model developed by XVERSE Technology Inc. ☆645 · Updated last year
- GAOKAO-Bench is an evaluation framework that uses GAOKAO (Chinese college entrance exam) questions as a dataset to evaluate large language models. ☆686 · Updated 9 months ago
- Awesome LLM Benchmarks to evaluate LLMs across text, code, image, audio, video and more. ☆149 · Updated last year
- ☆327 · Updated last year
- ☆231 · Updated last year
- OpenLLMWiki: Docs of OpenLLMAI. Survey, reproduction and domain/task adaptation of open-source ChatGPT alternatives/implementations. PiXi… ☆262 · Updated 10 months ago
- LongBench v2 and LongBench (ACL '25 & '24). ☆983 · Updated 8 months ago
- ☆908 · Updated last year
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a… ☆347 · Updated last year
- An LLM-based Web Navigating Agent (KDD'24). ☆890 · Updated last year
- Inference code of Lingma SWE-GPT. ☆244 · Updated 10 months ago
- Run evaluation on LLMs using the HumanEval benchmark (see the sketch after this list). ☆420 · Updated 2 years ago
- ☆354 · Updated last year
- Efficient Training (including pre-training and fine-tuning) for Big Models. ☆610 · Updated last month
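The HumanEval entry above follows a simple recipe: generate one completion per problem, then execute the bundled unit tests to compute pass@k. A minimal sketch using OpenAI's reference `human-eval` package (`pip install human-eval`) is shown below; the listed repository may wrap this differently, and `generate_one_completion` is a hypothetical placeholder for your own model call.

```python
from human_eval.data import read_problems, write_jsonl

def generate_one_completion(prompt: str) -> str:
    # Hypothetical placeholder: query your LLM with the function signature
    # and docstring in `prompt`, and return only the generated body.
    raise NotImplementedError

problems = read_problems()  # task_id -> {"prompt", "test", "entry_point", ...}
samples = [
    {"task_id": task_id, "completion": generate_one_completion(spec["prompt"])}
    for task_id, spec in problems.items()
]
write_jsonl("samples.jsonl", samples)

# Score with the package's CLI, which executes the generated code against
# the unit tests and reports pass@k:
#   $ evaluate_functional_correctness samples.jsonl
```

Note that the reference harness executes model-generated code directly, so it should be run in an isolated environment.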