codefuse-ai / MFTCoder
A high-accuracy and high-efficiency multi-task fine-tuning framework for Code LLMs. This work has been accepted by KDD 2024.
☆687 · Updated 4 months ago
Alternatives and similar repositories for MFTCoder
Users interested in MFTCoder are comparing it to the libraries listed below.
- Industrial-level evaluation benchmarks for coding LLMs across the full life cycle of AI-native software development (an enterprise-grade evaluation suite for code LLMs, continuously being opened up) ☆96 · Updated 2 weeks ago
- High-performance LLM inference based on our optimized version of FasterTransformer ☆123 · Updated last year
- Index of the CodeFuse Repositories ☆136 · Updated 8 months ago
- ☆914 · Updated 11 months ago
- FlagEval is an evaluation toolkit for AI large foundation models. ☆335 · Updated 3 weeks ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆554 · Updated 5 months ago
- ☆318 · Updated 10 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆240 · Updated 6 months ago
- LongBench v2 and LongBench (ACL 2024) ☆875 · Updated 4 months ago
- Run evaluation on LLMs using the human-eval benchmark ☆411 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆365 · Updated 8 months ago
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆261 · Updated 8 months ago
- CMMLU: Measuring massive multitask language understanding in Chinese ☆758 · Updated 5 months ago
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a… ☆348 · Updated last year
- ☆527 · Updated 4 months ago
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆270 · Updated last year
- ☆750 · Updated 11 months ago
- 🩹 Editing large language models within 10 seconds ⚡ ☆1,328 · Updated last year
- ☆901 · Updated 9 months ago
- ☆315 · Updated 8 months ago
- ☆324 · Updated 11 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆258 · Updated last year
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". ☆243 · Updated 6 months ago
- Ling is a MoE LLM provided and open-sourced by InclusionAI. ☆152 · Updated this week
- SOTA open-source math LLM ☆332 · Updated last year
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆386 · Updated 9 months ago
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs and models, mainly for evaluation of LLMs… ☆527 · Updated 6 months ago
- ☆280 · Updated 9 months ago
- A generalized information-seeking agent system with Large Language Models (LLMs). ☆1,160 · Updated 10 months ago
- AgentTuning: Enabling Generalized Agent Abilities for LLMs ☆1,433 · Updated last year