yongzhuo / LLM-SFT
Chinese LLM fine-tuning (LLM-SFT) with the MWP-Instruct math instruction dataset; supported models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B); supported tooling (LoRA, QLoRA, DeepSpeed, UI, TensorboardX); supported workflows (fine-tuning, inference, evaluation, API), etc.
☆217 · May 17, 2024 · Updated last year
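A minimal sketch of the LoRA-style fine-tuning setup this repo advertises, using Hugging Face transformers and peft; the model name, target module, and hyperparameters below are illustrative assumptions, not values taken from the repo's own scripts:

```python
# Hypothetical LoRA setup sketch -- not this repo's actual training code.
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "THUDM/chatglm-6b"  # one of the supported bases (assumed for illustration)

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    trust_remote_code=True,
)

# Attach LoRA adapters: the base weights stay frozen and only the small
# low-rank update matrices are trained, which is what makes SFT fit on one GPU.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                 # rank of the low-rank updates (assumed)
    lora_alpha=16,                       # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # ChatGLM-6B's fused attention projection
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()       # typically well under 1% of all parameters
```

From here the wrapped model can be trained with a standard transformers `Trainer` or a custom loop; QLoRA would additionally load the base weights in 4-bit via bitsandbytes before attaching the adapters.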
Alternatives and similar repositories for LLM-SFT
Users that are interested in LLM-SFT are comparing it to the libraries listed below
- A curated collection of open-source SFT datasets, updated on an ongoing basis ☆569 · Jun 2, 2023 · Updated 2 years ago
- Builds the most important Agent capability, function calling, from scratch ☆17 · Oct 16, 2024 · Updated last year
- Firefly: an LLM training toolkit supporting Qwen2.5, Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, … ☆6,635 · Oct 24, 2024 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆416 · Jun 25, 2025 · Updated 7 months ago
- ☆87 · Dec 29, 2023 · Updated 2 years ago
- Alpaca Chinese Dataset -- a Chinese instruction fine-tuning dataset ☆216 · Oct 6, 2024 · Updated last year
- LongAttn: Selecting Long-context Training Data via Token-level Attention ☆15 · Jul 16, 2025 · Updated 7 months ago
- Provides XLNet pre-trained models for Chinese, aiming to enrich Chinese NLP resources and broaden the choice of Chinese pre-trained models. Researchers are welcome to download and use them, and to jointly advance the development of Chinese language resources. ☆11 · May 30, 2023 · Updated 2 years ago
- Python package for compressing floating-point PyTorch tensors ☆13 · Jul 22, 2024 · Updated last year
- MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline. Trains medical LLMs, implementing continued pre-training (PT), supervised fine-tuning (SFT), RLHF, DPO, ORPO, and GRPO. ☆4,761 · Feb 10, 2026 · Updated last week
- Dataset synthesis, model training, and evaluation for LLM math problem-solving, with accompanying write-ups. ☆100 · Sep 14, 2024 · Updated last year
- An instruction-tuning toolkit for large language models (with FlashAttention support) ☆177 · Jan 4, 2024 · Updated 2 years ago
- LLM+RAG for QA ☆22 · Jan 15, 2024 · Updated 2 years ago
- The code of “Prototypical Graph Contrastive Learning”. [TNNLS 2022] ☆24 · Aug 21, 2022 · Updated 3 years ago
- A general-purpose collection of simple utilities ☆22 · Oct 6, 2024 · Updated last year
- Fine-tuning ChatGLM-6B with PEFT | efficient PEFT-based ChatGLM fine-tuning ☆3,730 · Oct 12, 2023 · Updated 2 years ago
- Firefly Chinese LLaMA-2 LLM, supporting continued pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other LLMs ☆416 · Oct 21, 2023 · Updated 2 years ago
- [CIKM 2025] Constraint Back-translation Improves Complex Instruction Following of Large Language Models ☆17 · May 23, 2025 · Updated 8 months ago
- Fine-tuning large language models via instruction tuning. ☆11 · Jun 28, 2023 · Updated 2 years ago
- InternLM-7B fine-tuning: SFT/LoRA, instruction fine-tuning ☆13 · May 17, 2024 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆67 · Mar 27, 2023 · Updated 2 years ago
- Downstream task fine-tuning based on ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B, covering Freeze, LoRA, P-tuning, full-parameter fine-tuning, and more ☆2,776 · Dec 12, 2023 · Updated 2 years ago
- Pico is a numpy-based "pico" neural network framework with a torch-like coding style and an autograd implementation, including an MNIST example. ☆11 · Mar 11, 2022 · Updated 3 years ago
- ⛏️ Storage for my slides, reports, and papers. ☆12 · Oct 27, 2024 · Updated last year
- Shows the mapping between ImageNet IDs/labels and PyTorch pre-trained model output IDs/labels ☆10 · Oct 11, 2020 · Updated 5 years ago
- (NBCE) Naive Bayes-based Context Extension on ChatGLM-6B ☆15 · Jun 7, 2023 · Updated 2 years ago
- Generate dialog data from documents using LLMs such as ChatGLM2 or ChatGPT ☆163 · Oct 25, 2023 · Updated 2 years ago
- Chinese LLaMA & Alpaca LLMs, with local CPU/GPU training and deployment ☆18,964 · Jul 15, 2025 · Updated 7 months ago
- [ICML 2025] Logits are All We Need to Adapt Closed Models ☆21 · May 2, 2025 · Updated 9 months ago
- Uses an LLM to analyze unstructured text in Chinese documents, images, and PDFs, extract subject-predicate-object (SPO) knowledge triples, and visualize these relations as a knowledge graph. ☆23 · Apr 16, 2025 · Updated 10 months ago
- ☆21 · Jul 3, 2025 · Updated 7 months ago
- BELLE: Be Everyone's Large Language model Engine (an open-source Chinese conversational LLM) ☆8,281 · Oct 16, 2024 · Updated last year
- Collects open-source datasets for table-intelligence tasks (e.g., table QA, table-to-text generation), converts the raw data into instruction fine-tuning format, and fine-tunes LLMs to strengthen their understanding of tabular data, ultimately building a large language model specialized for table-intelligence tasks. ☆638 · Apr 22, 2024 · Updated last year
- Gemma-SFT: gemma-2b/gemma-7b fine-tuning (transformers), LoRA (peft), and inference ☆33 · May 17, 2024 · Updated last year
- A large-scale 7B pretraining language model developed by BaiChuan-Inc. ☆5,686 · Jul 18, 2024 · Updated last year
- ☆16 · Jun 3, 2025 · Updated 8 months ago
- Training an LLM from scratch on a single 24 GB GPU ☆56 · Jul 9, 2025 · Updated 7 months ago
- Stay tuned. ☆17 · Jul 7, 2025 · Updated 7 months ago
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs and parameter-efficient methods (e.g., lora, p-tunin… ☆2,799 · Dec 12, 2023 · Updated 2 years ago