qiufengqijun / mini_qwen
A project for training a large language model from scratch, covering pretraining, fine-tuning, and direct preference optimization (DPO); the model has 1B parameters and supports both Chinese and English.
☆744 · Feb 18, 2025 · Updated 11 months ago
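The training code itself is not shown on this page; as a rough orientation for the DPO stage named in the description, here is a minimal PyTorch sketch of the standard DPO loss (Rafailov et al., 2023). The function name, argument names, and the β=0.1 default are illustrative assumptions, not taken from mini_qwen.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss. Each argument is a tensor of per-sequence
    log-probabilities (summed token log-probs) for the chosen/rejected
    responses under the trained policy and the frozen reference model."""
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # Reward widening the chosen-vs-rejected margin relative to the reference.
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()
```

In practice, β controls how far the policy may drift from the reference model; mini_qwen's actual hyperparameters live in its repository.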
Alternatives and similar repositories for mini_qwen
Users interested in mini_qwen are comparing it to the repositories listed below.
- Train a 1B LLM on 1T tokens from scratch as a personal project ☆788 · Apr 27, 2025 · Updated 9 months ago
- A personal repository for experimenting with and reproducing the LLM pre-training process. ☆495 · May 1, 2025 · Updated 9 months ago
- A reproduction of open-r1 that runs GRPO training on 0.5B, 1.5B, 3B, and 7B Qwen models, observing some interesting phenomena. ☆56 · Apr 13, 2025 · Updated 10 months ago
- A project for sharing the technical principles behind large models along with hands-on experience (LLM engineering and production deployment). ☆23,158 · Dec 30, 2025 · Updated last month
- Train a DeepSeek-R1-like reasoning LLM with ease ☆18 · Feb 15, 2025 · Updated last year
- Train a LLaVA model with better Chinese support, with open-sourced training code and data. ☆79 · Sep 6, 2024 · Updated last year
- 🚀 Train a 26M-parameter GPT completely from scratch in just 2 hours! ☆39,326 · Feb 6, 2026 · Updated last week
- Implement a small-parameter Chinese large language model from scratch. ☆946 · Aug 22, 2024 · Updated last year
- Reproduce R1 Zero on Logic Puzzle ☆2,435 · Mar 20, 2025 · Updated 10 months ago
- Welcome to LLM-Dojo, an open-source learning ground for large models, built with concise and readable code: a model-training framework (supporting mainstream models such as Qwen, Llama, and GLM), an RLHF framework (DPO/CPO/KTO/PPO), and more. 👩‍🎓👨‍🎓 ☆927 · Dec 1, 2025 · Updated 2 months ago
- Minimal-cost training of a 0.5B R1-Zero ☆809 · May 14, 2025 · Updated 9 months ago
- Notes on the knowledge and interview questions relevant to large language model (LLM) algorithm/application engineers. ☆12,379 · Apr 30, 2025 · Updated 9 months ago
- ☆32 · Jul 8, 2025 · Updated 7 months ago
- Build a MiniLLM from 0 to 1 (pretrain + SFT + DPO, work in progress). ☆527 · Mar 23, 2025 · Updated 10 months ago
- Reproductions of LLM-related algorithms, plus some study notes. ☆2,975 · Feb 10, 2026 · Updated last week
- LoRA fine-tuning of Deepseek-R1-Distill-Qwen-7B on psychological-counseling CoT data, to further improve its slow-thinking ability in the counseling domain. ☆12 · Mar 11, 2025 · Updated 11 months ago
- A repository for pretraining + SFT of a small-parameter Chinese LLaMa2 from scratch; a single 24GB GPU is enough to produce a chat-llama2 with basic Chinese Q&A ability. ☆2,888 · May 21, 2024 · Updated last year
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) ☆67,253 · Updated this week
- "The Open-Source LLM Usage Guide": a tutorial tailored for Chinese beginners on quickly fine-tuning (full-parameter/LoRA) and deploying domestic and international open-source LLMs and multimodal models (MLLMs) in a Linux environment. ☆28,222 · Feb 10, 2026 · Updated last week
- 🚀 Train a 26M-parameter vision multimodal VLM from scratch in just 1 hour! ☆6,391 · Feb 4, 2026 · Updated last week
- A 0.2B Chinese chat model (ChatLM-Chinese-0.2B), with fully open-source code for the entire pipeline: dataset sources, data cleaning, tokenizer training, model pretraining, SFT instruction fine-tuning, and RLHF optimization. Supports downstream-task SFT fine-tuning, with a triple-extraction fine-tuning example. ☆1,672 · Apr 20, 2024 · Updated last year
- "A White-Box Guide to Building Large Models": a fully hand-rolled Tiny-Universe. ☆4,487 · Updated this week
- MedicalGPT: Training Your Own Medical GPT Model with the ChatGPT Training Pipeline. Trains medical LLMs, implementing continued pretraining (PT), supervised fine-tuning (SFT), RLHF, DPO, ORPO, and GRPO. ☆4,761 · Feb 10, 2026 · Updated last week
- Building a large model from scratch: a complete walkthrough from pretraining to RLHF. ☆2,392 · Updated this week
- verl: Volcano Engine Reinforcement Learning for LLMs ☆19,246 · Updated this week
- Fully open reproduction of DeepSeek-R1 ☆25,879 · Nov 24, 2025 · Updated 2 months ago
- Phi2-Chinese-0.2B: train your own small Chinese Phi2 model from scratch, with langchain integration for loading a local knowledge base for retrieval-augmented generation (RAG). ☆585 · Jul 11, 2024 · Updated last year
- A travel agent based on Qwen2.5, fine-tuned with SFT + DPO/PPO/GRPO on a travel question-answer dataset; a mind map can be output using … ☆56 · Nov 14, 2025 · Updated 3 months ago
- Pretrain a wiki LLM using transformers ☆61 · Sep 1, 2024 · Updated last year
- A curated collection of open-source Chinese LLMs, focusing on smaller models that can be privately deployed at low training cost, covering base models, vertical-domain fine-tunes and applications, datasets, and tutorials. ☆22,214 · May 19, 2025 · Updated 8 months ago
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆8,989 · Feb 6, 2026 · Updated last week
- Use PEFT or full-parameter training to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3, Qwen3-MoE, DeepSeek-R1, GLM4.5, InternLM3, Llama4, ...) and 300+ MLLMs (… ☆12,670 · Updated this week
- Building DeepSeek R1 from Scratch ☆745 · Mar 21, 2025 · Updated 10 months ago
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL ☆4,599 · Feb 10, 2026 · Updated last week
- An ultra-simple reproduction of Deepseek-R1-Zero and Deepseek-R1, using the "24 game" as an example. Applies zero-RL, SFT, and SFT+RL to elicit the LLM's self-verification and reflection abilities. ☆33 · Apr 5, 2025 · Updated 10 months ago
- LLM knowledge sharing that anyone can understand; a must-read before LLM interviews in spring/autumn recruiting, so you can hold your own with interviewers. ☆5,501 · Feb 5, 2026 · Updated last week
- Interpretations, extensions, and reproductions of the DeepSeek family of work. ☆700 · Mar 29, 2025 · Updated 10 months ago
- ☆57 · Jul 8, 2025 · Updated 7 months ago
- Fine-tunes Deepseek-R1-Distill-Qwen-7B on medical-domain CoT data, using QLoRA quantization and Unsloth-accelerated training to significantly improve the model's slow-thinking ability on complex medical reasoning tasks. Knowledge distillation gives the lightweight model the reasoning advantages of a larger model, achieving efficient, accurate, and interpretable … ☆40 · Mar 10, 2025 · Updated 11 months ago (a generic LoRA setup for entries like this one is sketched after this list)
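Several entries above fine-tune Deepseek-R1-Distill-Qwen-7B with LoRA or QLoRA. As a generic sketch only, here is what a typical Hugging Face peft setup looks like; the rank, alpha, and target modules below are illustrative assumptions, not the configs of any listed repository.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base model; the checkpoint name matches the one the entries mention.
model = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B")

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)  # freezes base weights; only
model.print_trainable_parameters()          # the LoRA adapters are trained
```

QLoRA variants additionally load the base model in 4-bit (e.g. via bitsandbytes) before attaching the adapters, which is what makes single-GPU fine-tuning of a 7B model feasible.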