This is a project for training a large language model from scratch, covering pretraining, fine-tuning, and direct preference optimization (DPO). The model has 1B parameters and supports both Chinese and English.
☆807 · Feb 18, 2025 · Updated last year
Alternatives and similar repositories for mini_qwen
Users interested in mini_qwen are comparing it to the libraries listed below.
- Train a 1B-parameter LLM on 1T tokens from scratch, as an individual project ☆792 · Apr 27, 2025 · Updated 11 months ago
- A repository for individuals to experiment with and reproduce the LLM pre-training process. ☆496 · May 1, 2025 · Updated 10 months ago
- A reproduction of open-r1: GRPO training of 0.5B, 1.5B, 3B, and 7B Qwen models, with some interesting observed behaviors. ☆57 · Apr 13, 2025 · Updated 11 months ago
- This project shares the technical principles behind large models along with hands-on experience (LLM engineering and real-world application deployment). ☆23,762 · Mar 12, 2026 · Updated 2 weeks ago
- Train a LLaVA model with better Chinese support, with open-source training code and data. ☆81 · Sep 6, 2024 · Updated last year
- 🚀🚀 Train a 26M-parameter GPT completely from scratch in just 2 hours! 🌏 ☆41,885 · Feb 6, 2026 · Updated last month
- Implement a small-parameter Chinese large language model from scratch. ☆986 · Aug 22, 2024 · Updated last year
- Reproduce R1 Zero on Logic Puzzle ☆2,442 · Mar 20, 2025 · Updated last year
- Minimal-cost training of a 0.5B R1-Zero ☆813 · May 14, 2025 · Updated 10 months ago
- A lightweight LLM post-training framework supporting SFT, RLVR, on-policy KD, guide KD, and mixed training; implements single-turn/multi-turn guide distillation, multi-teacher distillation, reward-mixed training, and automated data routing 👩🎓👨🎓 ☆932 · Mar 8, 2026 · Updated 3 weeks ago
- This project LoRA-fine-tunes Deepseek-R1-Distill-Qwen-7B on psychological-counseling CoT data, to further improve its slow-thinking ability in the counseling domain. ☆12 · Mar 11, 2025 · Updated last year
- Building large models from scratch: a complete hands-on guide from pretraining to RLHF ☆2,553 · Mar 19, 2026 · Updated last week
- Knowledge notes and interview questions for LLM algorithm (application) engineers ☆13,487 · Apr 30, 2025 · Updated 11 months ago
- Reproductions of LLM-related algorithms, plus some study notes ☆3,153 · Mar 21, 2026 · Updated last week
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) ☆69,106 · Updated this week
- A repo for pretraining + SFT of a small-parameter Chinese LLaMa2 from scratch; a single 24GB GPU is enough to obtain a chat-llama2 with basic Chinese Q&A ability. ☆2,903 · May 21, 2024 · Updated last year
- Train a deepseek r1-like reasoning LLM with ease ☆19 · Feb 15, 2025 · Updated last year
- ChatLM-Chinese-0.2B, a 0.2B Chinese dialogue model, open-sourcing code for the full pipeline: dataset sourcing, data cleaning, tokenizer training, model pretraining, SFT instruction fine-tuning, and RLHF optimization. Supports downstream-task SFT fine-tuning, with a worked example of triple (information-extraction) fine-tuning. ☆1,687 · Apr 20, 2024 · Updated last year
- "A Cookbook for Open-Source LLMs": Linux-based tutorials, tailored for beginners in China, on quickly fine-tuning (full-parameter/LoRA) and deploying open-source LLMs and multimodal large models (MLLMs) from China and abroad ☆29,261 · Mar 22, 2026 · Updated last week
- Build a MiniLLM from 0 to 1 (pretrain + SFT + DPO, in progress) ☆544 · Mar 23, 2025 · Updated last year
- verl: Volcano Engine Reinforcement Learning for LLMs ☆20,286 · Updated this week
- 🚀 Train a 26M-parameter vision multimodal VLM from scratch in just 1 hour! 🌏 ☆7,008 · Feb 4, 2026 · Updated last month
- MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline. Trains medical LLMs, implementing incremental pretraining (PT), supervised fine-tuning (SFT), RLHF, DPO, ORPO, and GRPO. ☆5,123 · Updated this week
- Fully open reproduction of DeepSeek-R1 ☆25,968 · Nov 24, 2025 · Updated 4 months ago
- "A White-Box Guide to Building Large Models": Tiny-Universe, built entirely from scratch by hand ☆4,653 · Feb 12, 2026 · Updated last month
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,231 · Updated this week
- A curated list of open-source Chinese LLMs, focusing on smaller models that can be privately deployed and trained at low cost, covering base models, domain-specific fine-tunes and applications, datasets, and tutorials. ☆22,469 · May 19, 2025 · Updated 10 months ago
- Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL, … ☆13,391 · Updated this week
- This repository aims to reproduce R1-Zero in the medical domain. ☆32 · Jun 11, 2025 · Updated 9 months ago
- Building a VLM model starting from the basic modules. ☆18 · Apr 7, 2024 · Updated last year
- Phi2-Chinese-0.2B: train your own small Chinese Phi2 chat model from scratch; supports langchain integration to load a local knowledge base for retrieval-augmented generation (RAG). ☆587 · Jul 11, 2024 · Updated last year
- This project fine-tunes Deepseek-R1-Distill-Qwen-7B on medical-domain CoT data, using QLoRA quantization and Unsloth-accelerated training to significantly improve the model's slow-thinking ability on complex medical reasoning tasks. Knowledge distillation gives the lightweight model the reasoning advantages of a large model, achieving efficient, accurate, and interpretable… ☆41 · Mar 10, 2025 · Updated last year
- Building DeepSeek R1 from Scratch ☆751 · Mar 21, 2025 · Updated last year
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL ☆4,776 · Updated this week
- Something for a paper agent ☆11 · Dec 18, 2024 · Updated last year
- A very simple GRPO implementation for reproducing r1-like LLM thinking. ☆1,621 · Nov 21, 2025 · Updated 4 months ago
- ☆143 · Sep 29, 2024 · Updated last year
- ☆33 · Jul 8, 2025 · Updated 8 months ago
- Train a Language Model with GRPO to create a schedule from a list of events and priorities ☆266 · Apr 29, 2025 · Updated 11 months ago