This is a project for training a large language model from scratch, covering pretraining, fine-tuning, and direct preference optimization (DPO). The model has 1B parameters and supports both Chinese and English.
☆826 · Feb 18, 2025 · Updated last year
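As a rough illustration of the third stage named above, here is a minimal sketch of the DPO objective in PyTorch. It is a generic illustration rather than code from mini_qwen; the per-sequence log-probability inputs and the `beta` hyperparameter are assumptions.

```python
# Minimal DPO loss sketch (generic illustration, not mini_qwen's actual code).
# Inputs are assumed to be per-sequence log-probabilities summed over response tokens.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization loss over a batch of preference pairs."""
    # How much more (or less) likely the policy makes each response vs. the frozen reference
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # Widen the margin between preferred and rejected responses, scaled by beta
    margins = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(margins).mean()
```

Pretraining and SFT in such a pipeline typically use the standard next-token cross-entropy loss; only the preference stage needs paired chosen/rejected responses and a frozen reference model.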
Alternatives and similar repositories for mini_qwen
Users interested in mini_qwen are comparing it to the repositories listed below.
- Train a 1B LLM on 1T tokens from scratch as a personal project ☆799 · Apr 27, 2025 · Updated 11 months ago
- A repository for individuals to experiment with and reproduce the LLM pre-training process. ☆499 · May 1, 2025 · Updated 11 months ago
- A reproduction of open-r1 that applies GRPO training to 0.5B, 1.5B, 3B, and 7B Qwen models, with some interesting observations recorded. ☆61 · Apr 13, 2025 · Updated last year
- This project shares the technical principles behind large models along with hands-on experience (LLM engineering and putting LLM applications into production). ☆24,035 · Mar 12, 2026 · Updated last month
- Train a LLaVA model with better Chinese support, with the training code and data open-sourced. ☆82 · Sep 6, 2024 · Updated last year
- 🚀🚀 Train a 64M-parameter GPT completely from scratch in just 2 hours! 🌏 ☆46,572 · Apr 10, 2026 · Updated last week
- Implement a small-parameter Chinese large language model from scratch. ☆1,003 · Aug 22, 2024 · Updated last year
- Reproduce R1-Zero on logic puzzles ☆2,445 · Mar 20, 2025 · Updated last year
- A lightweight LLM post-training framework supporting SFT, RLVR, on-policy KD, guide KD, and hybrid training; implements single-turn/multi-turn guide distillation, multi-teacher distillation, reward-mixed training, and automated data routing 👩🎓👨🎓 ☆934 · Mar 8, 2026 · Updated last month
- Minimal-cost training of a 0.5B R1-Zero ☆814 · May 14, 2025 · Updated 11 months ago
- This project applies LoRA fine-tuning with counseling CoT data to Deepseek-R1-Distill-Qwen-7B to further improve its slow-thinking ability in the psychological counseling domain. ☆12 · Mar 11, 2025 · Updated last year
- Building a large model from scratch: a complete walkthrough from pretraining to RLHF ☆2,604 · Mar 19, 2026 · Updated last month
- Mainly documents knowledge and interview questions for large language model (LLM) algorithm/application engineers ☆13,870 · Apr 30, 2025 · Updated 11 months ago
- Reproductions of large-model algorithms and assorted study notes ☆3,259 · Mar 21, 2026 · Updated 3 weeks ago
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) ☆70,203 · Apr 12, 2026 · Updated last week
- A 0.2B Chinese dialogue model (ChatLM-Chinese-0.2B), with the complete code open-sourced for the whole pipeline: dataset sources, data cleaning, tokenizer training, model pretraining, SFT instruction fine-tuning, and RLHF optimization. Supports SFT fine-tuning for downstream tasks, with a triple-extraction fine-tuning example. ☆1,695 · Apr 20, 2024 · Updated 2 years ago
- A repository for pretraining + SFT of a small-parameter Chinese LLaMa2 from scratch; a single 24GB GPU is enough to produce a chat-llama2 with basic Chinese Q&A ability. ☆2,905 · May 21, 2024 · Updated last year
- Train a DeepSeek R1-like reasoning LLM with ease ☆19 · Feb 15, 2025 · Updated last year
- "A Beginner's Guide to Open-Source Large Models": a tutorial, tailored for Chinese newcomers, on quickly fine-tuning (full-parameter/LoRA) and deploying domestic and international open-source LLMs and multimodal large models (MLLMs) in a Linux environment ☆29,737 · Updated this week
- Build a MiniLLM from 0 to 1 (pretrain + SFT + DPO, in progress) ☆546 · Mar 23, 2025 · Updated last year
- Fully open reproduction of DeepSeek-R1 ☆25,991 · Apr 2, 2026 · Updated 2 weeks ago
- verl: Volcano Engine Reinforcement Learning for LLMs ☆20,789 · Updated this week
- MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline. Trains medical LLMs, implementing continued pretraining (PT), supervised fine-tuning (SFT), RLHF, DPO, ORPO, and GRPO. ☆5,255 · Updated this week
- 🚀 Train a 67M-parameter vision multimodal VLM from scratch in just 1 hour! 🌏 ☆7,486 · Apr 4, 2026 · Updated 2 weeks ago
- "A White-Box Guide to Building Large Models": Tiny-Universe, built entirely by hand ☆4,727 · Feb 12, 2026 · Updated 2 months ago
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy… ☆9,340 · Updated this week
- Use PEFT or full-parameter training to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.6, DeepSeek-R1, GLM-5, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL, … ☆13,783 · Updated this week
- A curated collection of open-source Chinese large language models, focusing on smaller models that can be privately deployed at lower training cost, covering base models, vertical-domain fine-tuning and applications, datasets, and tutorials. ☆22,525 · May 19, 2025 · Updated 11 months ago
- This repository aims to reproduce R1-Zero in the medical domain. ☆32 · Jun 11, 2025 · Updated 10 months ago
- Building a VLM starting from the basic modules. ☆18 · Apr 7, 2024 · Updated 2 years ago
- Phi2-Chinese-0.2B: train your own small Chinese Phi2 chat model from scratch; supports integration with LangChain to load a local knowledge base for retrieval-augmented generation (RAG). ☆589 · Jul 11, 2024 · Updated last year
- This project fine-tunes Deepseek-R1-Distill-Qwen-7B on medical-domain CoT data, using QLoRA quantization and Unsloth-accelerated training to significantly improve the model's slow-thinking ability on complex medical reasoning tasks. Knowledge distillation gives the lightweight model the reasoning advantages of a large model, delivering efficient, accurate, and interpretable… ☆43 · Mar 10, 2025 · Updated last year
- Building DeepSeek R1 from Scratch ☆752 · Mar 21, 2025 · Updated last year
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL ☆4,860 · Apr 6, 2026 · Updated last week
- Search-R1: An Efficient, Scalable RL Training Framework for Reasoning & Search Engine Calling interleaved LLM based on veRL ☆4,494 · Nov 13, 2025 · Updated 5 months ago
- Something for a paper agent ☆11 · Dec 18, 2024 · Updated last year
- ☆143 · Sep 29, 2024 · Updated last year
- A very simple GRPO implementation for reproducing R1-like LLM thinking. ☆1,644 · Nov 21, 2025 · Updated 4 months ago
- ☆34 · Jul 8, 2025 · Updated 9 months ago