owenliang / qwen2.5-0.5b-grpo
Qwen2.5 0.5B GRPO
☆78 · Updated 11 months ago
Alternatives and similar repositories for qwen2.5-0.5b-grpo
Users interested in qwen2.5-0.5b-grpo are comparing it to the libraries listed below.
- llm & rl ☆271 · Updated 3 months ago
- ☆136 · Updated last year
- Train a LLaVA model with better Chinese support, with open-sourced training code and data. ☆79 · Updated last year
- DPO training for Qwen (通义千问). ☆60 · Updated last year
- Notes on multimodal topics for large language model (LLM) algorithm/application engineers. ☆263 · Updated last year
- DeepSpeed tutorials, annotated examples, and study notes (efficient training of large models). ☆186 · Updated 2 years ago
- A hands-on code repository covering multiple mainstream LLM fine-tuning approaches, based on the Qwen3 model family. ☆113 · Updated 5 months ago
- LLM Tokenizer with BPE algorithm ☆47 · Updated last year
- [COLM 2025] LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation ☆166 · Updated 6 months ago
- ☆129 · Updated last year
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18GB GPU Memory [COLM 2025] ☆199 · Updated 6 months ago
- A PyTorch reimplementation of the Transformer. ☆92 · Updated 2 years ago
- Grafting SmolVLM2's vision head onto the Qwen3-0.6B model and fine-tuning the combination. ☆509 · Updated 4 months ago
- An ecosystem of LLM and multimodal model projects, mainly covering cross-modal search, speculative decoding, QAT quantization, multimodal quantization, chatbots, and OCR. ☆196 · Updated this week
- ☆412 · Updated 11 months ago
- PyTorch distributed training tutorials ☆169 · Updated 7 months ago
- Build a simple, basic multimodal large model from scratch. 🤖 ☆47 · Updated last year
- A reproduction of open-r1 that runs GRPO training on the 0.5B, 1.5B, 3B, and 7B Qwen models, with some interesting observations. ☆54 · Updated 9 months ago
- 童发发's large-model learning journey. ☆135 · Updated 5 months ago
- This is a user guide for the MiniCPM and MiniCPM-V series of small language models (SLMs) developed by ModelBest. "面壁小钢炮" focuses on achi… ☆299 · Updated 7 months ago
- For People! For Freedom! ☆142 · Updated 5 months ago
- From MHA, MQA, and GQA to MLA, by 苏剑林 (Su Jianlin), with code ☆41 · Updated 11 months ago
- WWW2025 Multimodal Intent Recognition for Dialogue Systems Challenge ☆130 · Updated last year
- ☆85 · Updated last year
- ThinkLLM: 🚀 lightweight, efficient implementations of large language model algorithms ☆114 · Updated 8 months ago
- 青稞Talk ☆190 · Updated last week
- Walk through the ChatGPT technical pipeline from scratch. ☆272 · Updated last year
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆147 · Updated 9 months ago
- Adapt an LLM into a Mixture-of-Experts model using parameter-efficient fine-tuning (LoRA), injecting the LoRAs into the FFN. ☆81 · Updated 3 months ago
- MLLM @ Game ☆15 · Updated 8 months ago