Coobiw / MPP-LLaVA
Personal project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conversations}. Don't let poverty limit your imagination! Train your own 8B/14B LLaVA-style MLLM on an RTX 3090/4090 with 24 GB.
☆463 Updated 4 months ago
Alternatives and similar repositories for MPP-LLaVA
Users interested in MPP-LLaVA are comparing it to the libraries listed below.
- Reading notes about Multimodal Large Language Models, Large Language Models, and Diffusion Models ☆540 Updated 3 weeks ago
- Minimal-cost training of a 0.5B R1-Zero ☆765 Updated 2 months ago
- Welcome to LLM-Dojo, an open-source space for learning large models, built with concise and readable code: a model-training framework (supporting mainstream models such as Qwen, Llama, GLM, etc.), an RLHF framework (DPO/CPO/KTO/PPO), and more. ☆817 Updated 3 weeks ago
- Notes on multimodal topics for large language model (LLM) algorithm/application engineers ☆219 Updated last year
- Efficient Multimodal Large Language Models: A Survey ☆362 Updated 3 months ago
- ☆361 Updated 5 months ago
- DeepSpeed tutorials, annotated examples, and study notes (efficient large-model training) ☆173 Updated last year
- A collection of multimodal (MM) + Chat resources ☆273 Updated 2 months ago
- ☆699 Updated 3 weeks ago
- WWW2025 Multimodal Intent Recognition for Dialogue Systems Challenge ☆122 Updated 8 months ago
- This is the first paper to explore how to effectively use RL for MLLMs and introduces Vision-R1, a reasoning MLLM that leverages cold-sta… ☆659 Updated 3 weeks ago
- A collection of multimodal reasoning papers, code, datasets, benchmarks, and resources ☆279 Updated 3 weeks ago
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ☆718 Updated last week
- Train a LLaVA model with better Chinese support, with the training code and data open-sourced ☆64 Updated 11 months ago
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks ☆801 Updated 2 months ago
- [ICLR'24 spotlight] Chinese and English Multimodal Large Model Series (Chat and Paint) | A Chinese-English bilingual multimodal large-model series based on the CPM base models ☆1,063 Updated last year
- ☆91 Updated 10 months ago
- MM-Eureka V0 (also called R1-Multimodal-Journey); the latest version is in MM-Eureka ☆313 Updated last month
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆392 Updated 2 months ago
- A Framework of Small-scale Large Multimodal Models ☆864 Updated 3 months ago
- Train your Agent model via our easy and efficient framework ☆1,317 Updated last week
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception ☆578 Updated last year
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM", IJCV 2025 ☆263 Updated 2 months ago
- R1-Onevision, a visual language model capable of deep CoT reasoning ☆548 Updated 3 months ago
- [ACL 2025 Oral] 🔥🔥 MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval ☆210 Updated 2 months ago
- ✨ First open-source R1-like Video-LLM [2025/02/18] ☆352 Updated 5 months ago
- ☆366 Updated 5 months ago
- Repo for benchmarking Multimodal Retrieval-Augmented Generation with a dynamic VQA dataset and a self-adaptive planning agent ☆355 Updated 3 months ago
- Train a 1B LLM on 1T tokens from scratch (a personal project) ☆707 Updated 3 months ago
- Research code for the Multimodal-Cognition team at Ant Group ☆161 Updated 3 weeks ago