Callione / LLaVA-MOSS2
A modified LLaVA framework that turns MOSS2 into a multimodal model.
☆13 · Updated 11 months ago
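Since the repo follows the LLaVA recipe (a vision encoder and projector bolted onto an LLM backbone, here MOSS2), a model built this way exposes the usual image-plus-prompt inference interface. Below is a minimal sketch of that pattern using the stock Hugging Face LLaVA-1.5 classes and checkpoint as stand-ins; LLaVA-MOSS2's actual checkpoint name, prompt template, and entry points may differ.

```python
# Minimal sketch of LLaVA-style multimodal inference, assuming LLaVA-MOSS2
# follows the standard LLaVA recipe (vision encoder + projector + LLM).
# The checkpoint and prompt template below belong to stock LLaVA-1.5 on the
# Hugging Face Hub, NOT to this repository's MOSS2-based model.
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # placeholder, not a MOSS2 checkpoint
model = LlavaForConditionalGeneration.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

# Any RGB image works; this one is from the LLaVA project page.
url = "https://llava-vl.github.io/static/images/view.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nWhat is shown in this picture? ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```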
Alternatives and similar repositories for LLaVA-MOSS2
Users interested in LLaVA-MOSS2 are comparing it to the repositories listed below.
- This is for the ACL 2025 Findings paper: From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities ☆52 · Updated last month
- Official PyTorch implementation of EMOVA in CVPR 2025 (https://arxiv.org/abs/2409.18042) ☆65 · Updated 5 months ago
- OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Alignment and Rea… ☆95 · Updated 2 months ago
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi… ☆53 · Updated 3 months ago
- [ACL'2024 Findings] GAOKAO-MM: A Chinese Human-Level Benchmark for Multimodal Models Evaluation ☆65 · Updated last year
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆163 · Updated 5 months ago
- ☆55 · Updated last year
- Synth-Empathy: Towards High-Quality Synthetic Empathy Data ☆15 · Updated 6 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆114 · Updated last year
- The Next Step Forward in Multimodal LLM Alignment ☆176 · Updated 4 months ago
- HumanOmni ☆193 · Updated 5 months ago
- ☆104 · Updated last month
- [NAACL 2024] LaDiC: Are Diffusion Models Really Inferior to Autoregressive Counterparts for Image-to-text Generation? ☆42 · Updated last year
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆136 · Updated last month
- [Preprint] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification. ☆392 · Updated last week
- (ICCV 2025) Official repository of paper "ViSpeak: Visual Instruction Feedback in Streaming Videos" ☆38 · Updated 2 months ago
- Code2Logic: Game-Code-Driven Data Synthesis for Enhancing VLMs General Reasoning ☆72 · Updated last week
- Official repository of MMDU dataset ☆93 · Updated 11 months ago
- [ACM MM25] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆85 · Updated 3 weeks ago
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆267 · Updated 7 months ago
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆125 · Updated last month
- ☆21 · Updated 7 months ago
- Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆62 · Updated 3 months ago
- Study notes on the official LLaVA code ☆29 · Updated 10 months ago
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ☆152 · Updated 5 months ago
- Latest Advances on Reasoning of Multimodal Large Language Models (Multimodal R1 \ Visual R1) 🍓 ☆34 · Updated 5 months ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆448 · Updated 7 months ago
- ☆67 · Updated last month
- (ICLR'25) A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents ☆83 · Updated 7 months ago
- This repository contains the code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vi… ☆113 · Updated 2 months ago