Callione / LLaVA-MOSS2
A modified LLaVA framework for MOSS2 that turns MOSS2 into a multimodal model.
☆13 · Updated 9 months ago
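The LLaVA recipe this repo adapts for MOSS2 connects a pretrained vision encoder to the LLM through a small projection module. Below is a minimal PyTorch sketch of that idea, not the repo's actual code; the dimensions, the two-layer MLP projector, and all names (`VisionProjector`, `vision_feats`, etc.) are illustrative assumptions.

```python
# Minimal sketch (assumption, not LLaVA-MOSS2's actual code) of the LLaVA-style
# recipe: a small projector maps vision-encoder features into the LLM's
# embedding space, and the projected visual tokens are prepended to the text
# token embeddings before they enter the language model.
import torch
import torch.nn as nn

class VisionProjector(nn.Module):
    """Two-layer MLP projector in the style of LLaVA-1.5 (dims illustrative)."""
    def __init__(self, vision_dim=1024, llm_dim=4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vision_feats):    # (B, num_patches, vision_dim)
        return self.proj(vision_feats)  # (B, num_patches, llm_dim)

# Toy stand-ins for the real encoders (hypothetical shapes):
B, num_patches, seq_len = 2, 576, 32
vision_feats = torch.randn(B, num_patches, 1024)  # e.g. ViT patch features
text_embeds = torch.randn(B, seq_len, 4096)       # LLM token embeddings

projector = VisionProjector()
visual_tokens = projector(vision_feats)
# The multimodal input sequence fed to the language model:
inputs_embeds = torch.cat([visual_tokens, text_embeds], dim=1)
print(inputs_embeds.shape)  # torch.Size([2, 608, 4096])
```

In LLaVA-style training, typically the vision encoder stays frozen while the projector (and optionally the LLM) is tuned, so the projector is what aligns the two modalities.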
Alternatives and similar repositories for LLaVA-MOSS2
Users interested in LLaVA-MOSS2 are comparing it to the repositories listed below.
- This is for the ACL 2025 Findings paper "From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities" ☆36 · Updated last week
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆69 · Updated last week
- The Next Step Forward in Multimodal LLM Alignment ☆165 · Updated last month
- Synth-Empathy: Towards High-Quality Synthetic Empathy Data ☆15 · Updated 4 months ago
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models. ☆134 · Updated 2 months ago
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆113 · Updated this week
- Official PyTorch implementation of EMOVA in CVPR 2025 (https://arxiv.org/abs/2409.18042) ☆55 · Updated 3 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆161 · Updated 3 months ago
- ☆44 · Updated 2 weeks ago
- Official repository of the MMDU dataset ☆92 · Updated 9 months ago
- Latest Advances on Reasoning of Multimodal Large Language Models (Multimodal R1 / Visual R1) 🍓 ☆35 · Updated 2 months ago
- Resources and paper list for "Thinking with Images for LVLMs". This repository accompanies our survey on how LVLMs can leverage visual in… ☆402 · Updated last week
- [ACL 2024 Findings] GAOKAO-MM: A Chinese Human-Level Benchmark for Multimodal Models Evaluation ☆61 · Updated last year
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆64 · Updated 3 months ago
- ☆75 · Updated last year
- Paper collections of multi-modal LLMs for Math/STEM/Code. ☆107 · Updated last week
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆66 · Updated 11 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆179 · Updated last month
- ☆39 · Updated 6 months ago
- Data and code for the CVPR 2025 paper "MMVU: Measuring Expert-Level Multi-Discipline Video Understanding" ☆68 · Updated 4 months ago
- ☆26 · Updated 8 months ago
- ☆87 · Updated 3 months ago
- The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate". ☆99 · Updated 7 months ago
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi… ☆36 · Updated last month
- ☆101 · Updated last week
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆124 · Updated 2 months ago
- OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Alignment and Rea… ☆84 · Updated this week
- [CVPR 2025] VoCo-LLaMA: The official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆169 · Updated last week
- Study notes on the official LLaVA code ☆26 · Updated 8 months ago
- ✨✨ R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆150 · Updated last month