Callione / LLaVA-MOSS2
Modifies the LLaVA framework for MOSS2, turning MOSS2 into a multimodal model.
☆13 · Updated 8 months ago
Alternatives and similar repositories for LLaVA-MOSS2
Users interested in LLaVA-MOSS2 are comparing it to the repositories listed below.
- Repository for the ACL 2025 Findings paper "From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities" ☆33 · Updated last week
- Official repository of the MMDU dataset ☆91 · Updated 8 months ago
- Resources and paper list for "Thinking with Images for LVLMs". This repository accompanies our survey on how LVLMs can leverage visual in… ☆126 · Updated this week
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… ☆97 · Updated 6 months ago
- Official PyTorch implementation of EMOVA, accepted to CVPR 2025 (https://arxiv.org/abs/2409.18042) ☆43 · Updated 2 months ago
- Some study notes on the official LLaVA code ☆25 · Updated 7 months ago
- Explore the Limits of Omni-modal Pretraining at Scale ☆102 · Updated 9 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆63 · Updated 10 months ago
- An easy-to-use, scalable, and high-performance RLHF framework designed for multimodal models. ☆127 · Updated 2 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆159 · Updated 2 months ago
- ☆101 · Updated last month
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆63 · Updated 2 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆161 · Updated last month
- ✨✨R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆136 · Updated 3 weeks ago
- ☆74 · Updated last year
- [CVPR'2025] VoCo-LLaMA: This repo is the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models". ☆165 · Updated last week
- Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆47 · Updated 2 weeks ago
- Collections of Papers and Projects for Multimodal Reasoning. ☆105 · Updated last month
- The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆74 · Updated 2 weeks ago
- Latest Advances on Reasoning of Multimodal Large Language Models (Multimodal R1 / Visual R1) 🍓 ☆34 · Updated 2 months ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆56 · Updated 8 months ago
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆268 · Updated 4 months ago
- [MM2024, oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆55 · Updated 10 months ago
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆108 · Updated last month
- Recent Advances on MLLM's Reasoning Ability ☆24 · Updated last month
- The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate". ☆98 · Updated 6 months ago
- TimeChat-online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆46 · Updated 2 weeks ago
- [NeurIPS2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆174 · Updated last week
- ☆59 · Updated 2 months ago
- HallE-Control: Controlling Object Hallucination in LMMs ☆31 · Updated last year