kesenzhao / UV-CoT
☆41 · Updated 5 months ago
Alternatives and similar repositories for UV-CoT
Users interested in UV-CoT are comparing it to the repositories listed below.
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆78 · Updated 3 months ago
- SFT+RL boosts multimodal reasoning ☆42 · Updated 6 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆77 · Updated last year
- Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning ☆43 · Updated 6 months ago
- The official repo for “TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding”. ☆44 · Updated last year
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆41 · Updated 9 months ago
- Official code for NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆172 · Updated this week
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆137 · Updated 5 months ago
- ☆74 · Updated 7 months ago
- [ICCV 2025] VisRL: Intention-Driven Visual Perception via Reinforced Reasoning ☆42 · Updated 2 months ago
- Official implementation of MIA-DPO ☆70 · Updated 11 months ago
- The official implementation of RAR ☆92 · Updated last month
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆78 · Updated last month
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆68 · Updated last year
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆59 · Updated last year
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆76 · Updated 6 months ago
- ☆83 · Updated last year
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆98 · Updated 6 months ago
- ☆90 · Updated last year
- ☆61 · Updated 2 months ago
- ☆132 · Updated 9 months ago
- [ICLR2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆93 · Updated last month
- ☆124 · Updated last year
- [ACM MM 2025] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆98 · Updated last month
- Scaling Multi-modal Instruction Fine-tuning with Tens of Thousands Vision Task Types ☆32 · Updated 6 months ago
- Code for paper: Reinforced Vision Perception with Tools ☆68 · Updated 3 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆66 · Updated 7 months ago
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆41 · Updated 9 months ago
- [AAAI 2026 Oral] The official code of "UniME-V2: MLLM-as-a-Judge for Universal Multimodal Embedding Learning" ☆58 · Updated last month
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆62 · Updated last year