UCSC-VLAA / VLAA-Thinking
SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models
☆136 · Updated 2 weeks ago
Alternatives and similar repositories for VLAA-Thinking
Users interested in VLAA-Thinking are comparing it to the repositories listed below.
- ☆95 · Updated 9 months ago
- [NeurIPS 2025] NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆93 · Updated last month
- An RLHF Infrastructure for Vision-Language Models ☆184 · Updated 11 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 4 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" [NeurIPS 2025] ☆157 · Updated 4 months ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆95 · Updated last year
- A Self-Training Framework for Vision-Language Reasoning ☆84 · Updated 9 months ago
- Official repo for "PAPO: Perception-Aware Policy Optimization for Multimodal Reasoning" ☆90 · Updated 2 months ago
- ☆100 · Updated last year
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆70 · Updated last year
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆131 · Updated 2 months ago
- [TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling" ☆148 · Updated 11 months ago
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆77 · Updated last month
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆114 · Updated last year
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆80 · Updated last year
- Official Repository of "AtomThink: Multimodal Slow Thinking with Atomic Step Reasoning" ☆56 · Updated 2 months ago
- The codebase for our EMNLP 2024 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Model ☆83 · Updated 8 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆65 · Updated 4 months ago
- A collection of visual instruction tuning datasets ☆76 · Updated last year
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆75 · Updated last year
- ☆45 · Updated 9 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Diversity" ☆59 · Updated 11 months ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆47 · Updated 11 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆82 · Updated 11 months ago
- This repository contains the code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vision models ☆115 · Updated 4 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of existing MLLMs to comprehend long multimodal documents ☆115 · Updated 11 months ago
- [ACM Multimedia 2025] This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strategy ☆82 · Updated 8 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆134 · Updated this week
- Paper collections of multi-modal LLM for Math/STEM/Code ☆128 · Updated 2 months ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆138 · Updated last year