yuyq96 / R1-Vision
R1-Vision: Let's first take a look at the image
☆48 · Updated 7 months ago
Alternatives and similar repositories for R1-Vision
Users interested in R1-Vision are comparing it to the libraries listed below.
- The Next Step Forward in Multimodal LLM Alignment ☆181 · Updated 5 months ago
- Official repository of MMDU dataset ☆95 · Updated last year
- Latest open-source "Thinking with images" (O3/O4-mini) papers, covering training-free, SFT-based, and RL-enhanced methods for "fine-grain…" ☆94 · Updated last month
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆73 · Updated last year
- A Survey on Benchmarks of Multimodal Large Language Models ☆140 · Updated 3 months ago
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models. ☆146 · Updated 5 months ago
- MM-Eureka V0 (also called R1-Multimodal-Journey); the latest version is in MM-Eureka ☆319 · Updated 3 months ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆195 · Updated last year
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆130 · Updated 2 months ago
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources ☆192 · Updated last week
- A collection of visual instruction tuning datasets. ☆76 · Updated last year
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆133 · Updated 7 months ago
- ☆25 · Updated last year
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆68 · Updated 6 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆295 · Updated last year
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆273 · Updated last year
- A RLHF Infrastructure for Vision-Language Models ☆184 · Updated 10 months ago
- [NeurIPS 2025] NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆91 · Updated 2 weeks ago
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆252 · Updated 4 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆142 · Updated 10 months ago
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆114 · Updated last year
- Paper collections of multi-modal LLM for Math/STEM/Code. ☆128 · Updated last month
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆115 · Updated 10 months ago
- Official repo for "PAPO: Perception-Aware Policy Optimization for Multimodal Reasoning" ☆85 · Updated last month
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆225 · Updated last month
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆66 · Updated 2 weeks ago
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆200 · Updated 6 months ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆93 · Updated last year
- Visual Instruction Tuning for Qwen2 Base Model ☆38 · Updated last year
- ☆122 · Updated 6 months ago