CSfufu / ReVisual-R1
🚀 ReVisual-R1 is a 7B open-source multimodal language model trained with a three-stage curriculum (cold-start pre-training, multimodal reinforcement learning, and text-only reinforcement learning) to achieve state-of-the-art performance with faithful, concise, and self-reflective visual and textual reasoning.
☆45 · Updated this week
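The three-stage curriculum described above can be sketched as a staged pipeline that threads one checkpoint through each phase. This is a minimal illustrative sketch; the function and stage names below are hypothetical placeholders, not ReVisual-R1's actual training API:

```python
# Hypothetical sketch of a three-stage curriculum as described in the summary.
# Each stage consumes the checkpoint produced by the previous stage; here a
# checkpoint is modeled as a simple list of stage tags for illustration.

def cold_start_pretrain(ckpt):
    # Stage 1: supervised cold-start pre-training (placeholder).
    return ckpt + ["cold_start"]

def multimodal_rl(ckpt):
    # Stage 2: reinforcement learning on multimodal (image + text) tasks (placeholder).
    return ckpt + ["multimodal_rl"]

def text_only_rl(ckpt):
    # Stage 3: a final text-only RL pass (placeholder).
    return ckpt + ["text_only_rl"]

def train_curriculum(base_ckpt):
    """Run the three stages in order, passing the checkpoint forward."""
    for stage in (cold_start_pretrain, multimodal_rl, text_only_rl):
        base_ckpt = stage(base_ckpt)
    return base_ckpt

print(train_curriculum(["base-7b"]))
# -> ['base-7b', 'cold_start', 'multimodal_rl', 'text_only_rl']
```

The point of the ordering is that each later stage fine-tunes the artifact of the earlier one rather than starting from scratch.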
Alternatives and similar repositories for ReVisual-R1
Users interested in ReVisual-R1 are comparing it to the repositories listed below.
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 4 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆103 · Updated last week
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆47 · Updated 5 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆65 · Updated this week
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆86 · Updated 6 months ago
- ☆77 · Updated 4 months ago
- ☆45 · Updated last month
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆115 · Updated last month
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆103 · Updated 2 weeks ago
- GPG: A Simple and Strong Reinforcement Learning Baseline for Model Reasoning ☆138 · Updated 2 weeks ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆105 · Updated 3 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆122 · Updated 3 months ago
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆51 · Updated last week
- Code release for VTW (AAAI 2025 Oral) ☆43 · Updated 4 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆63 · Updated 10 months ago
- Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster" ☆76 · Updated 5 months ago
- ☆41 · Updated this week
- An RLHF Infrastructure for Vision-Language Models ☆176 · Updated 6 months ago
- VCR-Bench: A Comprehensive Evaluation Framework for Video Chain-of-Thought Reasoning ☆29 · Updated last month
- ☆74 · Updated last year
- [NeurIPS '24 D&B] Official Dataloader and Evaluation Scripts for LongVideoBench ☆96 · Updated 10 months ago
- [ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont…" ☆40 · Updated 6 months ago
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models ☆127 · Updated 2 months ago
- Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" ☆49 · Updated last month
- [Blog 1] Recording a bug of grpo_trainer in some R1 projects ☆20 · Updated 3 months ago
- Official Repository: A Comprehensive Benchmark for Logical Reasoning in MLLMs ☆30 · Updated last week
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆55 · Updated 9 months ago
- G1: Bootstrapping Perception and Reasoning Abilities of Vision-Language Model via Reinforcement Learning ☆44 · Updated 2 weeks ago
- ☆25 · Updated last year
- R1-Vision: Let's first take a look at the image ☆47 · Updated 3 months ago