yannqi / R-4B
The official repository of "R-4B: Incentivizing General-Purpose Auto-Thinking Capability in MLLMs via Bi-Mode Integration"
☆136 · Updated 5 months ago
Alternatives and similar repositories for R-4B
Users interested in R-4B are comparing it to the libraries listed below.
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆129 · Updated 6 months ago
- [MTI-LLM@NeurIPS 2025] Official implementation of "PyVision: Agentic Vision with Dynamic Tooling." ☆147 · Updated 6 months ago
- Reinforcement Learning of Vision Language Models with Self Visual Perception Reward ☆159 · Updated 4 months ago
- OpenMMReasoner: Pushing the Frontiers for Multimodal Reasoning with an Open and General Recipe ☆141 · Updated last month
- [ICLR 2026] Geometric-Mean Policy Optimization ☆99 · Updated last week
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆91 · Updated 5 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" [NeurIPS 2025] ☆179 · Updated 7 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 8 months ago
- [NeurIPS 2024] A task generation and model evaluation system for multimodal language models. ☆73 · Updated last year
- [EMNLP 2025] Distill Visual Chart Reasoning Ability from LLMs to MLLMs ☆59 · Updated 5 months ago
- ☆107 · Updated 7 months ago
- X-Reasoner: Towards Generalizable Reasoning Across Modalities and Domains ☆50 · Updated 8 months ago
- The code and data of We-Math 2.0. ☆164 · Updated 5 months ago
- [EMNLP 2025 Oral] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆71 · Updated 2 months ago
- ☆110 · Updated last year
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆63 · Updated last year
- [ICLR 2026] An official implementation of "CapRL: Stimulating Dense Image Caption Capabilities via Reinforcement Learning" ☆182 · Updated last week
- ☆68 · Updated 4 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆212 · Updated last year
- [TMLR 2025] SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆149 · Updated 3 months ago
- Official code for "BoostStep: Boosting mathematical capability of Large Language Models via improved single-step reasoning" ☆37 · Updated last year
- Evaluating Deep Multimodal Reasoning in Vision-Centric Agentic Tasks ☆35 · Updated 2 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆35 · Updated last year
- Official code of "Monet: Reasoning in Latent Visual Space Beyond Image and Language" ☆123 · Updated last month
- [NAACL 2025 Oral] Multimodal Needle in a Haystack (MMNeedle): Benchmarking Long-Context Capability of Multimodal Large Language Models ☆54 · Updated 9 months ago
- The SAIL-VL2 series model developed by the BytedanceDouyinContent Group ☆76 · Updated 4 months ago
- ☆122 · Updated 4 months ago
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆77 · Updated 7 months ago
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆191 · Updated 10 months ago