Alternatives and similar repositories for ThinkLite-VL (☆107, updated Jun 10, 2025)
Users interested in ThinkLite-VL are comparing it to the repositories listed below.
- [NeurIPS 2025] The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" (☆186, updated Jun 5, 2025)
- MM-PRM: Enhancing Multimodal Mathematical Reasoning with Scalable Step-Level Supervision (☆28, updated May 26, 2025)
- [NeurIPS 2025] NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation (☆107, updated Sep 18, 2025)
- Multimodal RewardBench (☆64, updated Feb 21, 2025)
- (unnamed repository) (☆31, updated Feb 26, 2026)
- [ICLR 2026] This is the first paper to explore how to effectively use R1-like RL for MLLMs and introduce Vision-R1, a reasoning MLLM that… (☆1,036, updated Jan 26, 2026)
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement (☆131, updated Jul 24, 2025)
- (unnamed repository) (☆47, updated Dec 30, 2024)
- (unnamed repository) (☆24, updated Jun 18, 2025)
- [Blog 1] Recording a bug of grpo_trainer in some R1 projects (☆22, updated Feb 23, 2025)
- OpenThinkIMG is an end-to-end open-source framework that empowers LVLMs to think with images. (☆356, updated Jun 1, 2025)
- Official repo for SvS: A Self-play with Variational Problem Synthesis strategy for RLVR training (☆54, updated Dec 13, 2025)
- [NeurIPS 2025] Official code implementation of Perception R1: Pioneering Perception Policy with Reinforcement Learning (☆287, updated Jul 15, 2025)
- Official code of "Virgo: A Preliminary Exploration on Reproducing o1-like MLLM" (☆109, updated May 27, 2025)
- v1: Learning to Point Visual Tokens for Multimodal Grounded Reasoning (☆19, updated Oct 6, 2025)
- (unnamed repository) (☆21, updated Jul 3, 2025)
- [NeurIPS 2025] General Reasoner: Advancing LLM Reasoning Across All Domains (☆222, updated Nov 27, 2025)
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning (☆83, updated Sep 19, 2025)
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities (☆1,168, updated Jul 15, 2025)
- (unnamed repository) (☆25, updated Apr 9, 2025)
- Enemies for your LLM (☆35, updated Jan 20, 2026)
- (unnamed repository) (☆12, updated Apr 18, 2025)
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning (☆773, updated Sep 7, 2025)
- [TMLR 2025] SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models (☆150, updated Oct 10, 2025)
- Computer-Use Agents as Judges for Generative UI (☆44, updated Nov 27, 2025)
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation (☆138, updated Sep 11, 2025)
- (unnamed repository) (☆23, updated Aug 20, 2024)
- Official repo for "PAPO: Perception-Aware Policy Optimization for Multimodal Reasoning" (☆125, updated Feb 4, 2026)
- This repository provides a valuable reference for researchers in the field of multimodality; please start your exploratory travel in RL-bas… (☆1,380, updated Feb 26, 2026)
- [AAAI 2026] Relation-R1: Progressively Cognitive Chain-of-Thought Guided Reinforcement Learning for Unified Relation Comprehension (☆18, updated Mar 6, 2026)
- (unnamed repository) (☆35, updated Aug 18, 2025)
- [TACL/EMNLP 2024] Do Vision and Language Models Share Concepts? A Vector Space Alignment Study (☆16, updated Nov 22, 2024)
- VCR-Bench: A Comprehensive Evaluation Framework for Video Chain-of-Thought Reasoning (☆36, updated Jul 15, 2025)
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era (☆60, updated Dec 28, 2024)
- [CVPR 2025] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction (☆143, updated Mar 6, 2025)
- (unnamed repository) (☆27, updated Jan 17, 2025)
- A Simple Framework of Small-scale LMMs for Video Understanding (☆111, updated Jun 11, 2025)
- (unnamed repository) (☆23, updated Apr 24, 2025)
- Explore the Multimodal “Aha Moment” on a 2B Model (☆624, updated Mar 18, 2025)