Alpha-Innovator / Dolphin
Dolphin: Moving Towards Closed-loop Auto-research through Thinking, Practice, and Feedback (ACL 2025 main conference)
☆17 · Updated this week
Alternatives and similar repositories for Dolphin
Users interested in Dolphin are comparing it to the repositories listed below.
- ☆18 · Updated last month
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆89 · Updated 2 weeks ago
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆37 · Updated 5 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆103 · Updated last week
- ZeroGUI: Automating Online GUI Learning at Zero Human Cost ☆43 · Updated this week
- Official code for "BoostStep: Boosting mathematical capability of Large Language Models via improved single-step reasoning" ☆35 · Updated 4 months ago
- ☆27 · Updated last month
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents" ☆43 · Updated 3 months ago
- Pixel-Level Reasoning Model trained with RL ☆92 · Updated this week
- Code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆67 · Updated last month
- Scaffold Prompting to promote LMMs ☆42 · Updated 5 months ago
- Official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆103 · Updated 2 weeks ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆115 · Updated last month
- X-Reasoner: Towards Generalizable Reasoning Across Modalities and Domains ☆44 · Updated 3 weeks ago
- ☆105 · Updated 2 months ago
- ☆43 · Updated 5 months ago
- Official implementation of "LaViDa: A Large Diffusion Language Model for Multimodal Understanding" ☆72 · Updated this week
- Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆36 · Updated 2 weeks ago
- Evaluation framework for "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆57 · Updated 7 months ago
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆51 · Updated last week
- [ACL 2025] A Generalizable and Purely Unsupervised Self-Training Framework ☆60 · Updated this week
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆68 · Updated last year
- A simulated dataset consisting of 9,536 charts and associated data annotations in CSV format ☆25 · Updated last year
- Official repository: A Comprehensive Benchmark for Logical Reasoning in MLLMs ☆30 · Updated last week
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆65 · Updated this week
- Multimodal RewardBench ☆39 · Updated 3 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆58 · Updated 2 months ago
- ☆73 · Updated last year
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆165 · Updated last week
- ☆77 · Updated 5 months ago