Haochen-Wang409 / TreeVGR
Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology"
☆63 · Updated 2 months ago
Alternatives and similar repositories for TreeVGR
Users interested in TreeVGR are comparing it to the repositories listed below.
- Code for paper: Reinforced Vision Perception with Tools · ☆28 · Updated this week
- ☆45 · Updated 8 months ago
- [ICCV 2025] Dynamic-VLM · ☆25 · Updated 8 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding · ☆65 · Updated 3 months ago
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward · ☆76 · Updated last month
- VCR-Bench: A Comprehensive Evaluation Framework for Video Chain-of-Thought Reasoning · ☆32 · Updated last month
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models · ☆86 · Updated last year
- (ICLR 2025 Spotlight) Official code repository for Interleaved Scene Graph · ☆27 · Updated last month
- Official implementation of MIA-DPO · ☆65 · Updated 7 months ago
- Official repo for "PAPO: Perception-Aware Policy Optimization for Multimodal Reasoning"☆84Updated 2 weeks ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation☆87Updated last month
- ☆88Updated 8 months ago
- Official code for paper "GRIT: Teaching MLLMs to Think with Images"☆126Updated last month
- ☆88Updated 2 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models☆79Updated last year
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) · ☆72 · Updated 3 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… · ☆55 · Updated 10 months ago
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding · ☆57 · Updated 9 months ago
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… · ☆40 · Updated 4 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment · ☆58 · Updated last month
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT · ☆76 · Updated last month
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model · ☆46 · Updated 10 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression · ☆62 · Updated 6 months ago
- Official repository of the video reasoning benchmark MMR-V. Can Your MLLMs "Think with Video"? · ☆36 · Updated 2 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension · ☆70 · Updated last year
- M2-Reasoning: Empowering MLLMs with Unified General and Spatial Reasoning · ☆43 · Updated last month
- ☆114 · Updated 5 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want · ☆87 · Updated 3 months ago
- Official Code for "Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search" · ☆186 · Updated this week
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" · ☆20 · Updated 10 months ago