zhangquanchen / VisRL
[ICCV 2025] VisRL: Intention-Driven Visual Perception via Reinforced Reasoning
☆34 Updated last month
Alternatives and similar repositories for VisRL
Users that are interested in VisRL are comparing it to the libraries listed below
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆108 Updated last week
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆47 Updated 2 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆71 Updated last year
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆63 Updated 3 weeks ago
- Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆56 Updated 2 months ago
- ☆26 Updated 5 months ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆57 Updated 10 months ago
- Official implementation of MIA-DPO ☆63 Updated 6 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆36 Updated 4 months ago
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆115 Updated this week
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆124 Updated 2 weeks ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆63 Updated last year
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆85 Updated 10 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆186 Updated 3 weeks ago
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆41 Updated 3 months ago
- [ICLR 2025 Spotlight] Official code repository for Interleaved Scene Graph ☆27 Updated this week
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆132 Updated 9 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆65 Updated last month
- [ACM MM 2025] TimeChat-Online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆66 Updated 3 weeks ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆35 Updated last year
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆83 Updated 2 months ago
- ✨✨ The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆46 Updated 3 weeks ago
- ☆25 Updated last year
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆87 Updated 3 months ago
- A Survey on Benchmarks of Multimodal Large Language Models ☆124 Updated last month
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆84 Updated 2 months ago
- ☆85 Updated 7 months ago
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆86 Updated 11 months ago
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆98 Updated 9 months ago
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ☆36 Updated 4 months ago