zhangquanchen / VisRL
VisRL: Intention-Driven Visual Perception via Reinforced Reasoning
☆29 · Updated last week
Alternatives and similar repositories for VisRL
Users interested in VisRL are comparing it to the repositories listed below
- ☆37 · Updated 11 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆78 · Updated 9 months ago
- Official implementation of MIA-DPO ☆58 · Updated 5 months ago
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆40 · Updated 2 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆42 · Updated 2 weeks ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆60 · Updated last year
- TimeChat-online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆51 · Updated this week
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆35 · Updated 11 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆65 · Updated 11 months ago
- ☆33 · Updated 5 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆87 · Updated 2 weeks ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆39 · Updated 3 months ago
- Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆49 · Updated last month
- ☆24 · Updated 4 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆76 · Updated last year
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆34 · Updated last year
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆61 · Updated 2 weeks ago
- Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆29 · Updated last month
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated 3 months ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆56 · Updated 8 months ago
- [CVPR 2025] Interleaved-Modal Chain-of-Thought ☆53 · Updated 2 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆32 · Updated 2 months ago
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆85 · Updated 9 months ago
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆37 · Updated 5 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆45 · Updated 11 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆47 · Updated 3 months ago
- ☆44 · Updated 5 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆73 · Updated 2 months ago
- Official Repository of Personalized Visual Instruct Tuning ☆29 · Updated 3 months ago
- Official Repository: A Comprehensive Benchmark for Logical Reasoning in MLLMs ☆37 · Updated last week