linkangheng / Video-UTR
[ICLR2025] Official code implementation of Video-UTR: Unhackable Temporal Rewarding for Scalable Video MLLMs
☆61 · Updated 11 months ago
Alternatives and similar repositories for Video-UTR
Users that are interested in Video-UTR are comparing it to the libraries listed below
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆134 · Updated 6 months ago
- E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆74 · Updated last year
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆60 · Updated 8 months ago
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆88 · Updated last year
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ☆120 · Updated 6 months ago
- R1-like Video-LLM for Temporal Grounding ☆133 · Updated 7 months ago
- Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" (ICLR 2025) ☆100 · Updated 10 months ago
- [NeurIPS 2025] The official repository for our paper, "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reason…" ☆153 · Updated 4 months ago
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆135 · Updated 10 months ago
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ☆145 · Updated last year
- [NeurIPS'25] Time-R1: Post-Training Large Vision Language Model for Temporal Video Grounding ☆73 · Updated last month
- The official code of "Thinking With Videos: Multimodal Tool-Augmented Reinforcement Learning for Long Video Reasoning" ☆80 · Updated 3 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆175 · Updated last month
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆103 · Updated 7 months ago
- ☆97 · Updated 7 months ago
- Collections of Papers and Projects for Multimodal Reasoning ☆107 · Updated 9 months ago
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆79 · Updated last year
- [ICLR'25] Streaming Video Question-Answering with In-context Video KV-Cache Retrieval ☆99 · Updated 3 months ago
- [ACL'24 Oral] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆76 · Updated last year
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability ☆105 · Updated last year
- [CVPR 2024] Narrative Action Evaluation with Prompt-Guided Multimodal Interaction ☆42 · Updated last year
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆96 · Updated last year
- [Blog 1] Recording a bug of grpo_trainer in some R1 projects ☆22 · Updated 11 months ago
- [CVPR 2025] LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding ☆81 · Updated 7 months ago
- ☆138 · Updated last year
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆77 · Updated last year
- [NeurIPS 2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning ☆256 · Updated 3 months ago
- ☆132 · Updated 10 months ago
- [CVPR 2025] Number it: Temporal Grounding Videos like Flipping Manga ☆144 · Updated 3 weeks ago
- [ICCV 2025] VisRL: Intention-Driven Visual Perception via Reinforced Reasoning ☆44 · Updated 3 months ago