zai-org / MotionBench
Official code for MotionBench (CVPR 2025)
☆63 · Updated 10 months ago
Alternatives and similar repositories for MotionBench
Users interested in MotionBench are comparing it to the repositories listed below.
- [CVPR 2024] Narrative Action Evaluation with Prompt-Guided Multimodal Interaction ☆40 · Updated last year
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning ☆136 · Updated 4 months ago
- Official code of "Thinking With Videos: Multimodal Tool-Augmented Reinforcement Learning for Long Video Reasoning" ☆77 · Updated 3 months ago
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ☆113 · Updated 5 months ago
- [ICLR 2025] Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" ☆96 · Updated 10 months ago
- TStar: a unified temporal search framework for long-form video question answering ☆84 · Updated 4 months ago
- [arXiv 2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆94 · Updated 10 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆85 · Updated 6 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆137 · Updated 7 months ago
- [ICCV 2025 Oral] Official implementation of Learning Streaming Video Representation via Multitask Training ☆77 · Updated 3 weeks ago
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ☆143 · Updated last year
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆132 · Updated 5 months ago
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation ☆233 · Updated 4 months ago
- [NeurIPS 2025] VideoREPA: Learning Physics for Video Generation through Relational Alignment with Foundation Models ☆154 · Updated last week
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆88 · Updated last year
- Incentivizing "Thinking with Long Videos" via Native Tool Calling ☆172 · Updated this week
- ☆25 · Updated 2 months ago
- [ECCV 2024 Oral, Best Paper Finalist] Official implementation of "LEGO: Learning EGOcentric Action Frame Generation…" ☆39 · Updated 10 months ago
- ☆96 · Updated 6 months ago
- Vinci: A Real-time Embodied Smart Assistant based on Egocentric Vision-Language Model ☆80 · Updated last month
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆102 · Updated 6 months ago
- [ICCV 2025] Official repository of "ViSpeak: Visual Instruction Feedback in Streaming Videos" ☆43 · Updated 6 months ago
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆194 · Updated 3 weeks ago
- [CVPR 2025] Video-Panda: Parameter-efficient Alignment for Encoder-free Video-Language Models ☆76 · Updated 6 months ago
- ☆21 · Updated last month
- [ICCV 2025] Code release of Harmonizing Visual Representations for Unified Multimodal Understanding and Generation ☆185 · Updated 7 months ago
- [EMNLP 2025 Findings] Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ☆139 · Updated 4 months ago
- MotionSight's official code implementation ☆44 · Updated 3 months ago
- [NeurIPS 2024 D&B Track] Official repo for "LVD-2M: A Long-take Video Dataset with Temporally Dense Captions" ☆75 · Updated last year
- Does Understanding Inform Generation in Unified Multimodal Models? From Analysis to Path Forward ☆59 · Updated last month