hokindeng / VMEvalKit
This is a framework for evaluating reasoning in foundation video models.
☆41 · Updated this week
Alternatives and similar repositories for VMEvalKit
Users interested in VMEvalKit are comparing it to the libraries listed below.
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning" (NeurIPS 2025), https://arxiv.org/abs/2505.13934 ☆143 · Updated last month
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆170 · Updated 2 weeks ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆78 · Updated 5 months ago
- Thinking with Videos from Open-Source Priors. We reproduce chain-of-frames visual reasoning by fine-tuning open-source video models. Give… ☆185 · Updated last month
- ☆30 · Updated 11 months ago
- Official implementation of "Self-Improving Video Generation" ☆75 · Updated 7 months ago
- This is a collection of recent papers on reasoning in video generation models. ☆38 · Updated this week
- We introduce "Thinking with Video", a new paradigm leveraging video generation for multimodal reasoning. Our VideoThinkBench shows that S… ☆199 · Updated last week
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆194 · Updated 3 months ago
- ☆28 · Updated 4 months ago
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT ☆103 · Updated 3 weeks ago
- ☆104 · Updated 4 months ago
- [NeurIPS 2025] The official repository for our paper "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reason… ☆144 · Updated 2 months ago
- ☆283 · Updated last month
- [CVPR 2024] This is the official implementation of MP5 ☆106 · Updated last year
- Official repo of "From Masks to Worlds: A Hitchhiker’s Guide to World Models" ☆55 · Updated last month
- PhysGame: Benchmark for Physical Commonsense Evaluation in Gameplay Videos ☆46 · Updated 4 months ago
- ACTIVE-O3: Empowering Multimodal Large Language Models with Active Perception via GRPO ☆75 · Updated last week
- ☆100 · Updated 3 weeks ago
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆86 · Updated 5 months ago
- ☆41 · Updated 5 months ago
- Cambrian-S: Towards Spatial Supersensing in Video ☆375 · Updated 2 weeks ago
- VideoNSA: Native Sparse Attention Scales Video Understanding ☆61 · Updated 2 weeks ago
- Official code for MotionBench (CVPR 2025) ☆59 · Updated 8 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆77 · Updated 4 months ago
- ☆61 · Updated 2 months ago
- SimWorld: An Open-Ended Realistic Simulator for Autonomous Agents in Physical and Social Worlds ☆75 · Updated 2 weeks ago
- Being-VL-0.5: Unified Multimodal Understanding via Byte-Pair Visual Encoding ☆41 · Updated 2 months ago
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆193 · Updated 6 months ago
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆86 · Updated last month