facebookresearch / minimal_video_pairs
A Shortcut-aware Video-QA Benchmark for Physical Understanding via Minimal Video Pairs
☆33 · Updated 2 months ago
Alternatives and similar repositories for minimal_video_pairs
Users interested in minimal_video_pairs are comparing it to the repositories listed below.
- This repo contains the code for the paper "Intuitive physics understanding emerges from self-supervised pretraining on natural videos" ☆196 · Updated 9 months ago
- Python Library to evaluate VLM models' robustness across diverse benchmarks ☆219 · Updated last month
- This is the code repository for IntPhys 2, a video benchmark designed to evaluate the intuitive physics understanding of deep learning models ☆84 · Updated last month
- [CVPR 2025] Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction ☆145 · Updated 8 months ago
- We introduce CausalVQA, a benchmark dataset for video question answering (VQA) composed of question-answer pairs that probe models’ understanding… ☆45 · Updated 3 months ago
- An open source implementation of CLIP (With TULIP Support) ☆163 · Updated 6 months ago
- ☆104 · Updated 4 months ago
- Cambrian-S: Towards Spatial Supersensing in Video ☆375 · Updated last week
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆191 · Updated 3 months ago
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆346 · Updated 5 months ago
- ☆76 · Updated 6 months ago
- Visual Planning: Let's Think Only with Images ☆280 · Updated 6 months ago
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ☆330 · Updated last year
- Code for "Scaling Language-Free Visual Representation Learning" paper (Web-SSL). ☆189 · Updated 6 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆119 · Updated 3 months ago
- 🤖 [ICLR'25] Multimodal Video Understanding Framework (MVU) ☆49 · Updated 9 months ago
- ☆189 · Updated last year
- TStar is a unified temporal search framework for long-form video question answering ☆71 · Updated 2 months ago
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆71 · Updated last year
- Official Implementation for our NeurIPS 2024 paper, "Don't Look Twice: Run-Length Tokenization for Faster Video Transformers". ☆228 · Updated 7 months ago
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models ☆283 · Updated last year
- [CVPR 2025] EgoLife: Towards Egocentric Life Assistant ☆345 · Updated 8 months ago
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆229 · Updated 2 weeks ago
- Official Implementation of "JARVIS-VLA: Post-Training Large-Scale Vision Language Models to Play Visual Games with Keyboards and Mouse" ☆107 · Updated 2 months ago
- Code for LifelongMemory: Leveraging LLMs for Answering Queries in Long-form Egocentric Videos ☆27 · Updated 3 weeks ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆149 · Updated last month
- [ICCV 2025] OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision Encoders for Multimodal Learning ☆404 · Updated 2 months ago
- A Curated List of Awesome Works in World Modeling, Aiming to Serve as a One-stop Resource for Researchers, Practitioners, and Enthusiasts… ☆782 · Updated last week
- [NeurIPS 2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning ☆227 · Updated last month
- ☆61 · Updated 2 months ago