facebookresearch / minimal_video_pairs
A Shortcut-aware Video-QA Benchmark for Physical Understanding via Minimal Video Pairs
☆35Updated 2 months ago
Alternatives and similar repositories for minimal_video_pairs
Users interested in minimal_video_pairs are comparing it to the repositories listed below
- This is the code repository for IntPhys 2, a video benchmark designed to evaluate the intuitive physics understanding of deep learning mo…☆86Updated last month
- This repo contains the code for the paper "Intuitive physics understanding emerges from self-supervised pretraining on natural videos"☆203Updated 10 months ago
- Visual Planning: Let's Think Only with Images☆285Updated 6 months ago
- Python Library to evaluate VLM models' robustness across diverse benchmarks☆220Updated last month
- An open source implementation of CLIP (With TULIP Support)☆163Updated 7 months ago
- ☆111Updated 4 months ago
- 🤖 [ICLR'25] Multimodal Video Understanding Framework (MVU)☆51Updated 10 months ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models …☆80Updated 6 months ago
- Code for "Scaling Language-Free Visual Representation Learning" paper (Web-SSL).☆191Updated 7 months ago
- [CVPR 2025] Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction☆152Updated 8 months ago
- [ICML'25] The PyTorch implementation of paper: "AdaWorld: Learning Adaptable World Models with Latent Actions".☆180Updated 6 months ago
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025)☆207Updated 4 months ago
- We introduce CausalVQA, a benchmark dataset for video question answering (VQA) composed of question-answer pairs that probe models’ under…☆50Updated 3 months ago
- TStar is a unified temporal search framework for long-form video question answering☆76Updated 3 months ago
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision☆72Updated last year
- Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models. TMLR 2025.☆129Updated 3 months ago
- Cambrian-S: Towards Spatial Supersensing in Video☆422Updated last month
- OpenEQA: Embodied Question Answering in the Era of Foundation Models☆333Updated last year
- [Fully open] [Encoder-free MLLM] Vision as LoRA☆360Updated 6 months ago
- [NIPS2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning☆244Updated 2 months ago
- ☆77Updated 7 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement☆123Updated 4 months ago
- ☆30Updated 3 months ago
- Official Implementation for our NeurIPS 2024 paper, "Don't Look Twice: Run-Length Tokenization for Faster Video Transformers".☆230Updated 8 months ago
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation☆411Updated 7 months ago
- Official implementation of paper "ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting" (CVPR'25)☆46Updated 8 months ago
- [CVPR 2025] EgoLife: Towards Egocentric Life Assistant☆357Updated 8 months ago
- ☆68Updated 3 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or…☆152Updated 2 months ago
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy☆226Updated 8 months ago