XingruiWang / DynSuperCLEVR
A video question answering dataset focused on the dynamic properties of objects (velocity, acceleration) and their collisions within 4D scenes.
☆18 · Updated 7 months ago
Alternatives and similar repositories for DynSuperCLEVR
Users interested in DynSuperCLEVR are comparing it to the libraries listed below.
- [ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation… ☆39 · Updated 9 months ago
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆58 · Updated last year
- Program synthesis for 3D spatial reasoning ☆53 · Updated 5 months ago
- Code for the paper "Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning" ☆44 · Updated 2 years ago
- Action Scene Graphs for Long-Form Understanding of Egocentric Videos (CVPR 2024) ☆44 · Updated 7 months ago
- Egocentric Video Understanding Dataset (EVUD) ☆32 · Updated last year
- [ECCV 2024] OpenPSG: Open-set Panoptic Scene Graph Generation via Large Multimodal Models ☆49 · Updated 10 months ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆74 · Updated 3 months ago
- Test-Time Training on Video Streams ☆64 · Updated 2 years ago
- Code release for the NeurIPS 2023 paper "SlotDiffusion: Object-centric Learning with Diffusion Models" ☆93 · Updated last year
- ☆51 · Updated 7 months ago
- Code implementation for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆29 · Updated last year
- Official repo of the ICLR 2025 paper "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos" ☆29 · Updated 4 months ago
- Official implementation of the paper "Boosting Human-Object Interaction Detection with Text-to-Image Diffusion Model" ☆64 · Updated 2 years ago
- Code for the paper "GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos", published at CVPR 2024 ☆52 · Updated last year
- [NeurIPS 2023] OV-PARTS: Towards Open-Vocabulary Part Segmentation ☆92 · Updated last year
- [ICLR 2025 Oral] Official implementation for "Do Vision-Language Models Represent Space and How? Evaluating Spatial Frame of Reference Un… ☆18 · Updated last year
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆20 · Updated last year
- ☆30 · Updated last year
- [NeurIPS 2025] EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆122 · Updated 4 months ago
- ☆60 · Updated 3 weeks ago
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) ☆35 · Updated last year
- (NeurIPS 2024 Spotlight) TOPA: Extend Large Language Models for Video Understanding via Text-Only Pre-Alignment ☆31 · Updated last year
- [ICCV 2023] EgoObjects: A Large-Scale Egocentric Dataset for Fine-Grained Object Understanding ☆77 · Updated 2 years ago
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) ☆51 · Updated last year
- Official code for MotionBench (CVPR 2025) ☆59 · Updated 8 months ago
- FleVRS: Towards Flexible Visual Relationship Segmentation, NeurIPS 2024 ☆22 · Updated 11 months ago
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆94 · Updated 8 months ago
- ☆29 · Updated 8 months ago
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning ☆130 · Updated 3 months ago