alanaai / EVUD
Egocentric Video Understanding Dataset (EVUD)
⭐ 31 · Updated last year
Alternatives and similar repositories for EVUD
Users interested in EVUD are comparing it to the repositories listed below.
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ⭐ 73 · Updated 10 months ago
- E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ⭐ 65 · Updated 8 months ago
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) ⭐ 46 · Updated last year
- Can 3D Vision-Language Models Truly Understand Natural Language? ⭐ 21 · Updated last year
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ⭐ 70 · Updated last month
- [ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation… ⭐ 38 · Updated 7 months ago
- [ICLR 2023] CoVLM: Composing Visual Entities and Relationships in Large Language Models Via Communicative Decoding ⭐ 45 · Updated 4 months ago
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ⭐ 94 · Updated last year
- Official repo of the ICLR 2025 paper "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos" ⭐ 29 · Updated 2 months ago
- TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models ⭐ 37 · Updated 11 months ago
- [ACL 2024, Oral] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ⭐ 73 · Updated last year
- [NeurIPS 2024] The official implementation of "Instruction-Guided Visual Masking" ⭐ 38 · Updated 10 months ago
- ⭐ 26 · Updated 6 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ⭐ 79 · Updated 11 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ⭐ 63 · Updated last year
- ⭐ 45 · Updated 9 months ago
- Source code for the paper "Mind the Gap: Benchmarking Spatial Reasoning in Vision-Language Models" ⭐ 16 · Updated 2 weeks ago
- [AAAI 2025] Grounded Multi-Hop VideoQA in Long-Form Egocentric Videos ⭐ 26 · Updated 4 months ago
- TEMPURA enables video-language models to reason about causal event relationships and generate fine-grained, timestamped descriptions of u… ⭐ 22 · Updated 4 months ago
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ⭐ 70 · Updated 8 months ago
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 ⭐ 129 · Updated 5 months ago
- ⭐ 90 · Updated 3 months ago
- Language Repository for Long Video Understanding ⭐ 32 · Updated last year
- [CVPR 2024 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan… ⭐ 61 · Updated 6 months ago
- [NeurIPS 2024] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ⭐ 57 · Updated last year
- Repository of the paper "Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models" ⭐ 37 · Updated 2 years ago
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ⭐ 86 · Updated last year
- ⭐ 138 · Updated last year
- ⭐ 31 · Updated last year
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ⭐ 88 · Updated 2 months ago