AdaCheng / EgoThink
[CVPR'24 Highlight] The official code and data for paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Language Models"
☆48 · Updated 2 weeks ago
Related projects
Alternatives and complementary repositories for EgoThink
- Egocentric Video Understanding Dataset (EVUD) ☆24 · Updated 4 months ago
- Official implementation for CoVLM: Composing Visual Entities and Relationships in Large Language Models via Communicative Decoding ☆42 · Updated last year
- [NeurIPS 2024] The official implementation of "Instruction-Guided Visual Masking" ☆29 · Updated this week
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆117 · Updated last year
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆34 · Updated 2 weeks ago
- [CVPR 2024] The official implementation of MP5 ☆84 · Updated 4 months ago
- Code for the paper "Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning" ☆23 · Updated last year
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆75 · Updated 2 months ago
- Repository of the paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models ☆36 · Updated last year
- [NeurIPS 2023] Official implementation of the paper "Large Language Models are Visual Reasoning Coordinators" ☆102 · Updated last year
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆30 · Updated last month
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆89 · Updated last week
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆21 · Updated 7 months ago
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ☆99 · Updated 8 months ago
- Code & data for Grounded 3D-LLM with Referent Tokens ☆89 · Updated last month
- Code for the ICML 2024 paper "Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition" ☆40 · Updated 4 months ago
- The repository of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆61 · Updated 5 months ago
- ChatBridge, an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without rely… ☆47 · Updated last year
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆52 · Updated this week
- ☕️ CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆27 · Updated 5 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆122 · Updated 3 weeks ago
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ☆53 · Updated last month
- Language Repository for Long Video Understanding ☆28 · Updated 5 months ago
- The official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆82 · Updated 4 months ago