JoseponLee / IntentQA
Official repository for "IntentQA: Context-aware Video Intent Reasoning" from ICCV 2023.
★17 · Updated 6 months ago
Alternatives and similar repositories for IntentQA
Users interested in IntentQA are comparing it to the repositories listed below.
- E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ★59 · Updated 5 months ago
- Official PyTorch code of GroundVQA (CVPR'24) ★61 · Updated 9 months ago
- ★75 · Updated 7 months ago
- Egocentric Video Understanding Dataset (EVUD) ★29 · Updated 11 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ★60 · Updated last year
- (NeurIPS 2024 Spotlight) TOPA: Extending Large Language Models for Video Understanding via Text-Only Pre-Alignment ★31 · Updated 9 months ago
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR'24, Highlight) ★74 · Updated 11 months ago
- [CVPR 2024] Official PyTorch implementation of the paper "One For All: Video Conversation is Feasible Without Video Instruction Tuning" ★34 · Updated last year
- TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models ★34 · Updated 7 months ago
- [NeurIPS 2024] The official code of the paper "Automated Multi-level Preference for MLLMs" ★19 · Updated 9 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ★87 · Updated 3 weeks ago
- Code and Dataset for the CVPRW Paper "Where did I leave my keys? – Episodic-Memory-Based Question Answering on Egocentric Videos" ★27 · Updated last year
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ★42 · Updated 3 weeks ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ★32 · Updated 2 months ago
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval ★38 · Updated 2 months ago
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ★94 · Updated 11 months ago
- ★93 · Updated 5 months ago
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ★99 · Updated 11 months ago
- VisualGPTScore for visio-linguistic reasoning ★27 · Updated last year
- Official implementation of "A Simple LLM Framework for Long-Range Video Question-Answering" ★95 · Updated 7 months ago
- [ICLR 2025] TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning ★33 · Updated 2 months ago
- [ACL'24 Oral] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ★65 · Updated 9 months ago
- Official implementation of HawkEye: Training Video-Text LLMs for Grounding Text in Videos ★43 · Updated last year
- The official repository for the ACL 2025 paper "PruneVid: Visual Token Pruning for Efficient Video Large Language Models" ★46 · Updated last month
- Code for the CVPR 2025 paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ★119 · Updated 3 months ago
- Repo for the paper "Paxion: Patching Action Knowledge in Video-Language Foundation Models" (NeurIPS 2023 Spotlight) ★37 · Updated 2 years ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ★77 · Updated 8 months ago
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ★55 · Updated 11 months ago
- Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ★116 · Updated 3 months ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ★117 · Updated 2 months ago