yuezih / Movie101
Narrative movie understanding benchmark
☆63 · Updated 8 months ago
Alternatives and similar repositories for Movie101:
Users interested in Movie101 are comparing it to the repositories listed below.
- [CVPR 2023] VoP: Text-Video Co-operative Prompt Tuning for Cross-Modal Retrieval ☆38 · Updated last year
- ☆63 · Updated last month
- Code for the ICML 2024 paper "Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition" ☆82 · Updated last month
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆95 · Updated 2 months ago
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ☆18 · Updated last week
- 🌀 R^2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding (ECCV 2024) ☆71 · Updated 6 months ago
- Official repository of the MMDU dataset ☆82 · Updated 3 months ago
- VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆45 · Updated this week
- Official implementation of MIA-DPO ☆49 · Updated 2 months ago
- Official implementation of HawkEye: Training Video-Text LLMs for Grounding Text in Videos ☆36 · Updated 8 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆33 · Updated 2 months ago
- [Preprint] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆57 · Updated 2 weeks ago
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆63 · Updated last year
- [ECCV’24] Official Implementation for CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenario… ☆48 · Updated 4 months ago
- Official PyTorch code for "Grounded Question-Answering in Long Egocentric Videos", accepted by CVPR 2024. ☆56 · Updated 4 months ago
- An LMM that is a strict superset of its embedded LLM ☆37 · Updated 2 months ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆47 · Updated 5 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆25 · Updated 6 months ago
- [ECCV2024] Learning Video Context as Interleaved Multimodal Sequences ☆32 · Updated 3 months ago
- ☆72 · Updated 8 months ago
- [ACL'24 Oral] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆56 · Updated 4 months ago
- [NeurIPS'24 D&B] Official Dataloader and Evaluation Scripts for LongVideoBench. ☆83 · Updated 5 months ago
- ☆59 · Updated 11 months ago
- A lightweight, flexible Video-MLLM developed by the TencentQQ Multimedia Research Team. ☆68 · Updated 3 months ago
- [NeurIPS'24] Official PyTorch Implementation of "Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment" ☆56 · Updated 3 months ago
- A video dataset dedicated to portrait-mode video recognition. ☆41 · Updated last month
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding ☆49 · Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆56 · Updated last year
- ☆26 · Updated 5 months ago
- FunQA benchmarks funny, creative, and magic videos for challenging tasks including timestamp localization, video description, reasoning, … ☆96 · Updated last month