liveseongho / Awesome-Video-Language-Understanding
A Survey on video and language understanding.
Related projects
Alternatives and complementary repositories for Awesome-Video-Language-Understanding
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs)
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds
- Code and models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, …
- FunQA benchmarks funny, creative, and magic videos for challenging tasks including timestamp localization, video description, reasoning, …
- ACL'24 (Oral): Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback
- A PyTorch implementation of EmpiricalMVM
- LAVIS: A One-stop Library for Language-Vision Intelligence
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?"
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning"
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering
- Language Repository for Long Video Understanding
- (ACL 2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning
- [NeurIPS 2024] Dense Connector for MLLMs
- [ECCV 2024] Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners"
- 🦩 Visual Instruction Tuning with Polite Flamingo: training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral)
- Large Language Models are Temporal and Causal Reasoners for Video Question Answering (EMNLP 2023)
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023)
- Foundation Models for Video Understanding: A Survey
- Official PyTorch code of "Grounded Question-Answering in Long Egocentric Videos", accepted by CVPR 2024