liveseongho / Awesome-Video-Language-Understanding
A Survey on video and language understanding.
☆50 · Updated 2 years ago
Alternatives and similar repositories for Awesome-Video-Language-Understanding
Users interested in Awesome-Video-Language-Understanding are comparing it to the libraries listed below.
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆189 · Updated last year
- A PyTorch implementation of EmpiricalMVM ☆41 · Updated last year
- ☆110 · Updated 2 years ago
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023) ☆106 · Updated 9 months ago
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆100 · Updated last year
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" ☆136 · Updated 2 years ago
- A Unified Framework for Video-Language Understanding ☆60 · Updated 2 years ago
- ☆80 · Updated 11 months ago
- 🦩 Visual Instruction Tuning with Polite Flamingo: training multi-modal LLMs to be both clever and polite (AAAI-24 Oral) ☆63 · Updated last year
- [NeurIPS 2023 D&B] VidChapters-7M: Video Chapters at Scale ☆198 · Updated last year
- [ACL 2023] MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning ☆36 · Updated last year
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆94 · Updated last year
- ☆72 · Updated last year
- [ICLR 2024] Code and models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model ☆43 · Updated 10 months ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆259 · Updated 2 months ago
- Code for the ACL 2025 paper "Language Repository for Long Video Understanding" ☆32 · Updated last year
- ☆155 · Updated last year
- Official implementation of "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval" ☆175 · Updated last year
- FunQA benchmarks funny, creative, and magic videos for challenging tasks including timestamp localization, video description, reasoning, … ☆104 · Updated 10 months ago
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023) ☆229 · Updated 2 years ago
- ☆138 · Updated last year
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆125 · Updated 6 months ago
- [AAAI 2025] VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding ☆114 · Updated 10 months ago
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated 2 years ago
- LAVIS: A One-stop Library for Language-Vision Intelligence ☆48 · Updated last year
- Summary of Video-to-Text datasets. This repository is part of the review paper "Bridging Vision and Language from the Video-to-Text Pe…" ☆130 · Updated 2 years ago
- Repo for the paper "Paxion: Patching Action Knowledge in Video-Language Foundation Models" [NeurIPS 2023 Spotlight] ☆37 · Updated 2 years ago
- ☆133 · Updated last year
- PyTorch code for "Language Models with Image Descriptors Are Strong Few-Shot Video-Language Learners" ☆115 · Updated 3 years ago
- Foundation Models for Video Understanding: A Survey ☆140 · Updated 3 months ago