Nicous20 / FunQA
FunQA is a benchmark of funny, creative, and magic videos covering challenging tasks including timestamp localization, video description, and reasoning.
☆101 · Updated 4 months ago
Alternatives and similar repositories for FunQA:
Users interested in FunQA are comparing it to the repositories listed below.
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆111 · Updated last month
- Code and Dataset for the CVPRW Paper "Where did I leave my keys? — Episodic-Memory-Based Question Answering on Egocentric Videos" ☆25 · Updated last year
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆75 · Updated 3 weeks ago
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆92 · Updated 10 months ago
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆58 · Updated 3 months ago
- Official repo for StableLLAVA ☆95 · Updated last year
- Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆67 · Updated 2 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆99 · Updated 2 weeks ago
- VideoHallucer, the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆28 · Updated last month
- [NeurIPS '24 D&B] Official Dataloader and Evaluation Scripts for LongVideoBench ☆95 · Updated 9 months ago
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated last year
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆187 · Updated last year
- A PyTorch implementation of EmpiricalMVM ☆40 · Updated last year
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆47 · Updated 9 months ago
- [ICLR 2024] Codes and Models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model ☆43 · Updated 4 months ago
- Official PyTorch code of GroundVQA (CVPR '24) ☆60 · Updated 7 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆124 · Updated 10 months ago
- ChatBridge, an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without rely… ☆50 · Updated last year
- A Survey on video and language understanding ☆48 · Updated 2 years ago
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023) ☆100 · Updated 3 months ago
- Official repository for "IntentQA: Context-aware Video Intent Reasoning" from ICCV 2023 ☆16 · Updated 5 months ago