abwilf / Social-IQ-2.0-Challenge
The Social-IQ 2.0 Challenge Release for the Artificial Social Intelligence Workshop at ICCV '23
☆31 · Updated last year
Alternatives and similar repositories for Social-IQ-2.0-Challenge
Users interested in Social-IQ-2.0-Challenge are comparing it to the repositories listed below:
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆186 · Updated last year
- Repo for the paper "Paxion: Patching Action Knowledge in Video-Language Foundation Models" (NeurIPS 2023 Spotlight) ☆37 · Updated 2 years ago
- Summary of Video-to-Text datasets. This repository is part of the review paper *Bridging Vision and Language from the Video-to-Text Pe… ☆126 · Updated last year
- [NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models ☆157 · Updated 7 months ago
- Pytorch code for Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners ☆115 · Updated 2 years ago
- Code release for "MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound" ☆142 · Updated 3 years ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆82 · Updated last year
- Official implementation of "Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data" (ICLR 2024) ☆32 · Updated 9 months ago
- NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions (CVPR'21) ☆160 · Updated 11 months ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆135 · Updated 2 years ago
- FaithScore: Fine-grained Evaluations of Hallucinations in Large Vision-Language Models ☆30 · Updated 4 months ago
- ☆92 · Updated 2 years ago
- Official repository for the A-OKVQA dataset ☆95 · Updated last year
- ☆13 · Updated last year
- Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR … ☆279 · Updated 2 years ago
- Official repo for CVPR 2022 (Oral) paper: Revisiting the "Video" in Video-Language Understanding. Contains code for the Atemporal Probe (… ☆51 · Updated last year
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models. ☆127 · Updated 2 years ago
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV, 2023] ☆99 · Updated last year
- Large Language Models are Temporal and Causal Reasoners for Video Question Answering (EMNLP 2023) ☆74 · Updated 3 months ago
- ChatBridge, an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without rely… ☆52 · Updated last year
- MAD: A Scalable Dataset for Language Grounding in Videos from Movie Audio Descriptions ☆165 · Updated last year
- ☆152 · Updated 8 months ago
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023) ☆102 · Updated 5 months ago
- PyTorch code for “TVLT: Textless Vision-Language Transformer” (NeurIPS 2022 Oral) ☆125 · Updated 2 years ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆146 · Updated last year
- Official implementation for "A Simple LLM Framework for Long-Range Video Question-Answering" ☆97 · Updated 8 months ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆118 · Updated 3 months ago
- [Arxiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆86 · Updated last year
- ☆92 · Updated 6 months ago
- ☆25 · Updated 2 months ago