abwilf / Social-IQ-2.0-Challenge
The Social-IQ 2.0 Challenge Release for the Artificial Social Intelligence Workshop at ICCV '23
☆35 · Updated 2 years ago
Alternatives and similar repositories for Social-IQ-2.0-Challenge
Users interested in Social-IQ-2.0-Challenge are comparing it to the repositories listed below
- Repo for the paper "Paxion: Patching Action Knowledge in Video-Language Foundation Models" (NeurIPS 2023 Spotlight) ☆37 · Updated 2 years ago
- [NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models ☆157 · Updated last year
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆195 · Updated 2 years ago
- Official implementation of "Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data" (ICLR 2024) ☆34 · Updated last year
- ☆101 · Updated 3 years ago
- PyTorch code for "Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners" ☆116 · Updated 3 years ago
- Code release for "MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound" ☆146 · Updated 3 years ago
- Official repository for the A-OKVQA dataset ☆109 · Updated last year
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆89 · Updated last year
- Official repo for the CVPR 2022 (Oral) paper: Revisiting the "Video" in Video-Language Understanding. Contains code for the Atemporal Probe (… ☆50 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆134 · Updated 2 years ago
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models. ☆139 · Updated 2 years ago
- Large Language Models are Temporal and Causal Reasoners for Video Question Answering (EMNLP 2023) ☆77 · Updated 10 months ago
- Summary of Video-to-Text datasets. This repository is part of the review paper *Bridging Vision and Language from the Video-to-Text Pe… ☆132 · Updated 2 years ago
- ☆67 · Updated 2 years ago
- NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions (CVPR'21) ☆183 · Updated 5 months ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆156 · Updated last year
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆90 · Updated last year
- Official implementation of "A Simple LLM Framework for Long-Range Video Question-Answering" ☆106 · Updated last year
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023) ☆107 · Updated last year
- Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale" (ECCV 2024) ☆58 · Updated 5 months ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆127 · Updated 9 months ago
- MAD: A Scalable Dataset for Language Grounding in Videos from Movie Audio Descriptions ☆173 · Updated 2 years ago
- Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning". ☆70 · Updated last year
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆102 · Updated last year
- ChatBridge, an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without rely… ☆54 · Updated 2 years ago
- Official code for our CVPR 2023 paper: "Test of Time: Instilling Video-Language Models with a Sense of Time" ☆46 · Updated last year
- An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA, AAAI 2022 (Oral) ☆87 · Updated 3 years ago
- PyTorch code for “TVLT: Textless Vision-Language Transformer” (NeurIPS 2022 Oral) ☆124 · Updated 2 years ago
- ☆108 · Updated last year