abwilf / Social-IQ-2.0-Challenge
The Social-IQ 2.0 Challenge Release for the Artificial Social Intelligence Workshop at ICCV '23
☆22 · Updated last year
Alternatives and similar repositories for Social-IQ-2.0-Challenge:
Users interested in Social-IQ-2.0-Challenge are comparing it to the libraries listed below
- Code release for "MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound" ☆138 · Updated 2 years ago
- Repo for paper: "Paxion: Patching Action Knowledge in Video-Language Foundation Models" NeurIPS '23 Spotlight ☆37 · Updated last year
- Code and dataset of "MEmoR: A Dataset for Multimodal Emotion Reasoning in Videos" in MM'20. ☆51 · Updated last year
- PyTorch code for Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners ☆113 · Updated 2 years ago
- [ICML 2022] Code and data for our paper "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages" ☆49 · Updated 2 years ago
- ☆12 · Updated 6 months ago
- Official code for our CVPR 2023 paper: Test of Time: Instilling Video-Language Models with a Sense of Time ☆45 · Updated 7 months ago
- The SVO-Probes Dataset for Verb Understanding ☆31 · Updated 2 years ago
- Implementation for the paper "Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly" (ECCV 2022: https://arxiv.org/abs… ☆32 · Updated last year
- Official repository for the A-OKVQA dataset ☆69 · Updated 8 months ago
- Language Quantized AutoEncoders ☆95 · Updated last year
- [CVPR23 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆32 · Updated last year
- PyTorch code for “TVLT: Textless Vision-Language Transformer” (NeurIPS 2022 Oral) ☆121 · Updated last year
- ☆40 · Updated 2 years ago
- ☆28 · Updated 2 months ago
- Code for 'Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality', EMNLP 2022 ☆30 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆134 · Updated last year
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral) ☆45 · Updated 11 months ago
- [NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models ☆155 · Updated last month
- Visual Language Transformer Interpreter - An interactive visualization tool for interpreting vision-language transformers ☆86 · Updated last year
- Code, data, models for the Sherlock corpus ☆55 · Updated 2 years ago
- ☆68 · Updated last year
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models. ☆108 · Updated last year
- VisualCOMET: Reasoning about the Dynamic Context of a Still Image ☆85 · Updated last year
- Official codebase for ICLR oral paper Unsupervised Vision-Language Grammar Induction with Shared Structure Modeling ☆35 · Updated 2 years ago
- Official repo for CVPR 2022 (Oral) paper: Revisiting the "Video" in Video-Language Understanding. Contains code for the Atemporal Probe (… ☆49 · Updated 7 months ago
- Official Implementation of "Geometric Multimodal Contrastive Representation Learning" (https://arxiv.org/abs/2202.03390) ☆28 · Updated last week
- Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning". ☆38 · Updated 10 months ago
- ☆117 · Updated last year
- [CVPR 2022] A large-scale public benchmark dataset for video question-answering, especially about evidence and commonsense reasoning. The… ☆52 · Updated 6 months ago