jungokasai / THumB · ☆15 · Updated 2 years ago
Alternatives and similar repositories for THumB:
Users interested in THumB are comparing it to the libraries listed below.
- Code and data for "Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning" (EMNLP 2021) · ☆28 · Updated 3 years ago
- [ACL 2024] FLEUR: An Explainable Reference-Free Evaluation Metric for Image Captioning Using a Large Multimodal Model · ☆13 · Updated 5 months ago
- [ICML 2022] Code and data for our paper "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages" · ☆49 · Updated 2 years ago
- ☆24 · Updated 3 years ago
- PyTorch version of VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer (NeurIPS 2021) · ☆56 · Updated 2 years ago
- ☆18 · Updated 8 months ago
- Source code and data for "Things not Written in Text: Exploring Spatial Commonsense from Visual Signals" (ACL 2022 main conference paper) · ☆19 · Updated 2 years ago
- ☆44 · Updated 2 years ago
- Code and data for ImageCoDe, a contextual vision-and-language benchmark · ☆39 · Updated 11 months ago
- A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models (ACL 2022) · ☆41 · Updated 2 years ago
- [TACL 2021] Code and data for the framework in "Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-La…" · ☆114 · Updated 2 years ago
- ☆40 · Updated 2 years ago
- Code for "Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality" (EMNLP 2022) · ☆30 · Updated last year
- CVPR 2021 official PyTorch code for "UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training" · ☆34 · Updated 3 years ago
- ☆15 · Updated 2 years ago
- Extended Intramodal and Intermodal Semantic Similarity Judgments for MS-COCO · ☆51 · Updated 4 years ago
- VaLM: Visually-augmented Language Modeling (ICLR 2023) · ☆56 · Updated last year
- The SVO-Probes Dataset for Verb Understanding · ☆31 · Updated 3 years ago
- Code, data, and models for the Sherlock corpus · ☆55 · Updated 2 years ago
- Code for ViLBERTScore (EMNLP Eval4NLP workshop) · ☆18 · Updated 2 years ago
- Code used in our ACL 2020 paper "History for Visual Dialog: Do we really need it?" · ☆34 · Updated last year
- Code repository for the ACL 2022 paper "On Vision Features in Multimodal Machine Translation". We provide the details and… · ☆44 · Updated 2 years ago
- ☆13 · Updated 3 years ago
- Code for lifelong few-shot language learning · ☆55 · Updated 3 years ago
- Implementation of the paper "Unified Multimodal Model with Unlikelihood Training for Visual Dialog" · ☆13 · Updated last year
- Dataset and source code for the EMNLP 2019 paper "What You See is What You Get: Visual Pronoun Coreference Resolution in Dialogues" · ☆25 · Updated 3 years ago
- ☆116 · Updated 2 years ago
- Implementation of "Visualize Before You Write: Imagination-Guided Open-Ended Text Generation" · ☆17 · Updated 2 years ago
- Source code and data used in the papers ViQuAE (Lerner et al., SIGIR'22), Multimodal ICT (Lerner et al., ECIR'23), and Cross-modal Retriev… · ☆31 · Updated 2 months ago
- ☆16 · Updated 2 years ago