harsh19 / spot-the-diff
EMNLP 2018. Learning to Describe Differences Between Pairs of Similar Images. Harsh Jhamtani, Taylor Berg-Kirkpatrick.
☆63 · Updated 5 years ago
Alternatives and similar repositories for spot-the-diff
Users interested in spot-the-diff are comparing it to the repositories listed below.
- Code and dataset release for Park et al., Robust Change Captioning (ICCV 2019) ☆48 · Updated 2 years ago
- Data of ACL 2019 paper "Expressing Visual Relationships via Language" ☆62 · Updated 4 years ago
- CLIP4IDC: CLIP for Image Difference Captioning (AACL 2022) ☆34 · Updated 2 years ago
- Source code and pre-trained/fine-tuned checkpoints for the NAACL 2021 paper LightningDOT ☆72 · Updated 2 years ago
- PyTorch code for "Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners" ☆115 · Updated 2 years ago
- This repository provides data for the VAW dataset as described in the CVPR 2021 paper titled "Learning to Predict Visual Attributes in th…" ☆65 · Updated 2 years ago
- Reliably download millions of images efficiently ☆116 · Updated 4 years ago
- RareAct: A video dataset of unusual interactions ☆32 · Updated 4 years ago
- kdexd/coco-caption@de6f385 ☆26 · Updated 5 years ago
- ☆43 · Updated 4 years ago
- [ICCV 2021 Oral + TPAMI] Just Ask: Learning to Answer Questions from Millions of Narrated Videos ☆121 · Updated last year
- [ECCV 2022] Contrastive Vision-Language Pre-training with Limited Resources ☆45 · Updated 2 years ago
- Research code for "Training Vision-Language Transformers from Captions Alone" ☆34 · Updated 2 years ago
- Command-line tool for downloading and extending the RedCaps dataset ☆47 · Updated last year
- Starter code for the VALUE benchmark ☆80 · Updated 2 years ago
- [CVPR 2022] The code for our paper "Object-aware Video-language Pre-training for Retrieval" ☆62 · Updated 2 years ago
- Extended Intramodal and Intermodal Semantic Similarity Judgments for MS-COCO ☆51 · Updated 4 years ago
- Code for "On diversity in image captioning: metrics and methods" ☆8 · Updated 4 years ago
- ☆108 · Updated 2 years ago
- ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration ☆56 · Updated last year
- Dataset and starter code for the visual entailment dataset ☆109 · Updated 3 years ago
- Code for "Bootstrap, review, decode: using out-of-domain textual data to improve image captioning" ☆20 · Updated 8 years ago
- The SVO-Probes Dataset for Verb Understanding ☆31 · Updated 3 years ago
- A VideoQA dataset based on videos from ActivityNet ☆74 · Updated 4 years ago
- [EMNLP 2021] Code and data for our paper "Vision-and-Language or Vision-for-Language? On Cross-Modal Influence in Multimodal Transformers…" ☆20 · Updated 3 years ago
- Dense video captioning in PyTorch ☆41 · Updated 5 years ago
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" ☆134 · Updated 2 years ago
- Visual Question Reasoning on General Dependency Tree ☆30 · Updated 6 years ago
- ☆32 · Updated 6 years ago
- [CVPR21] Visual Semantic Role Labeling for Video Understanding (https://arxiv.org/abs/2104.00990) ☆60 · Updated 3 years ago