harsh19 / spot-the-diff
EMNLP 2018. Learning to Describe Differences Between Pairs of Similar Images. Harsh Jhamtani, Taylor Berg-Kirkpatrick.
☆68 · Updated 5 years ago
Alternatives and similar repositories for spot-the-diff
Users interested in spot-the-diff are comparing it to the repositories listed below.
- Code and dataset release for Park et al., Robust Change Captioning (ICCV 2019) ☆50 · Updated 2 years ago
- This repository provides data for the VAW dataset as described in the CVPR 2021 paper titled "Learning to Predict Visual Attributes in th… ☆68 · Updated 3 years ago
- Data of ACL 2019 Paper "Expressing Visual Relationships via Language". ☆62 · Updated 5 years ago
- A length-controllable and non-autoregressive image captioning model. ☆68 · Updated 4 years ago
- Source code and pre-trained/fine-tuned checkpoint for the NAACL 2021 paper LightningDOT ☆72 · Updated 3 years ago
- A task-agnostic vision-language architecture as a step towards General Purpose Vision ☆92 · Updated 4 years ago
- ☆43 · Updated 4 years ago
- Situation With Groundings (SWiG) dataset and Joint Situation Localizer (JSL) ☆69 · Updated 4 years ago
- Reliably download millions of images efficiently ☆118 · Updated 4 years ago
- kdexd/coco-caption@de6f385 ☆26 · Updated 5 years ago
- CLIP4IDC: CLIP for Image Difference Captioning (AACL 2022) ☆36 · Updated 3 years ago
- [CVPR-2023] The official dataset of Advancing Visual Grounding with Scene Knowledge: Benchmark and Method. ☆32 · Updated 2 years ago
- Research code for "Training Vision-Language Transformers from Captions Alone" ☆34 · Updated 3 years ago
- [BMVC22] Official Implementation of ViCHA: "Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment" ☆55 · Updated 3 years ago
- Extended Intramodal and Intermodal Semantic Similarity Judgments for MS-COCO ☆54 · Updated 5 years ago
- Using LLMs and pre-trained caption models for super-human performance on image captioning. ☆42 · Updated 2 years ago
- PyTorch code for Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners ☆115 · Updated 3 years ago
- [CVPR 2020] Transform and Tell: Entity-Aware News Image Captioning ☆92 · Updated last year
- Python code for CIDEr - Consensus-based Image Caption Evaluation ☆98 · Updated 8 years ago
- [ECCV2022] Contrastive Vision-Language Pre-training with Limited Resources ☆45 · Updated 3 years ago
- [ICCV 2021 Oral + TPAMI] Just Ask: Learning to Answer Questions from Millions of Narrated Videos ☆124 · Updated 2 years ago
- ☆33 · Updated 7 years ago
- Official repository of ICCV 2021 - Image Retrieval on Real-life Images with Pre-trained Vision-and-Language Models ☆125 · Updated last month
- CVPR 2022 (Oral) PyTorch code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment ☆22 · Updated 3 years ago
- A VideoQA dataset based on the videos from ActivityNet ☆87 · Updated 5 years ago
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) ☆89 · Updated 2 years ago
- Dataset and starting code for visual entailment dataset ☆118 · Updated 3 years ago
- The download link for the dataset LAD. ☆41 · Updated 6 years ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆138 · Updated 2 years ago
- Human-like Controllable Image Captioning with Verb-specific Semantic Roles. ☆36 · Updated 3 years ago