ajd12342 / why-winoground-hard
Code for 'Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality', EMNLP 2022
☆31 · May 29, 2023 · Updated 2 years ago
Alternatives and similar repositories for why-winoground-hard
Users interested in why-winoground-hard are comparing it to the repositories listed below.
- [CVPR23 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆35 · Apr 27, 2023 · Updated 2 years ago
- Data repository for the VALSE benchmark. ☆37 · Feb 15, 2024 · Updated 2 years ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆25 · Nov 23, 2024 · Updated last year
- ☆10 · Jul 5, 2024 · Updated last year
- Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR … ☆292 · Jun 7, 2023 · Updated 2 years ago
- The SVO-Probes Dataset for Verb Understanding ☆31 · Jan 28, 2022 · Updated 4 years ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆89 · Feb 13, 2024 · Updated 2 years ago
- This is the implementation of CounterCurate, the data curation pipeline of both physical and semantic counterfactual image-caption pairs. ☆19 · Jun 27, 2024 · Updated last year
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆55 · Apr 7, 2025 · Updated 10 months ago
- An Examination of the Compositionality of Large Generative Vision-Language Models ☆19 · Apr 9, 2024 · Updated last year
- Create generated datasets and train robust classifiers ☆36 · Sep 1, 2023 · Updated 2 years ago
- Code for Beyond Generic: Enhancing Image Captioning with Real-World Knowledge using Vision-Language Pre-Training Model ☆13 · Feb 15, 2024 · Updated 2 years ago
- Repository for the paper: Teaching Structured Vision & Language Concepts to Vision & Language Models ☆48 · Sep 25, 2023 · Updated 2 years ago
- Toolkit for Elevater Benchmark ☆76 · Oct 17, 2023 · Updated 2 years ago
- Official repo for the TMLR paper "Discffusion: Discriminative Diffusion Models as Few-shot Vision and Language Learners" ☆30 · Apr 27, 2024 · Updated last year
- Collaborative retina modelling across datasets and species. ☆16 · Updated this week
- Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? ☆15 · Jun 3, 2025 · Updated 8 months ago
- ☆13 · Jul 2, 2025 · Updated 7 months ago
- [ICCV 2023] Going Beyond Nouns With Vision & Language Models Using Synthetic Data ☆14 · Sep 30, 2023 · Updated 2 years ago
- Training code for CLIP-FlanT5 ☆30 · Jul 29, 2024 · Updated last year
- ☆27 · Aug 28, 2023 · Updated 2 years ago
- VisualGPTScore for visio-linguistic reasoning ☆27 · Oct 7, 2023 · Updated 2 years ago
- Code and data for the paper: Learning Action and Reasoning-Centric Image Editing from Videos and Simulation ☆33 · Jun 30, 2025 · Updated 7 months ago
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" ☆37 · Aug 18, 2024 · Updated last year
- Code to reproduce the experiments in the paper: Does CLIP Bind Concepts? Probing Compositionality in Large Image Models. ☆16 · Oct 14, 2023 · Updated 2 years ago
- This is the repository for "SELECT: A Large-Scale Benchmark of Data Curation Strategies for Image Recognition" ☆16 · Oct 8, 2024 · Updated last year
- Retrieval-augmented Image Captioning ☆13 · Feb 16, 2023 · Updated 2 years ago
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 · May 24, 2023 · Updated 2 years ago
- This repository houses the code for the paper "The Neglected Tails of VLMs" ☆30 · Dec 31, 2025 · Updated last month
- [EMNLP 2024] IFCap: Image-like Retrieval and Frequency-based Entity Filtering for Zero-shot Captioning ☆15 · May 13, 2025 · Updated 9 months ago
- [ICCV'23 Oral] The introduction and toolkit for EqBen Benchmark ☆124 · Dec 11, 2023 · Updated 2 years ago
- ☆37 · Oct 7, 2023 · Updated 2 years ago
- [ACL 2023] PyTorch implementation of Zero- and Few-Shot Event Detection via Prompt-Based Meta Learning ☆16 · Jun 6, 2023 · Updated 2 years ago
- Enhancing Multimodal Compositional Reasoning of Visual Language Models with Generative Negative Mining, WACV 2024 ☆14 · Jan 3, 2024 · Updated 2 years ago
- Unit Scaling demo and experimentation code ☆16 · Mar 12, 2024 · Updated last year
- Codebase for VidHal: Benchmarking Hallucinations in Vision LLMs ☆14 · Apr 19, 2025 · Updated 9 months ago
- ☆16 · May 23, 2023 · Updated 2 years ago
- ☆17 · Dec 13, 2023 · Updated 2 years ago
- ☆20 · Apr 23, 2024 · Updated last year