marthaflinderslewis / clip-binding
Code to reproduce the experiments in the paper: Does CLIP Bind Concepts? Probing Compositionality in Large Image Models.
☆16 · Oct 14, 2023 · Updated 2 years ago
Alternatives and similar repositories for clip-binding
Users interested in clip-binding are comparing it to the repositories listed below.
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Apr 4, 2024 · Updated last year
- Follow-Up Differential Descriptions: Language Models Resolve Ambiguities for Image Classification ☆11 · Nov 15, 2023 · Updated 2 years ago
- [ICCV 2023] Going Beyond Nouns With Vision & Language Models Using Synthetic Data ☆14 · Sep 30, 2023 · Updated 2 years ago
- Extended Few-Shot Learning: Exploiting Existing Resources for Novel Tasks ☆11 · Jul 6, 2021 · Updated 4 years ago
- Repository for the paper "Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models" ☆27 · Nov 29, 2023 · Updated 2 years ago
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 · May 24, 2023 · Updated 2 years ago
- Official implementation of "K-Paths: Reasoning over Graph Paths for Drug Repurposing and Drug Interaction Prediction" ☆18 · Jul 8, 2025 · Updated 7 months ago
- A weak supervision framework for (partial) labeling functions ☆16 · Jul 15, 2024 · Updated last year
- ☆18 · May 19, 2023 · Updated 2 years ago
- ☆17 · Feb 26, 2024 · Updated last year
- Generate synthetic labeled data for extremely low-resource languages using bilingual lexicons ☆18 · Oct 3, 2024 · Updated last year
- Code for "Preference Tuning For Toxicity Mitigation Generalizes Across Languages", accepted at Findings of EMNLP 2024 ☆18 · Mar 25, 2025 · Updated 10 months ago
- Code for "CLIP Behaves like a Bag-of-Words Model Cross-modally but not Uni-modally" ☆19 · Feb 14, 2025 · Updated last year
- [NeurIPS 2023] Bootstrapping Vision-Language Learning with Decoupled Language Pre-training ☆26 · Dec 5, 2023 · Updated 2 years ago
- ☆21 · Apr 10, 2023 · Updated 2 years ago
- ☆24 · Oct 9, 2023 · Updated 2 years ago
- Code and data setup for the paper "Are Diffusion Models Vision-and-Language Reasoners?" ☆33 · Mar 15, 2024 · Updated last year
- A flexible toolkit for dense retrieval ☆43 · Nov 12, 2025 · Updated 3 months ago
- The SVO-Probes dataset for verb understanding ☆31 · Jan 28, 2022 · Updated 4 years ago
- ☆29 · Jun 10, 2024 · Updated last year
- [ICLR 2025] Official code repository for "TULIP: Token-length Upgraded CLIP" ☆33 · Jan 26, 2026 · Updated 2 weeks ago
- Code for "Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality", EMNLP 2022 ☆31 · May 29, 2023 · Updated 2 years ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆150 · Oct 2, 2025 · Updated 4 months ago
- [NeurIPS'25] MLLM-CompBench evaluates the comparative reasoning of MLLMs with 40K image pairs and questions across 8 dimensions of relati… ☆41 · Apr 21, 2025 · Updated 9 months ago
- Data repository for the VALSE benchmark ☆37 · Feb 15, 2024 · Updated last year
- Learning to compose soft prompts for compositional zero-shot learning ☆93 · Sep 13, 2025 · Updated 5 months ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆89 · Feb 13, 2024 · Updated 2 years ago
- ✍️ A browser add-on (Firefox, Chrome, Thunderbird) that allows you to autocorrect common text sequences and convert text characters to a … ☆12 · Updated this week
- ☆11 · Jun 7, 2023 · Updated 2 years ago
- Inspirational post IDs collected from Reddit using pushshift.io and RoBERTa ☆10 · Jan 18, 2024 · Updated 2 years ago
- App to keep track of promises ☆12 · Jan 13, 2017 · Updated 9 years ago
- [CVPR 2025] Official code for "Lost in Translation, Found in Context" ☆23 · Jan 14, 2026 · Updated last month
- Code for "Beyond Generic: Enhancing Image Captioning with Real-World Knowledge using Vision-Language Pre-Training Model" ☆13 · Feb 15, 2024 · Updated 2 years ago
- Improving Continuous Sign Language Recognition with Adapted Image Models ☆14 · Nov 10, 2025 · Updated 3 months ago
- [CVPR 2024 Highlight] ImageNet-D ☆46 · Oct 15, 2024 · Updated last year
- JAX implementation of ViT-VQGAN ☆10 · Jan 25, 2024 · Updated 2 years ago
- ☆14 · Apr 15, 2025 · Updated 9 months ago
- ☆11 · Nov 23, 2024 · Updated last year
- ☆13 · May 11, 2016 · Updated 9 years ago