marthaflinderslewis / clip-binding
Code to reproduce the experiments in the paper: Does CLIP Bind Concepts? Probing Compositionality in Large Image Models.
☆14 · Updated last year
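For context, the probe in the paper comes down to comparing CLIP's image-text scores for a correctly-bound caption versus an attribute-swapped one. Below is a minimal sketch of that idea, assuming the Hugging Face `transformers` CLIP API and a hypothetical example image; it is an illustration of the probing setup, not the repository's actual evaluation code.

```python
# Minimal concept-binding probe sketch (not the paper's evaluation code).
# If CLIP binds attributes to the right objects, the correctly-bound caption
# should score higher against the image than the attribute-swapped caption.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("red_cube_blue_sphere.jpg")  # hypothetical example image
captions = [
    "a red cube and a blue sphere",  # correct binding
    "a blue cube and a red sphere",  # swapped attributes
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits_per_image = model(**inputs).logits_per_image  # shape (1, 2)
probs = logits_per_image.softmax(dim=-1)
print(dict(zip(captions, probs[0].tolist())))
```

Aggregated over many such swapped pairs, the gap between the two scores gives a measure of how reliably the model binds attributes to objects.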
Alternatives and similar repositories for clip-binding:
Users interested in clip-binding are comparing it to the repositories listed below.
- Code for 'Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality' (EMNLP 2022) ☆30 · Updated last year
- The SVO-Probes Dataset for Verb Understanding ☆31 · Updated 2 years ago
- VaLM: Visually-Augmented Language Modeling (ICLR 2023) ☆56 · Updated last year
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆14 · Updated 9 months ago
- Source code for the paper "Prefix Language Models are Unified Modal Learners" ☆43 · Updated last year
- Code, data, and models for the Sherlock corpus ☆55 · Updated 2 years ago
- [ICML 2024] Fool Your (Vision and) Language Model With Embarrassingly Simple Permutations ☆14 · Updated last year
- EMNLP 2023 - InfoSeek: A New VQA Benchmark Focused on Visual Info-Seeking Questions ☆17 · Updated 7 months ago
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o…" ☆20 · Updated last month
- Visual question answering prompting recipes for large vision-language models ☆23 · Updated 4 months ago
- [ICML 2022] Code and data for our paper "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages" ☆49 · Updated 2 years ago
- Code for the ACL 2022 paper "Continual Sequence Generation with Adaptive Compositional Modules" ☆38 · Updated 2 years ago
- [EMNLP 2022 Findings] Code for the paper "ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback" ☆25 · Updated last year
- Code for our paper "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models" ☆53 · Updated last year
- Code for Debiasing Vision-Language Models via Biased Prompts ☆55 · Updated last year
- Code and datasets for "What's "up" with vision-language models? Investigating their struggle with spatial reasoning" ☆38 · Updated 10 months ago
- PyTorch code for Improving Commonsense in Vision-Language Models via Knowledge Graph Riddles (DANCE) ☆23 · Updated 2 years ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆100 · Updated 2 years ago
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models ☆108 · Updated last year
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated last year
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆29 · Updated last year