BatsResearch / ex2
If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions
☆16 · Updated last year
Alternatives and similar repositories for ex2:
Users interested in ex2 are comparing it to the libraries listed below.
- Follow-Up Differential Descriptions: Language Models Resolve Ambiguities for Image Classification ☆11 · Updated last year
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023. ☆32 · Updated last year
- ☆29 · Updated 2 years ago
- [ICCV23] Official implementation of eP-ALM: Efficient Perceptual Augmentation of Language Models. ☆27 · Updated last year
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆31 · Updated last year
- Official Code Release for "Diagnosing and Rectifying Vision Models using Language" (ICLR 2023) ☆33 · Updated last year
- Code for T-MARS data filtering ☆35 · Updated last year
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 5 months ago
- Code for "CLIP Behaves like a Bag-of-Words Model Cross-modally but not Uni-modally" ☆11 · Updated 2 months ago
- Code for "Are "Hierarchical" Visual Representations Hierarchical?" in the NeurIPS Workshop on Symmetry and Geometry in Neural Representations ☆20 · Updated last year
- [ICLR 23] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning ☆38 · Updated last year
- Official repository for the ICCV 2023 paper: "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts" ☆57 · Updated last year
- [ACL 2023] Delving into the Openness of CLIP ☆23 · Updated 2 years ago
- Code for 'Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality', EMNLP 2022 ☆30 · Updated last year
- The released data for the paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models". ☆32 · Updated last year
- ☆10 · Updated 6 months ago
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 · Updated last year
- [CVPR23 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆32 · Updated 2 years ago
- Compress conventional Vision-Language Pre-training data ☆49 · Updated last year
- ☆17 · Updated 9 months ago
- An Enhanced CLIP Framework for Learning with Synthetic Captions ☆28 · Updated 2 weeks ago
- Do Vision and Language Models Share Concepts? A Vector Space Alignment Study ☆14 · Updated 5 months ago
- Public code repo for EMNLP 2024 Findings paper "MACAROON: Training Vision-Language Models To Be Your Engaged Partners" ☆14 · Updated 7 months ago
- Code for "Multitask Vision-Language Prompt Tuning" https://arxiv.org/abs/2211.11720 ☆56 · Updated 11 months ago
- Code to reproduce the experiments in the paper: Does CLIP Bind Concepts? Probing Compositionality in Large Image Models. ☆14 · Updated last year
- ☆32 · Updated last year
- PyTorch code for the paper "An Empirical Study of Multimodal Model Merging" ☆38 · Updated last year
- We introduce a new approach, Token Reduction using CLIP Metric (TRIM), aimed at improving the efficiency of MLLMs without sacrificing their performance. ☆12 · Updated 4 months ago
- Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? ☆13 · Updated 3 months ago
- ☆26 · Updated 3 years ago
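A common thread across ex2 and several of the repositories above (Follow-Up Differential Descriptions, "Waffling around for Performance", "Diagnosing and Rectifying Vision Models using Language") is CLIP-style zero-shot classification, where class scores come from image-text similarity against natural-language concept descriptions. The sketch below shows that baseline mechanism using the Hugging Face transformers CLIP API; the checkpoint name is the standard public one, and the class descriptions and image path are illustrative placeholders, not taken from any repository listed here.

```python
# Minimal sketch of CLIP zero-shot classification over concept descriptions,
# the mechanism most of the repos above build on or probe.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical classes, each phrased as an attribute-style description.
descriptions = [
    "a photo of a sparrow, a small brown bird with a short beak",
    "a photo of a crow, a large black bird",
    "a photo of a cardinal, a bright red bird with a crest",
]

image = Image.open("bird.jpg")  # placeholder path

inputs = processor(text=descriptions, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds scaled image-text cosine similarities; softmax
# turns them into a distribution over the candidate descriptions.
probs = outputs.logits_per_image.softmax(dim=-1)
for desc, p in zip(descriptions, probs[0].tolist()):
    print(f"{p:.3f}  {desc}")
```

Which description style wins (plain class names, attribute-rich captions, or even random words, per the Waffling paper) is exactly the kind of question these repositories study.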