ytaek-oh / vl_compo
☆10 · Updated Jul 5, 2024
Alternatives and similar repositories for vl_compo
Users interested in vl_compo are comparing it to the repositories listed below.
- Implementation of CounterCurate, a data curation pipeline for both physical and semantic counterfactual image-caption pairs. ☆19 · Updated Jun 27, 2024
- [CVPR 2024] The official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding" ☆50 · Updated Jun 16, 2025
- Awesome Vision-Language Compositionality, a comprehensive curation of research papers in the literature. ☆34 · Updated Feb 13, 2025
- [ICCV 2023] Going Beyond Nouns With Vision & Language Models Using Synthetic Data ☆14 · Updated Sep 30, 2023
- Project for the SNARE benchmark ☆11 · Updated Jun 5, 2024
- Collaborative retina modelling across datasets and species. ☆17 · Updated Feb 5, 2026
- Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? ☆15 · Updated Jun 3, 2025
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆25 · Updated Nov 23, 2024
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" ☆37 · Updated Aug 18, 2024
- The SVO-Probes dataset for verb understanding ☆31 · Updated Jan 28, 2022
- VisualGPTScore for visio-linguistic reasoning ☆27 · Updated Oct 7, 2023
- ☆14 · Updated Dec 31, 2024
- Repository for "SELECT: A Large-Scale Benchmark of Data Curation Strategies for Image Recognition" ☆16 · Updated Oct 8, 2024
- Code and datasets for "Text Encoders are Performance Bottlenecks in Contrastive Vision-Language Models". Coming soon! ☆11 · Updated May 24, 2023
- Repository for the paper "Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models" ☆27 · Updated Nov 29, 2023
- Code for "Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality" (EMNLP 2022) ☆31 · Updated May 29, 2023
- [EMNLP 2024] IFCap: Image-like Retrieval and Frequency-based Entity Filtering for Zero-shot Captioning ☆15 · Updated May 13, 2025
- ☆37 · Updated Oct 7, 2023
- Enhancing Multimodal Compositional Reasoning of Visual Language Models with Generative Negative Mining (WACV 2024) ☆14 · Updated Jan 3, 2024
- Data repository for the VALSE benchmark. ☆37 · Updated Feb 15, 2024
- ☆20 · Updated Apr 23, 2024
- Official code repository for the paper "Merging Models on the Fly Without Retraining: A Sequential Approach to Scalable Continual Model Merging" ☆22 · Updated Oct 11, 2025
- [NeurIPS 2024] VisMin: Visual Minimal-Change Understanding ☆19 · Updated Mar 3, 2025
- ☆17 · Updated Dec 13, 2023
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Updated Apr 4, 2024
- Official code for the NeurIPS 2023 paper "3D-Aware Visual Question Answering about Parts, Poses and Occlusions" ☆19 · Updated Oct 17, 2024
- ☆16 · Updated Jan 3, 2023
- SIEVE: Multimodal Dataset Pruning using Image-Captioning Models (CVPR 2024) ☆18 · Updated Apr 28, 2024
- Repository for the paper "Teaching Structured Vision & Language Concepts to Vision & Language Models" ☆48 · Updated Sep 25, 2023
- Code for "CLIP Behaves like a Bag-of-Words Model Cross-modally but not Uni-modally" ☆19 · Updated Feb 14, 2025
- Code for the paper "Compositor: Bottom-Up Clustering and Compositing for Robust Part and Object Segmentation" ☆17 · Updated Mar 20, 2025
- An Examination of the Compositionality of Large Generative Vision-Language Models ☆19 · Updated Apr 9, 2024
- [ECCV 2024] Official repository for "BEAF: Observing BEfore-AFter Changes to Evaluate Hallucination in Vision-language Models" ☆21 · Updated Mar 26, 2025
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" (NeurIPS 2024) ☆61 · Updated Dec 10, 2024
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆55 · Updated Apr 7, 2025
- Official code for "Can We Talk Models Into Seeing the World Differently?" (ICLR 2025) ☆27 · Updated Jan 26, 2025
- Baseline model for the ObjectNet competition ☆18 · Updated Jan 13, 2021
- SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models ☆21 · Updated Jan 11, 2024
- Repository for the PopulAtion Parameter Averaging (PAPA) paper ☆30 · Updated Apr 11, 2024