eric-ai-lab / ComCLIP
Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching"
☆37 · Updated last year
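For context, compositional image-text matching asks a model to distinguish captions that differ only in word order or attribute binding. Below is a minimal sketch of the vanilla CLIP matching baseline that training-free methods like ComCLIP build on; it assumes the Hugging Face `transformers` CLIP API and a hypothetical input image, and is not the paper's own code:

```python
# Sketch of plain CLIP image-text matching (the baseline, not ComCLIP itself).
# Assumptions: Hugging Face transformers is installed and "example.jpg" exists.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # hypothetical input image
# Two captions with identical words but different composition:
captions = ["a dog chasing a cat", "a cat chasing a dog"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds temperature-scaled image-text similarities;
# softmax turns them into relative matching scores for the two captions.
scores = outputs.logits_per_image.softmax(dim=-1)
print(scores)  # vanilla CLIP often scores such word-order swaps nearly 50/50
```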
Alternatives and similar repositories for ComCLIP
Users interested in ComCLIP are comparing it to the repositories listed below.
- ☆35 · Updated last year
- VPEval codebase from "Visual Programming for Text-to-Image Generation and Evaluation" (NeurIPS 2023) ☆45 · Updated 2 years ago
- ☆40 · Updated last year
- [CVPR 2024] The official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding" ☆50 · Updated 6 months ago
- [ICLR 2023] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- [ICCV 2023] Going Beyond Nouns With Vision & Language Models Using Synthetic Data ☆14 · Updated 2 years ago
- An official PyTorch implementation of CLIPPR ☆29 · Updated 2 years ago
- Implementation of CounterCurate, a data curation pipeline for both physical and semantic counterfactual image-caption pairs ☆19 · Updated last year
- Official repo for the TMLR paper "Discffusion: Discriminative Diffusion Models as Few-shot Vision and Language Learners" ☆30 · Updated last year
- ☆37 · Updated 2 years ago
- ☆30 · Updated 2 years ago
- ☆58 · Updated last year
- Code for the paper "Unified Text-to-Image Generation and Retrieval" ☆16 · Updated last year
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆57 · Updated 2 years ago
- Repository for the paper "Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models" ☆27 · Updated 2 years ago
- Code and models for "GeneCIS: A Benchmark for General Conditional Image Similarity" ☆61 · Updated 2 years ago
- Codebase of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs ☆101 · Updated 9 months ago
- Code for our ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts" ☆79 · Updated last year
- A benchmark dataset and simple code examples for measuring the perception and reasoning of multi-sensor vision-language models ☆19 · Updated last year
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆42 · Updated 8 months ago
- Official repository of Personalized Visual Instruct Tuning ☆33 · Updated 9 months ago
- Training code for CLIP-FlanT5 ☆30 · Updated last year
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆40 · Updated 9 months ago
- [ECCV 2024] Parrot Captions Teach CLIP to Spot Text ☆66 · Updated last year
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆25 · Updated last year
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters" ☆69 · Updated 8 months ago
- ☆46 · Updated last year
- [CVPR 2023 & IJCV 2025] Positive-Augmented Contrastive Learning for Image and Video Captioning Evaluation ☆64 · Updated 4 months ago
- Official implementation of the paper "The Hidden Language of Diffusion Models" ☆77 · Updated last year
- ☆53 · Updated 11 months ago