McGill-NLP / diffusion-itm
Code and data setup for the paper "Are Diffusion Models Vision-and-language Reasoners?"
☆32 · Updated last year
Alternatives and similar repositories for diffusion-itm
Users interested in diffusion-itm are comparing it to the repositories listed below.
- Compress conventional Vision-Language Pre-training data ☆51 · Updated last year
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 6 months ago
- NegCLIP ☆32 · Updated 2 years ago
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆50 · Updated last month
- [ICLR 2023] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning ☆39 · Updated last year
- VisualGPTScore for visio-linguistic reasoning ☆27 · Updated last year
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆45 · Updated 10 months ago
- [CVPR 2023 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆32 · Updated 2 years ago
- [CVPR 2023] Positive-Augmented Contrastive Learning for Image and Video Captioning Evaluation ☆61 · Updated 3 months ago
- Official implementation of UPL (Unsupervised Prompt Learning for Vision-Language Models).