aimagelab / COGT
[ICLR 2025] Causal Graphical Models for Vision-Language Compositional Understanding
☆9 · Updated 2 months ago
Alternatives and similar repositories for COGT
Users interested in COGT are comparing it to the repositories listed below.
- ☆12 · Updated 5 months ago
- ☆10 · Updated 2 months ago
- [CVPR 2025] Augmenting Multimodal LLMs with Self-Reflective Tokens for Knowledge-based Visual Question Answering ☆35 · Updated 2 months ago
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated 3 months ago
- [ECCV 2024] Official repository for "DataDream: Few-shot Guided Dataset Generation" ☆40 · Updated 11 months ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 7 months ago
- ☆22 · Updated last year
- [BMVC 2024 Oral ✨] Revisiting Image Captioning Training Paradigm via Direct CLIP-based Optimization ☆18 · Updated 9 months ago
- [CVPR 2023] Positive-Augmented Contrastive Learning for Image and Video Captioning Evaluation ☆62 · Updated 3 months ago
- Ref-Diff: Zero-shot Referring Image Segmentation with Generative Models ☆16 · Updated 3 weeks ago
- [CVPR 2024] The official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding" ☆43 · Updated last week
- Official Repository of Personalized Visual Instruct Tuning ☆29 · Updated 3 months ago
- ☆10 · Updated 11 months ago
- Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning ☆23 · Updated last month
- Repository for the paper: Teaching VLMs to Localize Specific Objects from In-context Examples ☆23 · Updated 7 months ago
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆41 · Updated 6 months ago
- [EMNLP 2024] Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality ☆16 · Updated 8 months ago
- ☆31 · Updated 9 months ago
- Rui Qian, Xin Yin, Dejing Dou†: Reasoning to Attend: Try to Understand How <SEG> Token Works (CVPR 2025) ☆35 · Updated last month
- [NeurIPS 2024] Mixture of Experts for Audio-Visual Learning ☆15 · Updated 5 months ago
- ☆14 · Updated 6 months ago
- This repository contains the implementation of our NeurIPS'24 paper "Temporal Sentence Grounding with Relevance Feedback in Videos" ☆10 · Updated 6 months ago
- [CVPR 2025] COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training ☆21 · Updated 3 months ago
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" ☆35 · Updated 10 months ago
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding ☆43 · Updated 5 months ago
- ☆42 · Updated 7 months ago
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 · Updated 2 years ago
- ☆11 · Updated 8 months ago
- Code and data setup for the paper "Are Diffusion Models Vision-and-language Reasoners?" ☆32 · Updated last year
- ☆15 · Updated 7 months ago