ytaek-oh / awesome-vl-compositionality
Awesome Vision-Language Compositionality, a comprehensive curation of research papers from the literature.
☆30 · Updated 8 months ago
Alternatives and similar repositories for awesome-vl-compositionality
Users interested in awesome-vl-compositionality are comparing it to the repositories listed below.
- NegCLIP ☆37 · Updated 2 years ago
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval ☆41 · Updated 6 months ago
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆65 · Updated last month
- Official implementation of the CVPR 2024 paper "Prompt Learning via Meta-Regularization" ☆30 · Updated 7 months ago
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images ☆29 · Updated last year
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆52 · Updated 7 months ago
- [CVPRW-25 MMFM] Official repository of the paper titled "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite fo…" ☆50 · Updated last year
- [CVPR 2025 Highlight] Your Large Vision-Language Model Only Needs A Few Attention Heads For Visual Grounding ☆43 · Updated 2 months ago
- Official PyTorch code for GroundVQA (CVPR'24) ☆64 · Updated last year
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆37 · Updated last year
- [CVPR 2025] COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training ☆35 · Updated 7 months ago
- Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning" ☆64 · Updated last year
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ☆91 · Updated last year
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆171 · Updated last year
- Test-time Prompt Tuning (TPT) for zero-shot generalization in vision-language models (NeurIPS 2022) ☆199 · Updated 3 years ago
- [ICML 2024] Repo for the paper "Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models" ☆21 · Updated 10 months ago
- Code and data for the paper "Emergent Visual-Semantic Hierarchies in Image-Text Representations" (ECCV 2024) ☆31 · Updated last year
- Code for the paper "Nullu: Mitigating Object Hallucinations in Large Vision-Language Models via HalluSpace Projection" ☆46 · Updated 7 months ago
- [ICCV 2023] Prompt-aligned Gradient for Prompt Tuning ☆167 · Updated 2 years ago
- [ICML 2024] "Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models" ☆57 · Updated last year
- ☆20 · Updated 3 months ago
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆65 · Updated 8 months ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆86 · Updated last year
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆58 · Updated last year
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆50 · Updated last year
- [NeurIPS 2023] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization ☆108 · Updated last year
- [ICLR 2025] Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion ☆54 · Updated 6 months ago
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ☆73 · Updated 9 months ago
- [ICCV 2023] With a Little Help from your own Past: Prototypical Memory Networks for Image Captioning ☆19 · Updated last year
- ☆16 · Updated last year