aimagelab / awesome-captioning-evaluation
[IJCAI 2025] Image Captioning Evaluation in the Age of Multimodal LLMs: Challenges and Future Perspectives
☆25 · Updated last month
Alternatives and similar repositories for awesome-captioning-evaluation
Users interested in awesome-captioning-evaluation are comparing it to the repositories listed below.
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆88 · Updated last year
- Repository for the paper "Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models" ☆27 · Updated 2 years ago
- Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR … ☆288 · Updated 2 years ago
- Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning" ☆68 · Updated last year
- [CVPR 2024 Highlight] Polos: Multimodal Metric Learning from Human Feedback for Image Captioning ☆33 · Updated 7 months ago
- [CVPR 2025] Recurrence-Enhanced Vision-and-Language Transformers for Robust Multimodal Document Retrieval ☆32 · Updated 3 months ago
- [CVPR 2023 & IJCV 2025] Positive-Augmented Contrastive Learning for Image and Video Captioning Evaluation ☆64 · Updated 4 months ago
- Code and data for ImageCoDe, a contextual vision-and-language benchmark ☆41 · Updated last year
- ☆70 · Updated last year
- Official implementation for NeurIPS'23 paper "Geodesic Multi-Modal Mixup for Robust Fine-Tuning" ☆36 · Updated last year
- Official implementation of "Describing Differences in Image Sets with Natural Language" (CVPR 2024 Oral) ☆129 · Updated last month
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ☆94 · Updated last year
- [ACM TOMM 2023] Composed Image Retrieval using Contrastive Learning and Task-oriented CLIP-based Features ☆188 · Updated 2 years ago
- Official repository for the MMFM challenge ☆25 · Updated last year
- M-HalDetect Dataset Release ☆26 · Updated 2 years ago
- ☆23 · Updated last year
- The official implementation for BLIP4CIR with bi-directional training | Bi-directional Training for Composed Image Retrieval via Text Pro… ☆33 · Updated last year
- [ICCV 2023 Oral] Open-domain Visual Entity Recognition: Towards Recognizing Millions of Wikipedia Entities ☆43 · Updated 6 months ago
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆60 · Updated last year
- An Enhanced CLIP Framework for Learning with Synthetic Captions ☆38 · Updated 8 months ago
- ☆76 · Updated last year
- [ICLR 2025] Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" ☆96 · Updated 3 weeks ago
- [ICLR 2024] Official repository for "Vision-by-Language for Training-Free Compositional Image Retrieval" ☆82 · Updated last year
- Official implementation of "Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data" (ICLR 2024) ☆34 · Updated last year
- Official repository for the A-OKVQA dataset ☆106 · Updated last year
- Official PyTorch implementation of LinCIR: Language-only Training of Zero-shot Composed Image Retrieval (CVPR 2024) ☆139 · Updated last year
- ☆35 · Updated last year
- [ICML 2024] Repo for the paper "Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models" ☆22 · Updated 11 months ago
- NegCLIP ☆38 · Updated 2 years ago
- FaithScore: Fine-grained Evaluations of Hallucinations in Large Vision-Language Models ☆32 · Updated last month