keio-smilab24 / Polos
[CVPR24 Highlights] Polos: Multimodal Metric Learning from Human Feedback for Image Captioning
☆31 · Updated 5 months ago
Alternatives and similar repositories for Polos
Users interested in Polos are comparing it to the repositories listed below.
- Code and models for "GeneCIS: A Benchmark for General Conditional Image Similarity" ☆61 · Updated 2 years ago
- [CVPR 2023 & IJCV 2025] Positive-Augmented Contrastive Learning for Image and Video Captioning Evaluation ☆64 · Updated 3 months ago
- [ICLR 23] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- https://arxiv.org/abs/2209.15162 ☆52 · Updated 2 years ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 11 months ago
- ☆53 · Updated 2 months ago
- Compress conventional Vision-Language Pre-training data ☆52 · Updated 2 years ago
- Repository for the paper "Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models" ☆27 · Updated last year
- Implementation and dataset for the paper "Can MLLMs Perform Text-to-Image In-Context Learning?" ☆41 · Updated 5 months ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆86 · Updated last year
- ☆53 · Updated 3 years ago
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" ☆36 · Updated last year
- VPEval codebase from "Visual Programming for Text-to-Image Generation and Evaluation" (NeurIPS 2023) ☆44 · Updated last year
- ☆35 · Updated last year
- ☆37 · Updated 2 years ago
- ☆61 · Updated 2 years ago
- [CVPR 2024] The official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding" ☆49 · Updated 4 months ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆138 · Updated 2 years ago
- [NeurIPS 2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?" ☆178 · Updated last year
- Code for "Multitask Vision-Language Prompt Tuning" https://arxiv.org/abs/2211.11720 ☆57 · Updated last year
- Code base of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs ☆100 · Updated 7 months ago
- An Enhanced CLIP Framework for Learning with Synthetic Captions ☆37 · Updated 6 months ago
- Code and data setup for the paper "Are Diffusion Models Vision-and-Language Reasoners?" ☆33 · Updated last year
- [ECCV’24] Official repository for "BEAF: Observing BEfore-AFter Changes to Evaluate Hallucination in Vision-language Models" ☆21 · Updated 7 months ago
- Official repository for the MMFM challenge ☆25 · Updated last year
- FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions ☆55 · Updated last year
- [ICCV 2023 (Oral)] Open-domain Visual Entity Recognition: Towards Recognizing Millions of Wikipedia Entities ☆43 · Updated 5 months ago
- ☆30 · Updated 2 years ago
- (ACL 2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning ☆36 · Updated last year
- ☆120 · Updated 2 years ago