Aldenhovel / bleu-rouge-meteor-cider-spice-eval4imagecaption
Evaluation tools for image captioning. Including BLEU, ROUGE-L, CIDEr, METEOR, SPICE scores.
☆29 · Updated 2 years ago
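These captioning metrics are most commonly computed with the MS COCO caption evaluation code, available on PyPI as pycocoevalcap. The snippet below is a minimal sketch of that package's usage, not necessarily this repository's own interface; the sample captions are placeholders, and the tokenizer, METEOR, and SPICE scorers require a Java runtime.

```python
# Minimal sketch using pycocoevalcap (https://github.com/salaniz/pycocoevalcap).
from pycocoevalcap.tokenizer.ptbtokenizer import PTBTokenizer
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.rouge.rouge import Rouge
from pycocoevalcap.cider.cider import Cider
from pycocoevalcap.meteor.meteor import Meteor
from pycocoevalcap.spice.spice import Spice

# Reference and generated captions, keyed by image id (placeholder data).
gts = {"img1": [{"caption": "a dog runs on the grass"}]}
res = {"img1": [{"caption": "a dog is running on grass"}]}

# PTB tokenization (requires Java), as in the standard COCO evaluation pipeline.
tokenizer = PTBTokenizer()
gts, res = tokenizer.tokenize(gts), tokenizer.tokenize(res)

scorers = [
    (Bleu(4), ["BLEU-1", "BLEU-2", "BLEU-3", "BLEU-4"]),
    (Rouge(), "ROUGE-L"),
    (Cider(), "CIDEr"),
    (Meteor(), "METEOR"),  # requires Java
    (Spice(), "SPICE"),    # requires Java
]
for scorer, name in scorers:
    score, _ = scorer.compute_score(gts, res)
    if isinstance(name, list):  # Bleu returns one score per n-gram order
        for n, s in zip(name, score):
            print(f"{n}: {s:.4f}")
    else:
        print(f"{name}: {score:.4f}")
```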
Alternatives and similar repositories for bleu-rouge-meteor-cider-spice-eval4imagecaption
Users interested in bleu-rouge-meteor-cider-spice-eval4imagecaption are comparing it to the repositories listed below.
- Code to train CLIP model ☆111 · Updated 3 years ago
- [CVPR 2023] Positive-Augmented Contrastive Learning for Image and Video Captioning Evaluation ☆61 · Updated 2 months ago
- [ICLR 2023] DeCap: Decoding CLIP Latents for Zero-shot Captioning ☆133 · Updated 2 years ago
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆49 · Updated last year
- [ACL 2023] MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning ☆35 · Updated 9 months ago
- ☆61 · Updated last year
- [CVPR 2023] A cascaded diffusion captioning model with a novel semantic-conditional diffusion process that upgrades conventional diffusion m… ☆63 · Updated 11 months ago
- Natural language guided image captioning ☆82 · Updated last year
- SmallCap: Lightweight Image Captioning Prompted with Retrieval Augmentation ☆108 · Updated last year
- ☆59 · Updated last year
- An official implementation for "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval" ☆157 · Updated last year
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆64 · Updated last year
- [CVPR 2024] MeaCap: Memory-Augmented Zero-shot Image Captioning ☆47 · Updated 9 months ago
- [CVPR 2023] The code for "Position-guided Text Prompt for Vision-Language Pre-training" ☆152 · Updated last year
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆55 · Updated 3 months ago
- Code for studying OpenAI's CLIP explainability ☆31 · Updated 3 years ago
- A curated list of vision-and-language pre-training (VLP). :-) ☆58 · Updated 2 years ago
- NegCLIP ☆31 · Updated 2 years ago
- [ICLR 2023] Official code repository for "Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning" ☆59 · Updated last year
- LLaVA-NeXT-Image-Llama3-Lora, modified from https://github.com/arielnlee/LLaVA-1.6-ft ☆44 · Updated 10 months ago
- [ICLR 2024, Spotlight] Sentence-level Prompts Benefit Composed Image Retrieval ☆83 · Updated last year
- [ICCV 2023] ViLLA: Fine-grained vision-language representation learning from real-world data ☆44 · Updated last year
- Implementation of the paper https://arxiv.org/abs/2210.04559 ☆54 · Updated 2 years ago
- SimVLM: Simple Visual Language Model Pretraining with Weak Supervision ☆36 · Updated 2 years ago
- ☆84 · Updated 2 years ago
- [ICCV 2023] ALIP: Adaptive Language-Image Pre-training with Synthetic Caption ☆98 · Updated last year
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆155 · Updated 2 years ago
- MixGen: A New Multi-Modal Data Augmentation ☆122 · Updated 2 years ago
- A PyTorch implementation of Multimodal Few-Shot Learning with Frozen Language Models with OPT ☆43 · Updated 2 years ago
- [CVPRW-25 MMFM] Official repository of paper titled "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite fo… ☆47 · Updated 8 months ago