Aldenhovel / bleu-rouge-meteor-cider-spice-eval4imagecaption
Evaluation tools for image captioning, including BLEU, ROUGE-L, CIDEr, METEOR, and SPICE scores.
☆31 · Updated 2 years ago
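As a rough illustration of what metrics like BLEU compute, the sketch below implements BLEU's clipped n-gram precision and brevity penalty in pure Python. This is not this repository's code (the standard reference implementations, e.g. those from coco-caption, should be used for real evaluation); it only shows the core idea behind the score.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(references, candidate, n):
    """Clipped n-gram precision: candidate n-gram counts are capped by
    the maximum count of that n-gram in any single reference."""
    cand_counts = Counter(ngrams(candidate, n))
    max_ref = Counter()
    for ref in references:
        for g, c in Counter(ngrams(ref, n)).items():
            max_ref[g] = max(max_ref[g], c)
    clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
    return clipped / max(sum(cand_counts.values()), 1)

def bleu(references, candidate, max_n=4):
    """Geometric mean of 1..max_n precisions times the brevity penalty."""
    if not candidate:
        return 0.0
    precisions = [modified_precision(references, candidate, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty against the reference closest in length
    ref = min(references, key=lambda r: abs(len(r) - len(candidate)))
    bp = 1.0 if len(candidate) > len(ref) else math.exp(1 - len(ref) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

For example, `modified_precision([["the", "cat"]], ["the", "the"], 1)` returns 0.5: the candidate's two occurrences of "the" are clipped to the single occurrence in the reference. ROUGE-L, CIDEr, METEOR, and SPICE each use different matching strategies (longest common subsequence, TF-IDF-weighted n-grams, synonym-aware alignment, and scene-graph matching, respectively).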
Alternatives and similar repositories for bleu-rouge-meteor-cider-spice-eval4imagecaption
Users interested in bleu-rouge-meteor-cider-spice-eval4imagecaption are comparing it to the libraries listed below.
- Code to train CLIP model ☆123 · Updated 3 years ago
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆160 · Updated 2 years ago
- CapDec: SOTA Zero Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (findings) ☆199 · Updated last year
- SmallCap: Lightweight Image Captioning Prompted with Retrieval Augmentation ☆121 · Updated last year
- A PyTorch implementation of Multimodal Few-Shot Learning with Frozen Language Models with OPT ☆43 · Updated 3 years ago
- Recent Advances in Vision and Language Pre-training (VLP) ☆294 · Updated 2 years ago
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) ☆165 · Updated last year
- MixGen: A New Multi-Modal Data Augmentation ☆126 · Updated 2 years ago
- [NeurIPS 2023] Text data, code and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆286 · Updated last year
- Code for TCL: Vision-Language Pre-Training with Triple Contrastive Learning, CVPR 2022 ☆265 · Updated 11 months ago
- ☆59 · Updated 2 years ago
- A summary of Video-to-Text datasets. This repository is part of the review paper *Bridging Vision and Language from the Video-to-Text Pe…* ☆127 · Updated last year
- [CVPR 2023 & IJCV 2025] Positive-Augmented Contrastive Learning for Image and Video Captioning Evaluation ☆64 · Updated last month
- [CVPR 2023] A cascaded diffusion captioning model with a novel semantic-conditional diffusion process that upgrades conventional diffusion m… ☆64 · Updated last year
- A curated list of vision-and-language pre-training (VLP) :-) ☆59 · Updated 3 years ago
- A survey of multimodal learning research ☆330 · Updated 2 years ago
- ☆62 · Updated 2 years ago
- An official implementation of "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval" ☆172 · Updated last year
- [ICLR 2023] Official code repository for "Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning" ☆59 · Updated 2 years ago
- Easy-to-use, efficient code for extracting OpenAI CLIP (Global/Grid) features from images and text ☆133 · Updated 8 months ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆206 · Updated 2 years ago
- GRIT: Faster and Better Image-captioning Transformer (ECCV 2022) ☆195 · Updated 2 years ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆176 · Updated 2 months ago
- [ACL 2023] MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning ☆35 · Updated last year
- The official implementation of "Align and Attend: Multimodal Summarization with Dual Contrastive Losses" (CVPR 2023) ☆79 · Updated 2 years ago
- Official implementation of "Everything at Once - Multi-modal Fusion Transformer for Video Retrieval" (CVPR 2022) ☆111 · Updated 3 years ago
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆64 · Updated last year
- [ICLR 2023] DeCap: Decoding CLIP Latents for Zero-shot Captioning ☆137 · Updated 2 years ago
- Code for studying OpenAI's CLIP explainability ☆34 · Updated 3 years ago
- [CVPR 2023] Official repository of the paper "Fine-tuned CLIP models are efficient video learners" ☆290 · Updated last year