YulongBonjour / SimVLM
SimVLM: Simple Visual Language Model Pretraining with Weak Supervision
☆36 · Updated 2 years ago
Alternatives and similar repositories for SimVLM
Users interested in SimVLM are comparing it to the repositories listed below.
- A PyTorch implementation of "Multimodal Few-Shot Learning with Frozen Language Models", using OPT. ☆43 · Updated 3 years ago
- Code for "TCL: Vision-Language Pre-Training with Triple Contrastive Learning" (CVPR 2022) ☆266 · Updated last year
- ☆46 · Updated 3 years ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆207 · Updated 2 years ago
- MixGen: A New Multi-Modal Data Augmentation ☆127 · Updated 2 years ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆91 · Updated 2 years ago
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) ☆165 · Updated last year
- ☆106 · Updated 3 years ago
- Natural language guided image captioning ☆85 · Updated last year
- This repo contains code and instructions for baselines in the VLUE benchmark. ☆41 · Updated 3 years ago
- Official implementation of "ConZIC: Controllable Zero-shot Image Captioning by Sampling-Based Polishing" ☆74 · Updated 2 years ago
- Implementation of the paper https://arxiv.org/abs/2210.04559 ☆55 · Updated 3 years ago
- (ACL 2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning ☆35 · Updated last year
- CapDec: SOTA Zero-Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (Findings) ☆198 · Updated last year
- Colorful Prompt Tuning for Pre-trained Vision-Language Models ☆49 · Updated 2 years ago
- ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration ☆56 · Updated 2 years ago
- Official implementation of our EMNLP 2022 paper "CPL: Counterfactual Prompt Learning for Vision and Language Models" ☆34 · Updated 2 years ago
- A curated list of vision-and-language pre-training (VLP). :-) ☆59 · Updated 3 years ago
- ☆84 · Updated 2 years ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆135 · Updated 2 years ago
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆187 · Updated 5 months ago
- CLIP4IDC: CLIP for Image Difference Captioning (AACL 2022) ☆36 · Updated 2 years ago
- An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA, AAAI 2022 (Oral) ☆85 · Updated 3 years ago
- ☆22 · Updated 3 years ago
- Source code for EMNLP 2022 paper “PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models” ☆48 · Updated 2 years ago
- Code for "Multitask Vision-Language Prompt Tuning" https://arxiv.org/abs/2211.11720 ☆57 · Updated last year
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆63 · Updated last year
- Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone ☆129 · Updated 2 years ago
- Recent Advances in Visual Dialog ☆30 · Updated 3 years ago
- Code for EMNLP 2022 paper “Distilled Dual-Encoder Model for Vision-Language Understanding” ☆32 · Updated 2 years ago