arampacha / CLIP-rsicd
☆215 · Updated 3 years ago
Alternatives and similar repositories for CLIP-rsicd:
Users interested in CLIP-rsicd are comparing it to the repositories listed below; a usage sketch for CLIP-rsicd itself follows the list.
- Datasets for remote sensing images (paper: "Exploring Models and Data for Remote Sensing Image Caption Generation") ☆191 · Updated 3 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆391 · Updated last year
- CapDec: SOTA zero-shot image captioning using CLIP and GPT-2, EMNLP 2022 (Findings) ☆190 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆645 · Updated 2 years ago
- RS5M: a large-scale vision-language dataset for remote sensing [TGRS] ☆233 · Updated 4 months ago
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆272 · Updated 2 years ago
- CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆388 · Updated last week
- GRIT: Faster and Better Image-Captioning Transformer (ECCV 2022) ☆188 · Updated last year
- 🛰️ Official repository of the paper "RemoteCLIP: A Vision Language Foundation Model for Remote Sensing" (IEEE TGRS) ☆347 · Updated 7 months ago
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆211 · Updated 2 years ago
- A list of awesome remote sensing image captioning resources ☆97 · Updated this week
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch ☆679 · Updated 2 years ago
- Robust fine-tuning of zero-shot models ☆669 · Updated 2 years ago
- [ACM TOMM 2023] Composed Image Retrieval using Contrastive Learning and Task-oriented CLIP-based Features ☆172 · Updated last year
- [NeurIPS 2023] Official implementation of the paper "An Inverse Scaling Law for CLIP Training" ☆308 · Updated 8 months ago
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆144 · Updated 2 years ago
- [NeurIPS 2023] Text data, code, and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆266 · Updated last year
- Probing the representations of Vision Transformers ☆320 · Updated 2 years ago
- ☆50 · Updated 7 months ago
- ☆117 · Updated 2 years ago
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022) ☆241 · Updated 2 years ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆159 · Updated last year
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆134 · Updated last year
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" (https://arxiv.org/abs/2107.06383) ☆410 · Updated 2 years ago
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆160 · Updated 2 years ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆204 · Updated 2 years ago
- Code and dataset release for Park et al., "Robust Change Captioning" (ICCV 2019) ☆48 · Updated 2 years ago
- Collection of Remote Sensing Vision-Language Models ☆128 · Updated 9 months ago
- Conceptual 12M: a dataset of (image URL, caption) pairs collected for vision-and-language pre-training ☆380 · Updated 2 years ago
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆465 · Updated 2 years ago
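
For context on the anchor repo: CLIP-rsicd fine-tunes CLIP on the RSICD remote-sensing caption dataset, and the resulting checkpoints can be loaded with the standard Hugging Face `transformers` CLIP classes. The sketch below shows zero-shot classification of an aerial image; it is a minimal example under stated assumptions, not code from the repo itself — the checkpoint id `flax-community/clip-rsicd-v2` and the local file `aerial_scene.jpg` are assumptions.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumed checkpoint id for the fine-tuned model; substitute the
# one published by the CLIP-rsicd project if it differs.
model_id = "flax-community/clip-rsicd-v2"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Hypothetical local remote-sensing image.
image = Image.open("aerial_scene.jpg")
labels = [
    "a photo of an airport",
    "a photo of a forest",
    "a photo of a harbor",
]

# Encode image and candidate captions jointly, then rank captions
# by image-text similarity (standard CLIP zero-shot setup).
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

The same pattern works for image retrieval: encode a text query once with `model.get_text_features` and compare it against precomputed `model.get_image_features` embeddings for a collection of scenes.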