arampacha / CLIP-rsicd
☆235 Updated 5 months ago
Alternatives and similar repositories for CLIP-rsicd
Users interested in CLIP-rsicd are comparing it to the repositories listed below.
- Datasets for remote sensing images (Paper: Exploring Models and Data for Remote Sensing Image Caption Generation) ☆222 Updated 4 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆406 Updated 2 years ago
- RS5M: a large-scale vision-language dataset for remote sensing [TGRS] ☆297 Updated 10 months ago
- A list of awesome remote sensing image captioning resources ☆117 Updated last week
- Collection of Remote Sensing Vision-Language Models ☆142 Updated last year
- CapDec: SOTA zero-shot image captioning using CLIP and GPT-2, EMNLP 2022 (Findings) ☆202 Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆672 Updated 3 years ago
- 🛰️ Official repository of the paper "RemoteCLIP: A Vision Language Foundation Model for Remote Sensing" (IEEE TGRS) ☆509 Updated last year
- Awesome-Remote-Sensing-Vision-Language-Models ☆190 Updated last year
- [Pattern Recognition 2025] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆460 Updated 10 months ago
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆279 Updated 3 years ago
- [ACM TOMM 2023] Composed Image Retrieval using Contrastive Learning and Task-oriented CLIP-based Features ☆190 Updated 2 years ago
- Official code repository for the NeurIPS 2022 paper "SatMAE: Pretraining Transformers for Temporal and Multi-Spectral Satellite Imagery" ☆226 Updated 5 months ago
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆224 Updated 3 years ago
- Implementation code for "Exploiting Multiple Sequence Lengths in Fast End-to-End Training for Image Captioning" ☆94 Updated last year
- Code and dataset release for Park et al., "Robust Change Captioning" (ICCV 2019) ☆50 Updated 3 years ago
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022) ☆246 Updated 7 months ago
- Official implementation of the ICCV 2023 paper "Keep It SimPool: Who Said Supervised Transformers Suffer from Attentio…" ☆99 Updated 2 years ago
- Official repo for "SkyScript: A Large and Semantically Diverse Vision-Language Dataset for Remote Sensing" ☆194 Updated last year
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch ☆719 Updated 3 years ago
- ☆129 Updated last year
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) ☆462 Updated 3 years ago
- Robust fine-tuning of zero-shot models ☆758 Updated 3 years ago
- The first research on semantic localization ☆29 Updated 2 years ago
- Official PyTorch implementation of "GroupViT: Semantic Segmentation Emerges from Text Supervision" (CVPR 2022) ☆780 Updated 3 years ago
- GRIT: Faster and Better Image Captioning Transformer (ECCV 2022) ☆198 Updated 2 years ago
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆177 Updated 3 years ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆805 Updated last year
- Make your models invariant to changes in scale ☆157 Updated last year
- Changes to Captions: An Attentive Network for Remote Sensing Change Captioning ☆78 Updated 2 years ago