arampacha / CLIP-rsicd
☆234 · Updated 6 months ago
Alternatives and similar repositories for CLIP-rsicd
Users interested in CLIP-rsicd are comparing it to the repositories listed below.
- Datasets for remote sensing images (Paper: Exploring Models and Data for Remote Sensing Image Caption Generation) ☆223 · Updated 4 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆405 · Updated 2 years ago
- RS5M: a large-scale vision-language dataset for remote sensing [TGRS] ☆297 · Updated 10 months ago
- A list of awesome remote sensing image captioning resources ☆118 · Updated this week
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆807 · Updated last year
- Collection of Remote Sensing Vision-Language Models ☆142 · Updated last year
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆278 · Updated 3 years ago
- [Pattern Recognition 25] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆461 · Updated 11 months ago
- Supervision Exists Everywhere: A Data-Efficient Contrastive Language-Image Pre-training Paradigm ☆673 · Updated 3 years ago
- 🛰️ Official repository of the paper "RemoteCLIP: A Vision Language Foundation Model for Remote Sensing" (IEEE TGRS) ☆516 · Updated last year
- CapDec: SOTA Zero-Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (Findings) ☆203 · Updated 2 years ago
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) ☆463 · Updated 3 years ago
- Awesome-Remote-Sensing-Vision-Language-Models ☆191 · Updated last year
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch ☆719 · Updated 3 years ago
- Robust fine-tuning of zero-shot models ☆759 · Updated 3 years ago
- ☆130 · Updated last year
- Official repo for "SkyScript: A Large and Semantically Diverse Vision-Language Dataset for Remote Sensing" ☆196 · Updated last year
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022 ☆780 · Updated 3 years ago
- [ACM TOMM 2023] Composed Image Retrieval using Contrastive Learning and Task-oriented CLIP-based Features ☆192 · Updated 2 years ago
- Official code repository for the NeurIPS 2022 paper "SatMAE: Pretraining Transformers for Temporal and Multi-Spectral Satellite Imagery" ☆231 · Updated 5 months ago
- ☆61 · Updated last year
- [NeurIPS 2022] Official repository of the paper "Bridging the Gap between Object and Image-level Representations for Open-Vocabulary …" ☆297 · Updated 3 years ago
- Official repository for the LENS (Large Language Models Enhanced to See) system ☆356 · Updated 6 months ago
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆224 · Updated 3 years ago
- Changes to Captions: An Attentive Network for Remote Sensing Change Captioning ☆78 · Updated 2 years ago
- Official implementation of the ICCV 2023 paper "Keep It SimPool: Who Said Supervised Transformers Suffer from Attentio…" ☆99 · Updated 2 years ago
- Official code for TEOChat, the first vision-language assistant for temporal earth observation data (ICLR 2025) ☆135 · Updated 2 months ago
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆178 · Updated 3 years ago
- Make your models invariant to changes in scale ☆158 · Updated last year
- Code and dataset release for Park et al., Robust Change Captioning (ICCV 2019) ☆50 · Updated 3 years ago