arampacha / CLIP-rsicd
☆233 · Updated 2 months ago
Alternatives and similar repositories for CLIP-rsicd
Users interested in CLIP-rsicd are comparing it to the repositories listed below.
- Datasets for remote sensing images (Paper: "Exploring Models and Data for Remote Sensing Image Caption Generation") ☆212 · Updated 3 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆402 · Updated last year
- RS5M: a large-scale vision-language dataset for remote sensing [TGRS] ☆286 · Updated 7 months ago
- A list of awesome remote sensing image captioning resources ☆116 · Updated 3 weeks ago
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆279 · Updated 3 years ago
- CapDec: SOTA zero-shot image captioning using CLIP and GPT-2, EMNLP 2022 (Findings) ☆201 · Updated last year
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆223 · Updated 2 years ago
- 🛰️ Official repository of the paper "RemoteCLIP: A Vision Language Foundation Model for Remote Sensing" (IEEE TGRS) ☆462 · Updated last year
- Collection of remote sensing vision-language models ☆141 · Updated last year
- Implementation code for "Exploiting Multiple Sequence Lengths in Fast End to End Training for Image Captioning" ☆92 · Updated 10 months ago
- Official code repository for the NeurIPS 2022 paper "SatMAE: Pretraining Transformers for Temporal and Multi-Spectral Satellite Imagery" ☆214 · Updated 2 months ago
- Official repo for "SkyScript: A Large and Semantically Diverse Vision-Language Dataset for Remote Sensing" ☆187 · Updated 10 months ago
- [Pattern Recognition 2025] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆447 · Updated 8 months ago
- ☆48 · Updated 4 years ago
- Code and dataset release for Park et al., "Robust Change Captioning" (ICCV 2019) ☆50 · Updated 2 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆667 · Updated 3 years ago
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch ☆714 · Updated 3 years ago
- Robust fine-tuning of zero-shot models ☆744 · Updated 3 years ago
- Official implementation of the ICCV 2023 paper "Keep It SimPool: Who Said Supervised Transformers Suffer from Attentio…" ☆99 · Updated last year
- ☆123 · Updated 9 months ago
- Awesome-Remote-Sensing-Vision-Language-Models ☆185 · Updated last year
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆793 · Updated last year
- ☆60 · Updated last year
- Fine-tuning the OpenAI CLIP model for image search on medical images ☆76 · Updated 3 years ago
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆175 · Updated 3 years ago
- Official code for TEOChat, the first vision-language assistant for temporal earth observation data (ICLR 2025) ☆122 · Updated 5 months ago
- [ACM TOMM 2023] Composed Image Retrieval using Contrastive Learning and Task-oriented CLIP-based Features ☆187 · Updated 2 years ago
- Dataset for the IEEE TGRS paper "Mutual Attention Inception Network for Remote Sensing Visual Question Answering" ☆22 · Updated 2 years ago
- Make your models invariant to changes in scale ☆157 · Updated last year
- GRIT: Faster and Better Image-captioning Transformer (ECCV 2022) ☆196 · Updated 2 years ago