arampacha / CLIP-rsicd
☆225 · Updated 3 years ago
Alternatives and similar repositories for CLIP-rsicd
Users interested in CLIP-rsicd are comparing it to the repositories listed below.
- Datasets for remote sensing images (paper: "Exploring Models and Data for Remote Sensing Image Caption Generation") ☆203 · Updated 3 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆401 · Updated last year
- RS5M: a large-scale vision-language dataset for remote sensing [TGRS] ☆267 · Updated 4 months ago
- [Pattern Recognition 2025] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆432 · Updated 4 months ago
- A list of awesome remote sensing image captioning resources ☆111 · Updated last week
- 🛰️ Official repository of the paper "RemoteCLIP: A Vision Language Foundation Model for Remote Sensing" (IEEE TGRS) ☆423 · Updated last year
- CapDec: SOTA zero-shot image captioning using CLIP and GPT-2, EMNLP 2022 (Findings) ☆197 · Updated last year
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆278 · Updated 2 years ago
- Robust fine-tuning of zero-shot models ☆722 · Updated 3 years ago
- Supervision Exists Everywhere: A Data-Efficient Contrastive Language-Image Pre-training Paradigm ☆660 · Updated 2 years ago
- Collection of Remote Sensing Vision-Language Models ☆138 · Updated last year
- Official implementation of the ICCV 2023 paper "Keep It SimPool: Who Said Supervised Transformers Suffer from Attentio…" ☆99 · Updated last year
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆219 · Updated 2 years ago
- [NeurIPS 2022] Official repository of the paper "Bridging the Gap between Object and Image-level Representations for Open-Vocabulary …" ☆292 · Updated 2 years ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆777 · Updated last year
- Code and dataset release for Park et al., Robust Change Captioning (ICCV 2019) ☆48 · Updated 2 years ago
- [ACM TOMM 2023] Composed Image Retrieval using Contrastive Learning and Task-oriented CLIP-based Features ☆180 · Updated last year
- Official PyTorch implementation and benchmark dataset for the IGARSS 2024 oral paper "Composed Image Retrieval for Remote Sensing" ☆78 · Updated 6 months ago
- Official PyTorch implementation of the WACV 2025 oral paper "Composed Image Retrieval for Training-FREE DOMain Conversion" ☆44 · Updated 3 months ago
- ☆56 · Updated last year
- ☆120 · Updated 5 months ago
- Official repo for "SkyScript: A Large and Semantically Diverse Vision-Language Dataset for Remote Sensing" ☆179 · Updated 7 months ago
- Official code repository for the NeurIPS 2022 paper "SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery" ☆199 · Updated 10 months ago
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆169 · Updated 2 years ago
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022) ☆243 · Updated last month
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch ☆702 · Updated 3 years ago
- An official PyTorch implementation of the CRIS paper ☆273 · Updated last year
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆169 · Updated 3 weeks ago
- Awesome-Remote-Sensing-Vision-Language-Models ☆171 · Updated last year
- Code for the CVPR 2023 tutorial "All Things ViTs: Understanding and Interpreting Attention in Vision" ☆194 · Updated 2 years ago