arampacha / CLIP-rsicd
☆230 · Updated last month
Alternatives and similar repositories for CLIP-rsicd
Users interested in CLIP-rsicd are comparing it to the repositories listed below.
- Datasets for remote sensing images (Paper: Exploring Models and Data for Remote Sensing Image Caption Generation) ☆208 · Updated 3 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆402 · Updated last year
- CapDec: SOTA Zero-Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (Findings) ☆199 · Updated last year
- RS5M: a large-scale vision-language dataset for remote sensing [TGRS] ☆281 · Updated 6 months ago
- A list of awesome remote sensing image captioning resources ☆117 · Updated this week
- 🛰️ Official repository of the paper "RemoteCLIP: A Vision Language Foundation Model for Remote Sensing" (IEEE TGRS) ☆446 · Updated last year
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆790 · Updated last year
- Collection of Remote Sensing Vision-Language Models ☆141 · Updated last year
- Robust fine-tuning of zero-shot models ☆740 · Updated 3 years ago
- [Pattern Recognition 25] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆440 · Updated 6 months ago
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) ☆458 · Updated 3 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆666 · Updated 3 years ago
- Official implementation of the ICCV 2023 paper "Keep It SimPool: Who Said Supervised Transformers Suffer from Attentio…" ☆99 · Updated last year
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆278 · Updated 3 years ago
- Official code repository for the NeurIPS 2022 paper "SatMAE: Pretraining Transformers for Temporal and Multi-Spectral Satellite Imagery" ☆206 · Updated last month
- Implementation code for "Exploiting Multiple Sequence Lengths in Fast End-to-End Training for Image Captioning" ☆93 · Updated 8 months ago
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆221 · Updated 2 years ago
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision (CVPR 2022) ☆771 · Updated 3 years ago
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆173 · Updated 3 years ago
- [ACM TOMM 2023] Composed Image Retrieval using Contrastive Learning and Task-oriented CLIP-based Features ☆185 · Updated 2 years ago
- Awesome-Remote-Sensing-Vision-Language-Models ☆181 · Updated last year
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch ☆712 · Updated 3 years ago
- ☆123 · Updated 7 months ago
- GRIT: Faster and Better Image-Captioning Transformer (ECCV 2022) ☆195 · Updated 2 years ago
- Code and dataset release for Park et al., Robust Change Captioning (ICCV 2019) ☆49 · Updated 2 years ago
- A curated list of awesome vision-and-language resources for earth observation ☆248 · Updated 6 months ago
- [ECCV'22] Official repository of the paper "Class-agnostic Object Detection with Multi-modal Transformer" ☆313 · Updated 2 years ago
- Official repo for "SkyScript: A Large and Semantically Diverse Vision-Language Dataset for Remote Sensing" ☆183 · Updated 9 months ago
- Official PyTorch implementation and benchmark dataset for the IGARSS 2024 oral paper "Composed Image Retrieval for Remote Sensing" ☆79 · Updated 9 months ago
- Make your models invariant to changes in scale ☆155 · Updated last year