arampacha / CLIP-rsicd
☆229 · Updated 3 weeks ago
Alternatives and similar repositories for CLIP-rsicd
Users interested in CLIP-rsicd are comparing it to the repositories listed below.
- Datasets for remote sensing images (Paper: Exploring Models and Data for Remote Sensing Image Caption Generation) ☆208 · Updated 3 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆402 · Updated last year
- RS5M: a large-scale vision-language dataset for remote sensing [TGRS] ☆277 · Updated 5 months ago
- A list of awesome remote sensing image captioning resources ☆114 · Updated last week
- 🛰️ Official repository of paper "RemoteCLIP: A Vision Language Foundation Model for Remote Sensing" (IEEE TGRS) ☆439 · Updated last year
- [Pattern Recognition 25] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆438 · Updated 6 months ago
- Collection of Remote Sensing Vision-Language Models ☆139 · Updated last year
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆220 · Updated 2 years ago
- CapDec: SOTA Zero-Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (Findings) ☆198 · Updated last year
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆784 · Updated last year
- Implementation code of the work "Exploiting Multiple Sequence Lengths in Fast End to End Training for Image Captioning" ☆93 · Updated 8 months ago
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆278 · Updated 2 years ago
- [NeurIPS 2022] Official repository of paper titled "Bridging the Gap between Object and Image-level Representations for Open-Vocabulary … ☆294 · Updated 2 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆666 · Updated 2 years ago
- ☆121 · Updated 7 months ago
- ☆57 · Updated last year
- ☆46 · Updated 4 years ago
- This repo contains the official implementation of ICCV 2023 paper "Keep It SimPool: Who Said Supervised Transformers Suffer from Attentio… ☆99 · Updated last year
- [ACM TOMM 2023] Composed Image Retrieval using Contrastive Learning and Task-oriented CLIP-based Features ☆181 · Updated last year
- Official code repository for NeurIPS 2022 paper "SatMAE: Pretraining Transformers for Temporal and Multi-Spectral Satellite Imagery" ☆205 · Updated 3 weeks ago
- Robust fine-tuning of zero-shot models ☆730 · Updated 3 years ago
- Official PyTorch implementation and benchmark dataset for IGARSS 2024 oral paper "Composed Image Retrieval for Remote Sensing" ☆79 · Updated 8 months ago
- Code and dataset release for Park et al., Robust Change Captioning (ICCV 2019) ☆48 · Updated 2 years ago
- [ECCV'22] Official repository of paper titled "Class-agnostic Object Detection with Multi-modal Transformer" ☆312 · Updated 2 years ago
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners ☆375 · Updated 2 years ago
- Official repo for "SkyScript: A Large and Semantically Diverse Vision-Language Dataset for Remote Sensing" ☆180 · Updated 8 months ago
- GRIT: Faster and Better Image-captioning Transformer (ECCV 2022) ☆196 · Updated 2 years ago
- Make your models invariant to changes in scale ☆153 · Updated last year
- PyTorch code for hierarchical k-means, a data curation method for self-supervised learning ☆178 · Updated last year
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆172 · Updated 3 years ago
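The common thread across CLIP-rsicd and most of the repositories above is CLIP-style zero-shot classification: embed an image and a set of text prompts in a shared space, then softmax over their cosine similarities. Below is a minimal numpy sketch of just that scoring step, with random stand-in embeddings (a real model such as CLIP-rsicd or RemoteCLIP would produce the embeddings; the prompt strings and `temperature` value here are illustrative assumptions, not from any of the listed repos).

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs, temperature=100.0):
    """CLIP-style zero-shot scoring: cosine similarity between one
    image embedding and per-class text embeddings, scaled by a
    temperature and softmaxed into class probabilities."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = temperature * (txt @ img)          # one logit per class prompt
    exp = np.exp(logits - logits.max())         # numerically stable softmax
    return exp / exp.sum()

# Toy inputs: random embeddings stand in for real encoder outputs.
rng = np.random.default_rng(0)
classes = ["an aerial photo of a runway", "an aerial photo of a forest"]
image_emb = rng.normal(size=512)
text_embs = rng.normal(size=(len(classes), 512))

probs = zero_shot_scores(image_emb, text_embs)
print(dict(zip(classes, probs.round(3))))
```

Fine-tuned variants like CLIP-rsicd keep this inference path unchanged; they only retrain the encoders on remote-sensing image-caption pairs so the similarities become meaningful for aerial imagery.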