arampacha / CLIP-rsicd
☆233 · Updated 3 months ago
Alternatives and similar repositories for CLIP-rsicd
Users interested in CLIP-rsicd are comparing it to the repositories listed below.
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆402 · Updated 2 years ago
- Datasets for remote sensing images (Paper: Exploring Models and Data for Remote Sensing Image Caption Generation) ☆214 · Updated 3 years ago
- RS5M: a large-scale vision-language dataset for remote sensing [TGRS] ☆288 · Updated 8 months ago
- CapDec: SOTA Zero-Shot Image Captioning Using CLIP and GPT-2, EMNLP 2022 (Findings) ☆202 · Updated last year
- A list of awesome remote sensing image captioning resources ☆117 · Updated last month
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆279 · Updated 3 years ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆797 · Updated last year
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch ☆716 · Updated 3 years ago
- Implementation code of the work "Exploiting Multiple Sequence Lengths in Fast End to End Training for Image Captioning" ☆94 · Updated 10 months ago
- [Pattern Recognition 25] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆447 · Updated 8 months ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆669 · Updated 3 years ago
- ☆61 · Updated last year
- 🛰️ Official repository of the paper "RemoteCLIP: A Vision Language Foundation Model for Remote Sensing" (IEEE TGRS) ☆479 · Updated last year
- GRIT: Faster and Better Image-Captioning Transformer (ECCV 2022) ☆197 · Updated 2 years ago
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022 ☆776 · Updated 3 years ago
- Collection of Remote Sensing Vision-Language Models ☆142 · Updated last year
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022) ☆246 · Updated 5 months ago
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆223 · Updated 2 years ago
- [NeurIPS 2022] Official repository of the paper "Bridging the Gap between Object and Image-level Representations for Open-Vocabulary …" ☆296 · Updated 3 years ago
- Code and dataset release for Park et al., Robust Change Captioning (ICCV 2019) ☆50 · Updated 2 years ago
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆175 · Updated 3 years ago
- [ACM TOMM 2023] Composed Image Retrieval using Contrastive Learning and Task-oriented CLIP-based Features ☆188 · Updated 2 years ago
- Robust fine-tuning of zero-shot models ☆750 · Updated 3 years ago
- Official code repository for the NeurIPS 2022 paper "SatMAE: Pretraining Transformers for Temporal and Multi-Spectral Satellite Imagery" ☆219 · Updated 3 months ago
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) ☆463 · Updated 3 years ago
- PyTorch code for hierarchical k-means, a data curation method for self-supervised learning ☆219 · Updated last year
- Official repository for the LENS (Large Language Models Enhanced to See) system ☆357 · Updated 4 months ago
- [ECCV 2022] Official repository of the paper "Class-agnostic Object Detection with Multi-modal Transformer" ☆314 · Updated 2 years ago
- Code release for SLIP: Self-supervision Meets Language-Image Pre-training ☆783 · Updated 2 years ago
- CLIP Object Detection: search for objects in images using natural language #ZeroShot #Unsupervised #CLIP #ODS ☆140 · Updated 3 years ago
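Most of the repositories above build on the same CLIP-style zero-shot scoring step: cosine similarity between L2-normalized image and text embeddings, softmaxed over a set of text prompts. A minimal sketch of that step, with random vectors standing in for real encoder outputs (no actual CLIP model is loaded here):

```python
# Sketch of CLIP-style zero-shot scoring: cosine similarity between
# L2-normalized image/text embeddings, softmax over the text prompts.
# Random vectors are hypothetical stand-ins for real encoder outputs.
import numpy as np

def zero_shot_scores(image_emb, text_embs, temperature=100.0):
    """Return a probability distribution over text prompts for one image."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = temperature * (txt @ img)    # scaled cosine similarities
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)          # stand-in for an image encoder output
text_embs = rng.normal(size=(3, 512))     # stand-in for 3 prompt embeddings
probs = zero_shot_scores(image_emb, text_embs)
```

In real use, `image_emb` and `text_embs` would come from a pretrained image/text encoder pair (e.g. the CLIP checkpoints the repos above fine-tune for remote sensing), and the highest-probability prompt is taken as the zero-shot prediction.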