arampacha / CLIP-rsicd
☆225 · Updated 3 years ago
Alternatives and similar repositories for CLIP-rsicd
Users interested in CLIP-rsicd are comparing it with the repositories listed below.
- Datasets for remote sensing images (Paper: Exploring Models and Data for Remote Sensing Image Caption Generation) ☆202 · Updated 3 years ago
- [NeurIPS 2022] Official repository of the paper "Bridging the Gap between Object and Image-level Representations for Open-Vocabulary …" ☆292 · Updated 2 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆400 · Updated last year
- RS5M: a large-scale vision-language dataset for remote sensing [TGRS] ☆262 · Updated 3 months ago
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆218 · Updated 2 years ago
- CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆425 · Updated 3 months ago
- A list of awesome remote sensing image captioning resources ☆110 · Updated last week
- [NeurIPS 2023] Official implementation of the paper "An Inverse Scaling Law for CLIP Training" ☆315 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆657 · Updated 2 years ago
- CapDec: SOTA Zero-Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (Findings) ☆197 · Updated last year
- Robust fine-tuning of zero-shot models ☆717 · Updated 3 years ago
- 🛰️ Official repository of the paper "RemoteCLIP: A Vision Language Foundation Model for Remote Sensing" (IEEE TGRS) ☆415 · Updated last year
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆276 · Updated 2 years ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆167 · Updated last week
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch ☆698 · Updated 3 years ago
- [ECCV'22] Official repository of the paper "Class-agnostic Object Detection with Multi-modal Transformer" ☆311 · Updated 2 years ago
- Collection of Remote Sensing Vision-Language Models ☆137 · Updated last year
- [ACM TOMM 2023] Composed Image Retrieval using Contrastive Learning and Task-oriented CLIP-based Features ☆177 · Updated last year
- Code release for "SLIP: Self-supervision Meets Language-Image Pre-training" ☆769 · Updated 2 years ago
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022) ☆242 · Updated 2 weeks ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆773 · Updated last year
- ☆120 · Updated 2 years ago
- ☆624 · Updated last year
- GRIT: Faster and Better Image-captioning Transformer (ECCV 2022) ☆193 · Updated 2 years ago
- A curated list of awesome vision and language resources for earth observation ☆230 · Updated 3 months ago
- An official PyTorch implementation of the CRIS paper ☆273 · Updated last year
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆168 · Updated 2 years ago
- Conceptual 12M: a dataset of (image-URL, caption) pairs collected for vision-and-language pre-training ☆394 · Updated 2 years ago
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022 ☆765 · Updated 3 years ago
- Code for the CVPR'23 tutorial "All Things ViTs: Understanding and Interpreting Attention in Vision" ☆193 · Updated 2 years ago