b-hahn / CLIP
Finetuning CLIP for Few Shot Learning
☆46 Updated 4 years ago
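Since the repository's focus is fine-tuning CLIP for few-shot learning, the sketch below illustrates one common approach: training a small linear probe on frozen CLIP image features from a handful of labeled examples. It assumes the OpenAI `clip` package and PyTorch; the class count, optimizer settings, and `train_step` helper are illustrative choices, not this repository's actual code.

```python
# Minimal sketch of few-shot CLIP adaptation via a linear probe on frozen
# image features. Assumes the OpenAI `clip` package and PyTorch; the task
# size, optimizer settings, and helper below are illustrative only.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)  # frozen CLIP backbone

num_classes = 10  # hypothetical few-shot task size
probe = torch.nn.Linear(model.visual.output_dim, num_classes).to(device)
optimizer = torch.optim.AdamW(probe.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient step on the linear probe; CLIP weights stay frozen."""
    with torch.no_grad():
        feats = model.encode_image(images.to(device)).float()
        feats = feats / feats.norm(dim=-1, keepdim=True)  # L2-normalize, as CLIP does
    logits = probe(feats)
    loss = criterion(logits, labels.to(device))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Batches passed to `train_step` are expected to be tensors produced by the returned `preprocess` transform; many of the repositories listed below replace the linear probe with prompt tuning or adapter modules instead.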
Alternatives and similar repositories for CLIP
Users interested in CLIP are comparing it to the repositories listed below.
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training ☆106 Updated 2 years ago
- Source code of the paper Fine-Grained Visual Classification via Internal Ensemble Learning Transformer ☆55 Updated last year
- PyTorch implementation of ICML 2023 paper "SegCLIP: Patch Aggregation with Learnable Centers for Open-Vocabulary Semantic Segmentation" ☆99 Updated 2 years ago
- Official implementation of "Open-Vocabulary Multi-Label Classification via Multi-Modal Knowledge Transfer" ☆130 Updated last year
- A DETR-style framework for open-vocabulary detection (OVD). CVPR 2023 ☆198 Updated 2 years ago
- [CVPR 2023] CLIP is Also an Efficient Segmenter: A Text-Driven Approach for Weakly Supervised Semantic Segmentation ☆210 Updated last year
- ICCV 2023: CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No ☆141 Updated 2 years ago
- [ICCV 2023] Code for "Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement" ☆148 Updated last year
- The official implementation of the paper "Inter-Instance Similarity Modeling for Contrastive Learning" ☆117 Updated last year
- Few-shot Object Counting and Detection (ECCV 2022) ☆83 Updated last year
- [TMM 2023] Self-paced Curriculum Adapting of CLIP for Visual Grounding ☆132 Updated 2 months ago
- PyTorch implementation of "Fine-grained Visual Classification with High-temperature Refinement and Background Suppression" ☆114 Updated 2 years ago
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆224 Updated 3 years ago
- [Pattern Recognition 25] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆461 Updated 11 months ago
- Official repo for our ICML 23 paper: "Multi-Modal Classifiers for Open-Vocabulary Object Detection" ☆95 Updated 2 years ago
- Official implementation for the paper "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition" ☆259 Updated last year
- Code for studying OpenAI's CLIP explainability ☆38 Updated 4 years ago
- [CVPR 2024 Highlight] Official repository of the paper "The devil is in the fine-grained details: Evaluating open-vocabulary object detec… ☆66 Updated 10 months ago
- [ICCV 2023] ALIP: Adaptive Language-Image Pre-training with Synthetic Caption ☆104 Updated 2 years ago
- Awesome List of Vision Language Prompt Papers ☆46 Updated 2 years ago
- PA-SAM: Prompt Adapter SAM for High-quality Image Segmentation ☆97 Updated last year
- Implementation for "DualCoOp: Fast Adaptation to Multi-Label Recognition with Limited Annotations" (NeurIPS 2022) ☆71 Updated 2 years ago
- [CVPR 2024] Generative Region-Language Pretraining for Open-Ended Object Detection ☆190 Updated 10 months ago
- ☆83 Updated 2 years ago
- Code for "DAMEX: Dataset-aware Mixture-of-Experts for visual understanding of mixture-of-datasets", accepted at NeurIPS 2023 (Main confer… ☆27 Updated last year
- ☆267 Updated 3 years ago
- [ACM MM23] CLIP-Count: Towards Text-Guided Zero-Shot Object Counting ☆123 Updated last year
- CVPR 2024 ☆104 Updated 10 months ago
- GroundVLP: Harnessing Zero-shot Visual Grounding from Vision-Language Pre-training and Open-Vocabulary Object Detection (AAAI 2024) ☆72 Updated 2 years ago
- This repository lists some awesome public projects about Zero-shot/Few-shot Learning based on CLIP (Contrastive Language-Image Pre-Traini… ☆27 Updated last year