[Pattern Recognition 25] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks
☆469 · Updated Mar 1, 2025
Alternatives and similar repositories for CLIP_Surgery
Users interested in CLIP_Surgery are comparing it to the repositories listed below.
- [ECCV 2022 Oral] Official PyTorch implementation of "Extract Free Dense Labels from CLIP" (☆471, updated Sep 19, 2022)
- CLIP-AD is an upgraded version of the zero-shot anomaly detection method we proposed for the VAND challenge (☆46, updated Mar 5, 2024)
- [ICCV 2023] CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No (☆142, updated Dec 2, 2023)
- Official implementation of SCLIP: Rethinking Self-Attention for Dense Vision-Language Inference (☆183, updated Oct 10, 2024)
- [CVPR 2023] CLIP is Also an Efficient Segmenter: A Text-Driven Approach for Weakly Supervised Semantic Segmentation (☆213, updated Sep 16, 2024)
- A curated list of publications and resources on open-vocabulary semantic segmentation and related areas (e.g. zero-shot semantic segmentation) (☆847, updated Jan 20, 2026)
- Open-vocabulary Semantic Segmentation (☆377, updated Oct 16, 2024)
- [ECCV 2024] ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation (☆112, updated Mar 26, 2025)
- Experiment on combining CLIP with SAM to do open-vocabulary image segmentation (☆387, updated Apr 5, 2023)
- Open-vocabulary Semantic Segmentation (☆185, updated Mar 28, 2023)
- Official implementation of TagAlign (☆37, updated Dec 11, 2024)
- [ECCV 2024] ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference (☆98, updated Mar 26, 2025)
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training (☆110, updated Jan 9, 2024)
- [TPAMI 2024] A Survey on Open Vocabulary Learning (☆998, updated Dec 24, 2025)
- [NeurIPS 2023] Code for the paper "Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convoluti…" (☆339, updated Feb 5, 2024)
- PyTorch implementation of the ICML 2023 paper "SegCLIP: Patch Aggregation with Learnable Centers for Open-Vocabulary Semantic Segmentation" (☆101, updated Jun 28, 2023)
- Official PyTorch implementation of the paper "Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP" (☆753, updated Oct 17, 2023)
- [CVPR 2024] Official implementation of GEM (Grounding Everything Module) (☆138, updated Apr 10, 2025)
- [ECCV 2024] Official code of the paper "Open-Vocabulary SAM" (☆1,030, updated Aug 4, 2025)
- Official implementation of the paper "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition" (☆259, updated May 3, 2024)
- [CVPR 2023 Highlight] Official PyTorch implementation of ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models (☆934, updated Jul 6, 2024)
- Official implementation of "CAT-Seg🐱: Cost Aggregation for Open-Vocabulary Semantic Segmentation" (☆365, updated Apr 11, 2024)
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners (☆381, updated Jun 1, 2023)
- An open-source implementation of CLIP (☆13,528, updated Mar 12, 2026)
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want (☆869, updated Jul 20, 2025)
- Official implementation of the paper "CLIP-DINOiser: Teaching CLIP a few DINO tricks" (☆277, updated Oct 26, 2024)
- [ICLR 2025] MedRegA: Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks (☆45, updated Oct 18, 2025)
- [IJCV 2022, CVPR 2022] Prompt Learning for Vision-Language Models (☆2,190, updated May 20, 2024)
- [TIP 2025] Self-Calibrated CLIP for Training-Free Open-Vocabulary Segmentation (☆66, updated Dec 22, 2025)
- [ICCV 2023] Code for "Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement" (☆149, updated Apr 21, 2024)
- [ICLR 2024 Spotlight] Code release of CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction (☆201, updated Feb 5, 2024)
- [ICCV 2023 Main Track, WECIA 2023 Oral] Official repository of the paper "Self-regulating Prompts: Foundational Model Adaptation without F…" (☆286, updated Sep 28, 2023)
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 (☆1,826, updated Nov 27, 2025)
- ☆27, updated Jan 25, 2024
- cliptrase (☆47, updated Sep 1, 2024)
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" (☆808, updated Mar 20, 2024)
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet (☆223, updated Dec 16, 2022)
- [ICCV 2025] Harnessing CLIP, DINO and SAM for Open Vocabulary Segmentation (☆115, updated Nov 22, 2025)
- Grounded Language-Image Pre-training (☆2,585, updated Jan 24, 2024)