[Pattern Recognition 25] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks
☆465 · updated Mar 1, 2025
Alternatives and similar repositories for CLIP_Surgery
Users interested in CLIP_Surgery are comparing it to the repositories listed below.
- Official PyTorch implementation of "Extract Free Dense Labels from CLIP" (ECCV 2022 Oral) · ☆471 · updated Sep 19, 2022
- Official implementation of SCLIP: Rethinking Self-Attention for Dense Vision-Language Inference · ☆182 · updated Oct 10, 2024
- [CVPR 2023] CLIP is Also an Efficient Segmenter: A Text-Driven Approach for Weakly Supervised Semantic Segmentation · ☆210 · updated Sep 16, 2024
- [ICCV 2023] CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No · ☆141 · updated Dec 2, 2023
- A curated list of publications and resources on open-vocabulary semantic segmentation and related areas (e.g., zero-shot semantic segmentation) · ☆831 · updated Jan 20, 2026
- Open-vocabulary Semantic Segmentation · ☆374 · updated Oct 16, 2024
- [ECCV 2024] ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference · ☆97 · updated Mar 26, 2025
- [ECCV 2024] ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation · ☆111 · updated Mar 26, 2025
- Official implementation of TagAlign · ☆35 · updated Dec 11, 2024
- [TPAMI 2024] A Survey on Open Vocabulary Learning · ☆990 · updated Dec 24, 2025
- [NeurIPS 2023] Code for "Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convoluti…" · ☆337 · updated Feb 5, 2024
- Open-vocabulary Semantic Segmentation · ☆184 · updated Mar 28, 2023
- Experiment combining CLIP with SAM for open-vocabulary image segmentation · ☆386 · updated Apr 5, 2023
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training · ☆108 · updated Jan 9, 2024
- Official implementation of "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition" · ☆259 · updated May 3, 2024
- CLIP-AD: an upgraded version of the zero-shot anomaly detection method proposed for the VAND challenge · ☆41 · updated Mar 5, 2024
- PyTorch implementation of the ICML 2023 paper "SegCLIP: Patch Aggregation with Learnable Centers for Open-Vocabulary Semantic Segmentation" · ☆100 · updated Jun 28, 2023
- Official PyTorch implementation of "Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP" · ☆752 · updated Oct 17, 2023
- Official implementation of "CLIP-DINOiser: Teaching CLIP a few DINO tricks" · ☆275 · updated Oct 26, 2024
- [ECCV 2024] Official code of "Open-Vocabulary SAM" · ☆1,028 · updated Aug 4, 2025
- [ICCV 2023] Code for "Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement" · ☆148 · updated Apr 21, 2024
- [ICLR 2024 Spotlight] Code release of "CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction" · ☆201 · updated Feb 5, 2024
- An open-source implementation of CLIP · ☆13,430 · updated this week
- [TIP 2025] Self-Calibrated CLIP for Training-Free Open-Vocabulary Segmentation · ☆58 · updated Dec 22, 2025
- [NeurIPS 2023] Code release for "Hierarchical Open-vocabulary Universal Image Segmentation" · ☆293 · updated Jun 19, 2025
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want · ☆869 · updated Jul 20, 2025
- [CVPR 2023 Highlight] Official PyTorch implementation of "ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models" · ☆935 · updated Jul 6, 2024
- Official implementation of "CAT-Seg🐱: Cost Aggregation for Open-Vocabulary Semantic Segmentation" · ☆362 · updated Apr 11, 2024
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners · ☆381 · updated Jun 1, 2023
- Prompt Learning for Vision-Language Models (IJCV 2022, CVPR 2022) · ☆2,179 · updated May 20, 2024
- [CVPR 2024] Official implementation of GEM (Grounding Everything Module) · ☆137 · updated Apr 10, 2025
- [CVPR 2024 Highlight] Official repository of "The devil is in the fine-grained details: Evaluating open-vocabulary object detec…" · ☆66 · updated Apr 4, 2025
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet · ☆224 · updated Dec 16, 2022
- [ICCV 2023 Main Track, WECIA 2023 Oral] Official repository of "Self-regulating Prompts: Foundational Model Adaptation without F…" · ☆285 · updated Sep 28, 2023
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 · ☆1,811 · updated Nov 27, 2025
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" · ☆807 · updated Mar 20, 2024
- [ECCV 2024] Official PyTorch implementation of "In Defense of Lazy Visual Grounding for Open-Vocabulary Semantic Segmentation" · ☆49 · updated Sep 24, 2024
- [ICCV 2025] Harnessing CLIP, DINO and SAM for Open Vocabulary Segmentation · ☆107 · updated Nov 22, 2025