xmed-lab / CLIP_Surgery
[Pattern Recognition 25] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks
☆447 · Updated 8 months ago
Alternatives and similar repositories for CLIP_Surgery
Users interested in CLIP_Surgery are comparing it to the repositories listed below
- Experiment on combining CLIP with SAM to do open-vocabulary image segmentation (a rough sketch of this CLIP + SAM recipe appears after this list). ☆381 · Updated 2 years ago
- Open-vocabulary Semantic Segmentation ☆363 · Updated last year
- Official PyTorch implementation of "Extract Free Dense Labels from CLIP" (ECCV 22 Oral) ☆464 · Updated 3 years ago
- An official PyTorch implementation of the CRIS paper ☆281 · Updated last year
- Connecting segment-anything's output masks with the CLIP model; Awesome-Segment-Anything-Works ☆202 · Updated last year
- A PyTorch implementation adding new features to Segment-Anything; the features support batch input on the fu… ☆165 · Updated last year
- [CVPR 2023] CLIP is Also an Efficient Segmenter: A Text-Driven Approach for Weakly Supervised Semantic Segmentation ☆205 · Updated last year
- Official implementation of CVPR 2023 ZegCLIP: Towards Adapting CLIP for Zero-shot Semantic Segmentation ☆251 · Updated 2 years ago
- [NeurIPS 2023] This repo contains the code for our paper Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convoluti… ☆331 · Updated last year
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆852 · Updated 3 months ago
- ☆643 · Updated last year
- [NeurIPS 2023] Code release for "Hierarchical Open-vocabulary Universal Image Segmentation" ☆291 · Updated 4 months ago
- Official Implementation of "CAT-Seg🐱: Cost Aggregation for Open-Vocabulary Semantic Segmentation" ☆344 · Updated last year
- [ICLR'24 & IJCV'25] Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching ☆531 · Updated 10 months ago
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" ☆512 · Updated last year
- Official implementation of SCLIP: Rethinking Self-Attention for Dense Vision-Language Inference ☆174 · Updated last year
- ☆554 · Updated 3 years ago
- [CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting ☆540 · Updated 2 years ago
- Downstream-Dino-V2: A GitHub repository featuring an easy-to-use implementation of the DINOv2 model by Facebook for downstream tasks such… ☆263 · Updated 2 years ago
- Holds code for our CVPR'23 tutorial: All Things ViTs: Understanding and Interpreting Attention in Vision. ☆195 · Updated 2 years ago
- A collection of papers about Referring Image Segmentation. ☆781 · Updated 2 weeks ago
- [CVPR 2023] Official code for "Zero-shot Referring Image Segmentation with Global-Local Context Features" ☆128 · Updated 7 months ago
- ☆532 · Updated last year
- Official Open Source code for "Scaling Language-Image Pre-training via Masking" ☆427 · Updated 2 years ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆796 · Updated last year
- [CVPR 2024] Official implementation of "VRP-SAM: SAM with Visual Reference Prompt" ☆162 · Updated last year
- [CVPR 24] The repository provides code for running inference and training for "Segment and Caption Anything" (SCA), links for downloadin… ☆230 · Updated last year
- Official implementation of the 'CLIP-DINOiser: Teaching CLIP a few DINO tricks' paper. ☆261 · Updated last year
- [ICCV 2023] Official implementation of the paper "A Simple Framework for Open-Vocabulary Segmentation and Detection" ☆734 · Updated last year
- [ICLR 2024 Spotlight] Code release of CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction ☆195 · Updated last year
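
Several of the entries above (the CLIP + SAM experiment, the segment-anything/CLIP connectors, and the open-vocabulary segmenters) follow the same basic recipe: SAM proposes class-agnostic masks and CLIP scores each region against free-form text prompts. The snippet below is a minimal, generic sketch of that recipe, not code taken from any of the listed repositories; it assumes the `segment_anything` and Hugging Face `transformers` packages are installed, and the SAM checkpoint path and prompt list are placeholders.

```python
# Hedged sketch: SAM for class-agnostic masks + CLIP for open-vocabulary labels.
# Not the implementation of any repository listed above; paths/prompts are placeholders.
import numpy as np
import torch
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
from transformers import CLIPModel, CLIPProcessor


def label_masks(image_path, prompts, sam_ckpt="sam_vit_b.pth"):
    image = np.array(Image.open(image_path).convert("RGB"))

    # 1) Class-agnostic mask proposals from SAM.
    sam = sam_model_registry["vit_b"](checkpoint=sam_ckpt)  # placeholder checkpoint path
    masks = SamAutomaticMaskGenerator(sam).generate(image)

    # 2) Crop each mask's bounding box for CLIP scoring.
    kept, crops = [], []
    for m in masks:
        x, y, w, h = (int(v) for v in m["bbox"])  # XYWH box around the mask
        crop = image[y:y + h, x:x + w]
        if crop.size == 0:
            continue
        kept.append(m)
        crops.append(Image.fromarray(crop))

    # 3) Score every crop against every text prompt with CLIP.
    clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    inputs = processor(text=prompts, images=crops, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = clip(**inputs).logits_per_image  # (num_masks, num_prompts)

    # 4) Assign each mask the best-matching prompt.
    best = logits.argmax(dim=-1)
    return [(m["segmentation"], prompts[i]) for m, i in zip(kept, best)]
```

Cropping to the bounding box is only one design choice; some of the listed projects instead mask out the background, feed the full image with mask-guided attention, or use dense CLIP features directly, which generally handles thin or non-compact objects better.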