maxi-w / CLIP-SAM
An experiment combining CLIP with SAM to perform open-vocabulary image segmentation.
☆342 · updated last year
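The general recipe behind this kind of CLIP + SAM combination is: SAM proposes class-agnostic masks, then CLIP scores each masked region against free-form text prompts to assign open-vocabulary labels. Below is a minimal sketch of that pipeline, assuming the `segment_anything` and OpenAI `clip` packages; the checkpoint path, image path, and prompts are placeholders, and this is not necessarily how this repository implements it.

```python
# Sketch: SAM generates masks, CLIP labels each masked region (placeholder paths/prompts).
import numpy as np
import torch
import clip
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load SAM (checkpoint path is a placeholder) and CLIP.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to(device)
mask_generator = SamAutomaticMaskGenerator(sam)
clip_model, preprocess = clip.load("ViT-B/32", device=device)

# Free-form vocabulary to segment (example prompts).
prompts = ["a photo of a cat", "a photo of a dog", "a photo of a chair"]
with torch.no_grad():
    text_feat = clip_model.encode_text(clip.tokenize(prompts).to(device))
    text_feat /= text_feat.norm(dim=-1, keepdim=True)

# SAM expects an RGB uint8 array.
image = np.array(Image.open("example.jpg").convert("RGB"))
masks = mask_generator.generate(image)

results = []
for m in masks:
    x, y, w, h = map(int, m["bbox"])  # mask bounding box in XYWH format
    crop = Image.fromarray(np.ascontiguousarray(image[y:y + h, x:x + w]))
    with torch.no_grad():
        img_feat = clip_model.encode_image(preprocess(crop).unsqueeze(0).to(device))
        img_feat /= img_feat.norm(dim=-1, keepdim=True)
        sims = (img_feat @ text_feat.T).squeeze(0)  # cosine similarity per prompt
    best = int(sims.argmax())
    results.append((prompts[best], float(sims[best]), m["segmentation"]))

for label, score, _ in results:
    print(f"{label}: {score:.3f}")
```

Scoring whole-bbox crops is the simplest variant; many of the related projects below instead mask out the background or use CLIP feature maps directly, which tends to give cleaner open-vocabulary assignments.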
Related projects
Alternatives and complementary repositories for CLIP-SAM
- CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks (☆361, updated last year)
- [ICCV 2023] Official implementation of the paper "A Simple Framework for Open-Vocabulary Segmentation and Detection" (☆653, updated 9 months ago)
- Official PyTorch implementation of the paper "Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP" (☆690, updated last year)
- [ICLR'24] Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching (☆448, updated 3 months ago)
- Official PyTorch implementation of "Extract Free Dense Labels from CLIP" (ECCV 22 Oral) (☆405, updated 2 years ago)
- Segment Anything combined with CLIP (☆331, updated 9 months ago)
- Connecting segment-anything's output masks with the CLIP model; Awesome-Segment-Anything-Works (☆178, updated last month)
- Open-vocabulary Semantic Segmentation (☆315, updated last month)
- Grounded Segment Anything: From Objects to Parts (☆388, updated last year)
- Segment-anything related awesome extensions/projects/repos (☆343, updated last year)
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" (☆393, updated 7 months ago)
- Official PyTorch implementation of ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models [CVPR 2023 Highlight] (☆858, updated 4 months ago)
- [NeurIPS 2023] Code release for "Hierarchical Open-vocabulary Universal Image Segmentation" (☆271, updated 8 months ago)
- A collection of projects, papers, and source code for Meta AI's Segment Anything Model (SAM) and related studies.