haochenheheda / segment-anything-annotator
We developed a Python UI based on labelme and segment-anything for pixel-level annotation. It supports generating multiple masks with SAM (box/point prompts), efficient polygon modification, and category recording. We will add more features (such as incorporating CLIP-based methods for category proposal and VOS methods for video datasets).
☆355 · Updated last year
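To illustrate the box/point prompt formats that annotation tools like this pass to SAM, here is a minimal sketch of how prompts are laid out for the official `segment-anything` package's `SamPredictor`. The coordinate values and checkpoint path below are placeholder assumptions, and the actual model call is shown commented out because it requires the package and a downloaded checkpoint:

```python
import numpy as np

# SamPredictor.predict accepts point and box prompts as numpy arrays.
# Point prompts: an (N, 2) array of (x, y) pixel coordinates plus an (N,)
# label array where 1 = foreground click and 0 = background click.
point_coords = np.array([[320, 240], [100, 80]], dtype=np.float32)
point_labels = np.array([1, 0], dtype=np.int32)

# Box prompt: a length-4 array in XYXY order (x_min, y_min, x_max, y_max).
box = np.array([50, 60, 400, 380], dtype=np.float32)

# With a loaded model, the call would look like this (commented out here
# because it needs the `segment-anything` package and a checkpoint file;
# the path is a placeholder):
#
#   from segment_anything import sam_model_registry, SamPredictor
#   sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
#   predictor = SamPredictor(sam)
#   predictor.set_image(image)  # HWC uint8 RGB array
#   masks, scores, _ = predictor.predict(
#       point_coords=point_coords,
#       point_labels=point_labels,
#       box=box,
#       multimask_output=True,  # return several candidate masks to pick from
#   )

print(point_coords.shape, point_labels.shape, box.shape)
```

With `multimask_output=True`, SAM returns several candidate masks ranked by predicted quality, which is what lets an annotation UI offer the user a choice of masks for a single click or box.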
Alternatives and similar repositories for segment-anything-annotator:
Users interested in segment-anything-annotator are comparing it to the libraries listed below:
- Fine-tune Segment-Anything Model with Lightning Fabric.☆519 · Updated 10 months ago
- Fine-tune SAM (Segment Anything Model) for computer vision tasks such as semantic segmentation, matting, detection ... in specific scena…☆801 · Updated last year
- [ICLR'24] Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching☆465 · Updated last month
- A collection of projects, papers, and source code for Meta AI's Segment Anything Model (SAM) and related studies.☆339 · Updated last month
- Downstream-Dino-V2: A GitHub repository featuring an easy-to-use implementation of the DINOv2 model by Facebook for downstream tasks such…☆218 · Updated last year
- [CVPR 2023] Official implementation of the paper "Mask DINO: Towards A Unified Transformer-based Framework for Object Detection and Segme…☆1,252 · Updated last year
- [ICCV 2023] Official implementation of the paper "A Simple Framework for Open-Vocabulary Segmentation and Detection"☆681 · Updated last year
- This is the third-party implementation of the paper Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detectio…☆508 · Updated 7 months ago
- Simple Finetuning Starter Code for Segment Anything☆129 · Updated last year
- Adapting Meta AI's Segment Anything to Downstream Tasks with Adapters and Prompts☆1,111 · Updated last month
- Using CLIP and SAM to segment any instance you specify with a text prompt of instance names☆173 · Updated last year
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning"☆430 · Updated 9 months ago
- This repository provides code for training/fine-tuning the Meta Segment Anything Model 2 (SAM 2)☆174 · Updated 4 months ago
- CoRL 2024☆368 · Updated 3 months ago
- This repository is for the first comprehensive survey on Meta AI's Segment Anything Model (SAM).☆881 · Updated this week
- Experiment on combining CLIP with SAM to do open-vocabulary image segmentation.☆354 · Updated last year
- Segment-anything related awesome extensions/projects/repos.☆343 · Updated last year
- A central hub for gathering and showcasing amazing projects that extend OpenMMLab with SAM and other exciting features.☆1,157 · Updated 10 months ago
- SimpleClick: Interactive Image Segmentation with Simple Vision Transformers (ICCV 2023)☆217 · Updated 8 months ago
- Official implementation of "Segment Any Anomaly without Training via Hybrid Prompt Regularization (SAA+)".☆765 · Updated last year
- Fine-tuning Grounding DINO☆83 · Updated last month
- Connecting segment-anything's output masks with the CLIP model; Awesome-Segment-Anything-Works☆185 · Updated 3 months ago
- This is an implementation of zero-shot instance segmentation using Segment Anything.☆307 · Updated last year
- Code release for our CVPR 2023 paper "Detecting Everything in the Open World: Towards Universal Object Detection".☆555 · Updated last year
- Open-vocabulary Semantic Segmentation☆325 · Updated 3 months ago
- [CVPR2023] FastInst: A Simple Query-Based Model for Real-Time Instance Segmentation☆187 · Updated 11 months ago
- YOLOv8 model with Meta's SAM☆125 · Updated last year
- Fast annotation using the Segment Anything (SAM) model☆212 · Updated last month
- This is the official PyTorch implementation of the paper Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP.☆707 · Updated last year
- Combining Segment Anything (SAM) with Grounded DINO for zero-shot object detection and CLIPSeg for zero-shot segmentation☆391 · Updated 8 months ago