autodistill / autodistill-grounded-edgesam
EdgeSAM model for use with Autodistill.
☆ 25 · Updated 4 months ago
Related projects
Alternatives and complementary repositories for autodistill-grounded-edgesam
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆ 34 · Updated last year
- Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models. ☆ 89 · Updated 3 months ago
- Use Florence 2 to auto-label data for use in training fine-tuned object detection models. ☆ 59 · Updated 2 months ago
- Python scripts performing optical flow estimation using the NeuFlowV2 model in ONNX. ☆ 32 · Updated last month
- [NeurIPS 2023] HASSOD: Hierarchical Adaptive Self-Supervised Object Detection ☆ 49 · Updated 9 months ago
- Repo for event-based binary image reconstruction. ☆ 30 · Updated 7 months ago
- Official code for Tracking Any Object Amodally. ☆ 113 · Updated 3 months ago
- ClickDiffusion: Harnessing LLMs for Interactive Precise Image Editing ☆ 65 · Updated 5 months ago
- Use Grounding DINO, Segment Anything, and GPT-4V to label images with segmentation masks for use in training smaller, fine-tuned models. ☆ 65 · Updated 11 months ago
- SAM-CLIP module for use with Autodistill. ☆ 12 · Updated 11 months ago
- Tracking through Containers and Occluders in the Wild (CVPR 2023) - Official Implementation ☆ 39 · Updated 5 months ago
- CAVIS: Context-Aware Video Instance Segmentation ☆ 58 · Updated 3 weeks ago
- Code for the paper "A new baseline for edge detection: Make Encoder-Decoder great again". ☆ 29 · Updated this week
- Repository for the paper "TiC-CLIP: Continual Training of CLIP Models". ☆ 93 · Updated 4 months ago
- GroundedSAM base model plugin for Autodistill. ☆ 44 · Updated 6 months ago
- A simple demo for using the Grounding DINO and Segment Anything 2 models together. ☆ 16 · Updated 3 months ago
- Official code repository for the paper "ExPLoRA: Parameter-Efficient Extended Pre-training to Adapt Vision Transformers under Domain Shifts". ☆ 23 · Updated last month
- Vehicle speed estimation using YOLOv8. ☆ 30 · Updated 6 months ago
- Zero-copy multimodal vector DB with CUDA and CLIP/SigLIP. ☆ 36 · Updated 5 months ago
- Implementation of VisionLLaMA from the paper "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta. ☆ 16 · Updated this week
- Unofficial implementation and experiments related to Set-of-Mark (SoM) 👁️ ☆ 76 · Updated last year
- Official training and inference code for the Amodal Expander, proposed in Tracking Any Object Amodally. ☆ 14 · Updated 3 months ago
- Accurately locating each head's position in crowd scenes is a crucial task in the field of crowd analysis. However, traditional densi… ☆ 20 · Updated 7 months ago
- Our idea is to combine the power of computer vision models and LLMs. We use YOLO, CLIP, and DINOv2 to extract high-level features from imag… ☆ 99 · Updated last year
- A component that allows you to annotate an image with points and boxes. ☆ 17 · Updated 10 months ago