autodistill / autodistill-grounded-edgesam
EdgeSAM model for use with Autodistill.
☆26 · Updated 10 months ago
Alternatives and similar repositories for autodistill-grounded-edgesam:
Users interested in autodistill-grounded-edgesam are comparing it to the libraries listed below.
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆35 · Updated last year
- Use Florence 2 to auto-label data for use in training fine-tuned object detection models. ☆63 · Updated 7 months ago
- ☆23 · Updated 5 months ago
- Python scripts performing optical flow estimation using the NeuFlowV2 model in ONNX. ☆41 · Updated 6 months ago
- Odd-One-Out: Anomaly Detection by Comparing with Neighbors (CVPR 2025) ☆33 · Updated 4 months ago
- ClickDiffusion: Harnessing LLMs for Interactive Precise Image Editing ☆67 · Updated 10 months ago
- Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models. ☆119 · Updated 8 months ago
- Tracking through Containers and Occluders in the Wild (CVPR 2023), official implementation ☆41 · Updated 10 months ago
- Repo for event-based binary image reconstruction. ☆32 · Updated last year
- Use Grounding DINO, Segment Anything, and GPT-4V to label images with segmentation masks for use in training smaller, fine-tuned models. ☆66 · Updated last year
- Official code for "MITracker: Multi-View Integration for Visual Object Tracking" ☆53 · Updated 2 weeks ago
- Pixel Parsing: a reproduction of OCR-free, end-to-end document understanding models with open data ☆21 · Updated 8 months ago
- Find First, Track Next: Decoupling Identification and Propagation in Referring Video Object Segmentation ☆58 · Updated 2 weeks ago
- Unofficial implementation and experiments related to Set-of-Mark (SoM) 👁️ ☆87 · Updated last year
- SAM-CLIP module for use with Autodistill. ☆15 · Updated last year
- ☆39 · Updated 2 months ago
- A simple demo for using Grounding DINO and Segment Anything 2 models together ☆19 · Updated 8 months ago
- ☆61 · Updated this week
- Eye exploration ☆25 · Updated 2 months ago
- ☆36 · Updated last year
- Implementation of VisionLLaMA from the paper: "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta