streamfog / sam2-app
☆56 · Updated last year
Alternatives and similar repositories for sam2-app
Users interested in sam2-app are comparing it to the libraries listed below.
- Segment anything UI for annotations (☆112, updated last month)
- Inference and fine-tuning examples for vision models from 🤗 Transformers (☆162, updated 5 months ago)
- Using the moondream VLM with optical flow for promptable object tracking (☆72, updated 10 months ago)
- Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models. (☆134, updated last year)
- CPU-compatible fork of the official SAMv2 implementation aimed at more accessible and documented tutorials (☆84, updated last month)
- Efficient Track Anything (☆762, updated last year)
- Lightweight, open-source, high-performance YOLO implementation (☆55, updated 7 months ago)
- Use Grounding DINO, Segment Anything, and GPT-4V to label images with segmentation masks for use in training smaller, fine-tuned models. (☆66, updated 2 years ago)
- (☆401, updated last year)
- Creation of annotated datasets from scratch using Generative AI and Foundation Computer Vision models (☆132, updated 3 weeks ago)
- Muggled SAM: Segmentation without the magic (☆181, updated 3 weeks ago)
- A tool for converting computer vision label formats. (☆81, updated last month)
- RobustSAM: Segment Anything Robustly on Degraded Images (CVPR 2024 Highlight) (☆364, updated last year)
- Let's bake an image. (☆15, updated last week)
- Code for the CVPR 2024 paper "DiffMOT: A Real-time Diffusion-based Multiple Object Tracker with Non-linear Prediction" (☆440, updated last year)
- Which model is the best at object detection? Which is best for small or large objects? We compare the results in a handy leaderboard. (☆95, updated 2 weeks ago)
- [CVPR 2025] Official PyTorch implementation of "EdgeTAM: On-Device Track Anything Model" (☆848, updated last month)
- AI assistant that can query visual datasets, search the FiftyOne docs, and answer general computer vision questions (☆250, updated last year)
- Segment anything UI for annotations written in PySide6, inspired by Meta's demo web page. (☆16, updated 10 months ago)
- webcamGPT - chat with video stream 💬 + 📸 (☆268, updated last year)
- Official code of "EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model" (☆492, updated 9 months ago)
- An integration of Segment Anything Model, Molmo, and Whisper to segment objects using voice and natural language. (☆30, updated 10 months ago)
- [NeurIPS 2024] SlimSAM: 0.1% Data Makes Segment Anything Slim (☆352, updated 3 months ago)
- (☆66, updated 9 months ago)
- [IROS24] Official code for "FruitNeRF: A Unified Neural Radiance Field based Fruit Counting Framework" - integrated into Nerfstudio (☆321, updated 6 months ago)
- GroundedSAM Base Model plugin for Autodistill (☆54, updated last year)
- 2nd place solution for the Generative Interior Design 2024 competition (☆126, updated last year)
- "YOLOLite – lightweight YOLO in PyTorch. ONNX export + CPU inference (Raspberry Pi friendly)." (☆57, updated 2 weeks ago)
- The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained mode… (☆89, updated last year)
- Testing and evaluating the capabilities of Vision-Language models (PaliGemma) in performing computer vision tasks such as object detectio… (☆85, updated last year)