sovit-123 / SAM_Molmo_Whisper
An integration of the Segment Anything Model (SAM), Molmo, and Whisper to segment objects using voice and natural language.
⭐ 29 · Updated 6 months ago
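The pipeline this repository describes (roughly: Whisper transcribes a spoken prompt, Molmo points at the referenced object, SAM segments around that point) hinges on turning Molmo's textual point output into SAM point prompts. As a minimal, hypothetical sketch, assuming Molmo replies with `<point x="…" y="…">` tags whose coordinates are percentages of the image size (the repository's actual glue code may differ), the conversion step could look like:

```python
import re

def parse_molmo_points(reply: str, width: int, height: int):
    """Convert Molmo-style <point x="61.5" y="40.4"> tags into pixel
    (x, y) coordinates usable as SAM point prompts.

    Assumption: coordinates in the reply are percentages (0-100) of the
    image dimensions, as in Molmo's pointing output format.
    """
    points = []
    for m in re.finditer(r'<point\s+x="([\d.]+)"\s+y="([\d.]+)"', reply):
        x_pct, y_pct = float(m.group(1)), float(m.group(2))
        points.append((x_pct / 100.0 * width, y_pct / 100.0 * height))
    return points

# Example: a (hypothetical) Molmo reply for "point at the dog" on a 640x480 image
reply = '<point x="50.0" y="25.0" alt="dog">dog</point>'
print(parse_molmo_points(reply, 640, 480))  # [(320.0, 120.0)]
```

The resulting pixel coordinates would then be passed as positive point prompts (with label 1) to a SAM predictor to produce the segmentation mask.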
Alternatives and similar repositories for SAM_Molmo_Whisper
Users interested in SAM_Molmo_Whisper are comparing it to the libraries listed below.
- Use Florence 2 to auto-label data for use in training fine-tuned object detection models. ⭐ 67 · Updated last year
- Inference and fine-tuning examples for vision models from 🤗 Transformers. ⭐ 161 · Updated last month
- EdgeSAM model for use with Autodistill. ⭐ 29 · Updated last year
- Eye exploration. ⭐ 28 · Updated 7 months ago
- Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models. ⭐ 128 · Updated last year
- This repository demonstrates various examples using YOLO. ⭐ 13 · Updated last year
- Use Grounding DINO, Segment Anything, and GPT-4V to label images with segmentation masks for use in training smaller, fine-tuned models. ⭐ 66 · Updated last year
- A Flask-based web application designed to compare text and image embeddings using the CLIP model. ⭐ 22 · Updated last year
- Take your LLM to the optometrist. ⭐ 39 · Updated last month
- An SDK for Transformers + YOLO and other SSD family models. ⭐ 63 · Updated 7 months ago
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ⭐ 36 · Updated last year
- Which model is the best at object detection? Which is best for small or large objects? We compare the results in a handy leaderboard. ⭐ 87 · Updated this week
- Real-time object detection using Florence-2 with a user-friendly GUI. ⭐ 30 · Updated last month
- This repo is a packaged version of the Yolov9 model. ⭐ 89 · Updated 2 weeks ago
- Creation of annotated datasets from scratch using Generative AI and foundation computer vision models. ⭐ 124 · Updated 2 weeks ago
- Using the moondream VLM with optical flow for promptable object tracking. ⭐ 71 · Updated 6 months ago
- Use Grounding DINO, Segment Anything, and CLIP to label objects in images. ⭐ 31 · Updated last year
- Chat with Phi 3.5/3 Vision LLMs. Phi-3.5-vision is a lightweight, state-of-the-art open multimodal model built upon datasets which includ… ⭐ 34 · Updated 8 months ago
- Ultralytics Notebooks. ⭐ 105 · Updated 3 weeks ago
- YOLOExplorer: Iterate on your YOLO / CV datasets using SQL, vector semantic search, and more within seconds. ⭐ 135 · Updated 2 weeks ago
- VLM-driven tool that processes surveillance videos, extracts frames, and generates insightful annotations using a fine-tuned Florence-2 V… ⭐ 124 · Updated 3 months ago
- Testing and evaluating the capabilities of vision-language models (PaliGemma) in performing computer vision tasks such as object detectio… ⭐ 84 · Updated last year
- Automatic thief detection via CCTV with an alarm system and perpetrator image capture using YOLOv5 + ROI. This project utilizes computer vis… ⭐ 13 · Updated 10 months ago
- Unofficial implementation and experiments related to Set-of-Mark (SoM). ⭐ 88 · Updated last year
- The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained mode… ⭐ 13 · Updated last year
- A simple demo for using the Grounding DINO and Segment Anything v2 models together. ⭐ 20 · Updated last year
- Notebooks using the Neural Magic libraries. ⭐ 39 · Updated last year
- Streamlit app presented at the Streamlit LLMs Hackathon, September 23. ⭐ 16 · Updated last year
- Segment Anything UI for annotations, written in PySide6. Inspired by Meta's demo web page. ⭐ 14 · Updated 6 months ago
- 100 Days of GPU Challenge. ⭐ 22 · Updated 2 weeks ago