sovit-123 / SAM_Molmo_Whisper
An integration of Segment Anything Model, Molmo, and Whisper to segment objects using voice and natural language.
☆27 · Updated 4 months ago
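From the description, the pipeline presumably chains Whisper (speech to text), Molmo (text query to image points), and SAM (points to segmentation mask). Molmo typically emits pointing results as XML-like `<point x="…" y="…">` tags with coordinates given as percentages of the image size, while SAM expects pixel-coordinate point prompts. A minimal, hypothetical glue function for that conversion (the function name and format assumptions are illustrative, not from the repository) might look like:

```python
import re

def molmo_points_to_sam_prompts(molmo_output, width, height):
    """Parse Molmo-style point tags, e.g. <point x="61.5" y="40.2" alt="cup">cup</point>,
    where x/y are percentages of the image dimensions, and convert them
    into integer pixel coordinates usable as SAM point prompts."""
    points = []
    for match in re.finditer(r'x="([\d.]+)"\s+y="([\d.]+)"', molmo_output):
        x_pct = float(match.group(1))
        y_pct = float(match.group(2))
        # Scale percentage coordinates to the actual image size.
        points.append((round(x_pct / 100 * width), round(y_pct / 100 * height)))
    return points

# Example: a single point at the image center of a 640x480 frame.
print(molmo_points_to_sam_prompts('<point x="50.0" y="50.0" alt="cup">cup</point>', 640, 480))
```

The resulting `(x, y)` tuples would then be passed to SAM's predictor as foreground point prompts alongside a label array of ones.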
Alternatives and similar repositories for SAM_Molmo_Whisper
Users interested in SAM_Molmo_Whisper are comparing it to the repositories listed below.
- Use Florence 2 to auto-label data for use in training fine-tuned object detection models. ☆64 · Updated 11 months ago
- Eye exploration ☆28 · Updated 5 months ago
- Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models. ☆125 · Updated 11 months ago
- Unofficial implementation and experiments related to Set-of-Mark (SoM) 👁️ ☆86 · Updated last year
- EdgeSAM model for use with Autodistill. ☆27 · Updated last year
- Testing and evaluating the capabilities of Vision-Language models (PaliGemma) in performing computer vision tasks such as object detectio… ☆81 · Updated last year
- VLM-driven tool that processes surveillance videos, extracts frames, and generates insightful annotations using a fine-tuned Florence-2 V… ☆118 · Updated last month
- Fine-tune Gemma 3 on an object detection task ☆69 · Updated last week
- Take your LLM to the optometrist. ☆32 · Updated last week
- Nassimos07 / Moving-Stopped-Persons-Real-Time-Detection-using-YOLOv8-or-YOLOv10-Roboflow_Supervision ☆9 · Updated 3 months ago
- Inference and fine-tuning examples for vision models from 🤗 Transformers ☆154 · Updated 2 months ago
- Real-time pose estimation pipeline with 🤗 Transformers ☆61 · Updated 5 months ago
- Which model is the best at object detection? Which is best for small or large objects? We compare the results in a handy leaderboard. ☆79 · Updated last week
- This repository demonstrates various examples using YOLO. ☆13 · Updated last year
- Real-time, YOLO-like object detection using Florence-2 with a user-friendly GUI. ☆27 · Updated 3 months ago
- Using the Moondream VLM with optical flow for promptable object tracking ☆68 · Updated 4 months ago
- ☆21 · Updated 8 months ago
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆36 · Updated last year
- Python scripts performing optical flow estimation using the NeuFlowV2 model in ONNX. ☆48 · Updated 10 months ago
- Solving computer vision with AI agents ☆33 · Updated last week
- Empower lerobot with multimodal Llama 3.2! ☆58 · Updated 7 months ago
- Zero-copy multimodal vector DB with CUDA and CLIP/SigLIP ☆59 · Updated 2 months ago
- YOLOExplorer: Iterate on your YOLO / CV datasets using SQL, vector semantic search, and more within seconds ☆132 · Updated last week
- Use Grounding DINO, Segment Anything, and GPT-4V to label images with segmentation masks for use in training smaller, fine-tuned models. ☆66 · Updated last year
- AnyModal is a flexible multimodal language model framework for PyTorch ☆100 · Updated 6 months ago
- My personal implementation of the model from "Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities", they haven't rel… ☆13 · Updated last year
- An SDK for Transformers + YOLO and other SSD-family models ☆63 · Updated 5 months ago
- Use Grounding DINO, Segment Anything, and CLIP to label objects in images. ☆31 · Updated last year
- Notebooks using the Neural Magic libraries 📓 ☆40 · Updated 11 months ago
- YOLOv10: Real-Time End-to-End Object Detection ☆11 · Updated last year