autodistill / autodistill-efficient-yolo-world
EfficientSAM + YOLO World base model for use with Autodistill.
☆10 · Updated last year
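Below is a minimal sketch of how a base model like this is typically wired into Autodistill to auto-label an image folder. The `EfficientYOLOWorld` import, the prompt-to-class mapping, and the folder paths are assumptions based on the standard Autodistill base-model interface; check the repository README for the exact class name and arguments.

```python
# Hypothetical usage sketch -- the class name, prompts, and paths are assumptions
# that follow the common Autodistill base-model pattern.
from autodistill.detection import CaptionOntology
from autodistill_efficient_yolo_world import EfficientYOLOWorld  # assumed import

# Map text prompts (keys) to the class names (values) written into the dataset.
ontology = CaptionOntology({"person": "person", "hard hat": "helmet"})

base_model = EfficientYOLOWorld(ontology=ontology)

# Run zero-shot detection on a single image.
detections = base_model.predict("image.jpeg")
print(detections)

# Auto-label every image in a folder, producing a dataset for a target model.
base_model.label(input_folder="./images", output_folder="./dataset")
```

The labeled output can then be used to train a small target model (for example, a YOLO detector), which is the usual Autodistill distillation workflow.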
Alternatives and similar repositories for autodistill-efficient-yolo-world
Users interested in autodistill-efficient-yolo-world are comparing it to the libraries listed below.
- Vision-oriented multimodal AI ☆49 · Updated last year
- EfficientViT is a new family of vision models for efficient high-resolution vision. ☆26 · Updated last year
- Official PyTorch implementation of Self-emerging Token Labeling ☆34 · Updated last year
- Simple implementation of TinyGPTV in super simple Zeta lego blocks ☆16 · Updated 7 months ago
- "Towards Improving Document Understanding: An Exploration on Text-Grounding via MLLMs", 2023 ☆14 · Updated 6 months ago
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆36 · Updated last year
- Stable Diffusion in TensorRT 8.5+ ☆14 · Updated 2 years ago
- A simple demo for using Grounding DINO and Segment Anything 2 models together ☆20 · Updated 10 months ago
- ☆32 · Updated 5 months ago
- PyTorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models ☆28 · Updated last year
- ☆34 · Updated last year
- SAM-CLIP module for use with Autodistill. ☆15 · Updated last year
- Implementation of VisionLLaMA from the paper "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta ☆16 · Updated 7 months ago
- EdgeSAM model for use with Autodistill. ☆27 · Updated last year
- Enable everyone to develop, optimize and deploy AI models natively on everyone's devices. ☆10 · Updated last year
- Official training and inference code for Amodal Expander, proposed in Tracking Any Object Amodally ☆18 · Updated 11 months ago
- Modern Stable Diffusion model family - Fluently ☆32 · Updated last year
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆60 · Updated 4 months ago
- An interactive demo based on Segment Anything for stroke-based painting that enables human-like painting. ☆35 · Updated 2 years ago
- ☆24 · Updated last year
- An open-source implementation for fine-tuning SmolVLM. ☆40 · Updated last month
- Various test models in WNNX format. They can be viewed with `pip install wnetron && wnetron`. ☆12 · Updated 3 years ago
- Implementation of the MC-ViT model from the paper "Memory Consolidation Enables Long-Context Video Understanding" ☆20 · Updated 2 months ago
- Use Florence-2 to auto-label data for training fine-tuned object detection models. ☆64 · Updated 10 months ago
- ☆14 · Updated 2 years ago
- Automatic segmentation label generation with SAM (Segment Anything) + Grounding DINO ☆19 · Updated 4 months ago
- Codebase for the Recognize Anything Model (RAM) ☆80 · Updated last year
- This repository is for the first survey on SAM and SAM 2 for videos. ☆51 · Updated last month
- ☆27 · Updated 7 months ago
- The open-source implementation of the model from "Scaling Vision Transformers to 22 Billion Parameters" ☆29 · Updated 2 months ago