autodistill / autodistill-efficient-yolo-world
EfficientSAM + YOLO World base model for use with Autodistill.
(☆10, updated last year)
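Like other Autodistill base models, this one is meant to auto-label a folder of images from text prompts so the resulting dataset can train a smaller target model. Below is a minimal sketch, assuming the package follows the standard Autodistill base-model interface; the `EfficientYOLOWorld` class name and the paths are assumptions, not confirmed from this repository.

```python
from autodistill.detection import CaptionOntology
from autodistill_efficient_yolo_world import EfficientYOLOWorld  # assumed class name

# Map natural-language prompts (keys) to the class names written to the dataset (values).
base_model = EfficientYOLOWorld(
    ontology=CaptionOntology({"person": "person", "forklift": "forklift"})
)

# Auto-label every .jpg in ./images and write an annotated dataset to ./dataset.
base_model.label(input_folder="./images", extension=".jpg", output_folder="./dataset")
```

In the standard Autodistill workflow, the dataset produced by `label()` is then used to train a compact target model such as a YOLO detector.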
Alternatives and similar repositories for autodistill-efficient-yolo-world
Users interested in autodistill-efficient-yolo-world are comparing it to the libraries listed below.
- Official PyTorch implementation of Self-emerging Token Labeling (☆35, updated last year)
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… (☆37, updated last year)
- Vision-oriented multimodal AI (☆49, updated last year)
- PyTorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models (☆27, updated last year)
- Use Florence-2 to auto-label data for use in training fine-tuned object detection models. (☆67, updated last year)
- [NeurIPS 2022] This is the official implementation of the paper "Expediting Large-Scale Vision Transformer for Dense Prediction without Fi… (☆85, updated last year)
- (☆35, updated last year)
- [NeurIPS 2025] Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 (☆62, updated 2 weeks ago)
- An efficient multi-modal instruction-following data synthesis tool and the official implementation of Oasis https://arxiv.org/abs/2503.08… (☆31, updated 4 months ago)
- Exploration of the multimodal fuyu-8b model from Adept. 🤓 🔍 (☆27, updated last year)
- "Towards Improving Document Understanding: An Exploration on Text-Grounding via MLLMs", 2023 (☆15, updated 10 months ago)
- An open-source implementation for fine-tuning SmolVLM. (☆50, updated 3 weeks ago)
- Code for the experiments in "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" (☆101, updated last year)
- EdgeSAM model for use with Autodistill. (☆29, updated last year)
- EfficientViT is a new family of vision models for efficient high-resolution vision. (☆27, updated 2 years ago)
- Official training and inference code of Amodal Expander, proposed in Tracking Any Object Amodally (☆19, updated last year)
- Implementation of MC-ViT from the paper "Memory Consolidation Enables Long-Context Video Understanding" (☆22, updated this week)
- 🎮 Manipulates mobile phones just like you would. Official code for "MobA: A Two-Level Agent System for Efficient Mobile Task Automati… (☆25, updated 3 weeks ago)
- MobileLLM-R1 (☆28, updated last week)
- Implementation of VisionLLaMA from the paper "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta (☆16, updated 10 months ago)
- (☆71, updated 2 months ago)
- Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models. (☆132, updated last year)
- Automatic segmentation label generation with SAM (Segment Anything) + Grounding DINO; a sketch of this auto-labeling pattern appears after this list. (☆22, updated 7 months ago)
- Official PyTorch implementation of "No Time to Waste: Squeeze Time into Channel for Mobile Video Understanding" (☆32, updated last year)
- Detectron2 toolbox and benchmark for V3Det (☆18, updated last year)
- (☆70, updated last year)
- A benchmark dataset and simple code examples for measuring the perception and reasoning of multi-sensor vision-language models. (☆19, updated 9 months ago)
- SAM-CLIP module for use with Autodistill. (☆15, updated last year)
- VimTS: A Unified Video and Image Text Spotter (☆78, updated 11 months ago)
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability. (☆96, updated 9 months ago)
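Several items above (the SAM + Grounding DINO auto-labeler in particular) follow the same prompt-driven labeling pattern as this repository. A sketch of that pattern using Autodistill's documented `GroundedSAM` module, which pairs Grounding DINO with SAM to produce segmentation labels; the ontology and paths here are illustrative, not taken from any repo above:

```python
from autodistill.detection import CaptionOntology
from autodistill_grounded_sam import GroundedSAM

# Grounding DINO localizes boxes for each prompt; SAM refines them into masks.
base_model = GroundedSAM(ontology=CaptionOntology({"shipping container": "container"}))

# Single-image prediction returns supervision Detections (boxes + masks).
detections = base_model.predict("image.jpg")

# Or label a whole folder to build a segmentation dataset for training.
base_model.label(input_folder="./images", extension=".jpg", output_folder="./dataset")
```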