autodistill / autodistill-efficient-yolo-world
EfficientSAM + YOLO World base model for use with Autodistill.
☆10 · Updated last year
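For context, here is a minimal sketch of how an Autodistill base model like this is typically used to auto-label a folder of images. The import path and class name `EfficientYOLOWorld` are assumptions based on Autodistill's package naming conventions, so verify them against this repository's README before use.

```python
# Minimal auto-labeling sketch using the Autodistill API.
# NOTE: the import path and class name `EfficientYOLOWorld` are assumptions
# based on Autodistill's package conventions; check the repo README.
from autodistill.detection import CaptionOntology
from autodistill_efficient_yolo_world import EfficientYOLOWorld

# Map natural-language prompts (keys) to the class names (values)
# that should appear in the labeled dataset.
base_model = EfficientYOLOWorld(
    ontology=CaptionOntology({"person": "person", "forklift": "forklift"})
)

# Detect with YOLO World, segment with EfficientSAM, and write
# auto-generated labels for every image in the input folder.
base_model.label(input_folder="./images", output_folder="./dataset")
```

The labeled output can then be used to train a smaller target model, which is the usual Autodistill distillation workflow.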
Alternatives and similar repositories for autodistill-efficient-yolo-world
Users interested in autodistill-efficient-yolo-world are comparing it to the libraries listed below.
- Official PyTorch implementation of Self-emerging Token Labeling ☆35 · Updated last year
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆37 · Updated 2 years ago
- Vision-oriented multimodal AI ☆49 · Updated last year
- PyTorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models ☆28 · Updated last year
- [NeurIPS 2025] Elevating Visual Perception in Multimodal LLMs with Visual Embedding Distillation, arXiv 2024 ☆66 · Updated last month
- [NeurIPS2022] This is the official implementation of the paper "Expediting Large-Scale Vision Transformer for Dense Prediction without Fi… ☆85 · Updated 2 years ago
- Official PyTorch implementation of "No Time to Waste: Squeeze Time into Channel for Mobile Video Understanding" ☆32 · Updated last year
- An open-source implementation for fine-tuning SmolVLM. ☆59 · Updated 2 months ago
- ☆35 · Updated last year
- ☆72 · Updated 4 months ago
- [PR 2024] A large Cross-Modal Video Retrieval Dataset with Reading Comprehension ☆28 · Updated last year
- Simple implementation of TinyGPTV in super simple Zeta lego blocks ☆16 · Updated last year
- EfficientViT is a new family of vision models for efficient high-resolution vision. ☆29 · Updated 2 years ago
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated last year
- Exploration of the multimodal Fuyu-8B model from Adept. 🤓 🔍 ☆27 · Updated 2 years ago
- Enable everyone to develop, optimize, and deploy AI models natively on their own devices. ☆12 · Updated last year
- INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model ☆42 · Updated last year
- VimTS: A Unified Video and Image Text Spotter ☆79 · Updated last year
- The open source implementation of the model from "Scaling Vision Transformers to 22 Billion Parameters" ☆31 · Updated last month
- An efficient multi-modal instruction-following data synthesis tool and the official implementation of Oasis https://arxiv.org/abs/2503.08… ☆33 · Updated 6 months ago
- EdgeSAM model for use with Autodistill. ☆29 · Updated last year
- Implementation of the MC-ViT model from the paper "Memory Consolidation Enables Long-Context Video Understanding" ☆25 · Updated last month
- ☆69 · Updated last year
- Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models. ☆134 · Updated last year
- Official training and inference code of Amodal Expander, proposed in Tracking Any Object Amodally ☆19 · Updated last year
- Implementation of a Hierarchical Mamba as described in the paper "Hierarchical State Space Models for Continuous Sequence-to-Sequence Mo… ☆14 · Updated last year
- A benchmark dataset and simple code examples for measuring the perception and reasoning of multi-sensor Vision Language models. ☆19 · Updated 11 months ago
- [AAAI2025] ChatterBox: Multi-round Multimodal Referring and Grounding for multi-round multimodal dialogues ☆58 · Updated 7 months ago
- ☆58 · Updated 2 weeks ago
- Codebase for the Recognize Anything Model (RAM) ☆87 · Updated last year