autodistill / autodistill-efficient-yolo-world
EfficientSAM + YOLO World base model for use with Autodistill.
☆10 · Updated last year
Alternatives and similar repositories for autodistill-efficient-yolo-world
Users interested in autodistill-efficient-yolo-world are comparing it to the libraries listed below.
- Official PyTorch implementation of Self-emerging Token Labeling ☆35 · Updated last year
- PyTorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models ☆28 · Updated last year
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model…) ☆36 · Updated last year
- Vision-oriented multimodal AI ☆49 · Updated last year
- [NeurIPS 2022] Official implementation of the paper "Expediting Large-Scale Vision Transformer for Dense Prediction without Fi…" ☆85 · Updated last year
- Official PyTorch implementation of "No Time to Waste: Squeeze Time into Channel for Mobile Video Understanding" ☆31 · Updated last year
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆60 · Updated 5 months ago
- Implementation of the MC-ViT model from the paper "Memory Consolidation Enables Long-Context Video Understanding" ☆22 · Updated last week
- Enable everyone to develop, optimize, and deploy AI models natively on their own devices. ☆10 · Updated last year
- ☆34 · Updated last year
- Code for the experiments in "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated 10 months ago
- EdgeSAM model for use with Autodistill. ☆27 · Updated last year
- EfficientViT, a new family of models for efficient high-resolution vision. ☆26 · Updated last year
- Official training and inference code for Amodal Expander, proposed in Tracking Any Object Amodally ☆18 · Updated last year
- Simple implementation of TinyGPT-V in super simple Zeta lego blocks ☆16 · Updated 8 months ago
- The first survey on SAM & SAM 2 for videos. ☆52 · Updated 3 months ago
- Exploration of Adept's multimodal Fuyu-8B model. 🤓 🔍 ☆28 · Updated last year
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability. ☆94 · Updated 7 months ago
- ☆69 · Updated last year
- An efficient multi-modal instruction-following data synthesis tool and the official implementation of Oasis https://arxiv.org/abs/2503.08… ☆29 · Updated 2 months ago
- [AAAI 2025] ChatterBox: Multi-round Multimodal Referring and Grounding ☆55 · Updated 3 months ago
- Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models. ☆126 · Updated last year
- Code for ICML 2023 "Learning Dynamic Query Combinations for Transformer-based Object Detection and Segmentation" ☆37 · Updated last year
- ☆68 · Updated 2 weeks ago
- VimTS: A Unified Video and Image Text Spotter ☆77 · Updated 8 months ago
- Pruned CoTracker architecture for tracking the myocardium in 2D echo images. ☆15 · Updated 3 months ago
- 💡💡💡 Awesome computer vision apps in Gradio ☆53 · Updated last year
- Codebase for the Recognize Anything Model (RAM) ☆82 · Updated last year
- The open-source implementation of the model from "Scaling Vision Transformers to 22 Billion Parameters" ☆30 · Updated 2 weeks ago
- [PR 2024] A large Cross-Modal Video Retrieval Dataset with Reading Comprehension ☆26 · Updated last year