AILab-CVC / YOLO-World
[CVPR 2024] Real-Time Open-Vocabulary Object Detection
☆6,070 · Updated 9 months ago
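To make "open-vocabulary" concrete: with YOLO-World, the detection vocabulary is supplied as free-form text at inference time rather than fixed at training time. Below is a minimal sketch using the Ultralytics integration of YOLO-World; the `ultralytics` package, the `yolov8s-world.pt` checkpoint name, and the sample image path are assumptions about the reader's setup, not details taken from this listing.

```python
# Minimal open-vocabulary detection sketch via the Ultralytics
# integration of YOLO-World. Assumes `pip install ultralytics` and
# that the yolov8s-world.pt checkpoint can be downloaded.
from ultralytics import YOLOWorld

# Load a pretrained YOLO-World model (checkpoint name is an assumption).
model = YOLOWorld("yolov8s-world.pt")

# Set the vocabulary at inference time -- no retraining needed.
model.set_classes(["person", "bicycle", "traffic light"])

# Run detection on a placeholder image path and print the boxes.
results = model.predict("street.jpg")
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)
```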
Alternatives and similar repositories for YOLO-World
Users who are interested in YOLO-World are comparing it to the libraries listed below.
- EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything ☆2,446 · Updated 11 months ago
- [ECCV 2024] API code for T-Rex2: Towards Generic Object Detection via Text-Visual Prompt Synergy ☆2,605 · Updated last month
- [ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection" ☆9,379 · Updated last year
- Implementation of the paper "YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information" ☆9,446 · Updated last year
- Images to inference with no labeling (use foundation models to train supervised models). ☆2,502 · Updated 6 months ago
- [CVPR 2024] Official RT-DETR (RTDETR paddle pytorch), Real-Time DEtection TRansformer, DETRs Beat YOLOs on Real-time Object Detection. ☆4,516 · Updated last week
- [CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation ☆7,908 · Updated last year
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 ☆3,088 · Updated 3 weeks ago
- Open-source and strong foundation image recognition models. ☆3,498 · Updated 9 months ago
- Effortless data labeling with AI support from Segment Anything and other awesome models. ☆7,219 · Updated this week
- YOLOE: Real-Time Seeing Anything [ICCV 2025] ☆1,934 · Updated 5 months ago
- The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model… ☆17,881 · Updated 11 months ago
- [ECCV 2024] Official implementation of the paper "Semantic-SAM: Segment and Recognize Anything at Any Granularity" ☆2,782 · Updated 5 months ago
- This is the official code for the MobileSAM project that makes SAM lightweight for mobile applications and beyond! ☆5,508 · Updated last year
- Efficient vision foundation models for high-resolution generation and perception. ☆3,164 · Updated 3 months ago
- Segment Anything in High Quality [NeurIPS 2023] ☆4,129 · Updated 2 months ago
- Official implementation of the CVPR 2024 highlight paper "Matching Anything by Segmenting Anything" ☆1,354 · Updated 7 months ago
- Effortless AI-assisted data labeling with AI support from YOLO, Segment Anything (SAM+SAM2), MobileSAM!! ☆2,925 · Updated 7 months ago
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆1,064 · Updated 10 months ago
- YOLOv10: Real-Time End-to-End Object Detection [NeurIPS 2024] ☆11,118 · Updated 8 months ago
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once" ☆4,751 · Updated last year
- SAM with text prompt ☆2,495 · Updated 3 months ago
- Fast Segment Anything ☆8,171 · Updated last year
- Automated dense category annotation engine that serves as the initial semantic labeling for the Segment Anything dataset (SA-1B). ☆2,296 · Updated 2 years ago
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ☆1,298 · Updated 4 months ago
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and cloud. ☆3,684 · Updated last week
- Grounded Language-Image Pre-training ☆2,550 · Updated last year
- [CVPR 2023 Highlight] InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions ☆2,770 · Updated 8 months ago
- OpenMMLab YOLO series toolbox and benchmark. Implemented RTMDet, RTMDet-Rotated, YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOX, PPYOLOE, etc. ☆3,349 · Updated last year
- An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary algorithm… ☆3,084 · Updated last year