longzw1997 / Open-GroundingDino
This is a third-party implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection."
☆515 · Updated 7 months ago
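Open-GroundingDino focuses on the training/pre-training side; for trying out text-prompted open-set detection it is typically paired with the upstream GroundingDINO inference utilities. The snippet below is a minimal sketch under that assumption: the `groundingdino.util.inference` calls come from the original IDEA-Research GroundingDINO package, and the config/checkpoint paths, image name, prompt, and thresholds are illustrative placeholders rather than values shipped with this repository.

```python
# Minimal text-prompted detection sketch (assumes the upstream
# GroundingDINO package is installed; paths/thresholds are placeholders).
import cv2
from groundingdino.util.inference import load_model, load_image, predict, annotate

CONFIG_PATH = "groundingdino/config/GroundingDINO_SwinT_OGC.py"   # placeholder config path
CHECKPOINT_PATH = "weights/groundingdino_swint_ogc.pth"           # placeholder checkpoint path

model = load_model(CONFIG_PATH, CHECKPOINT_PATH)
image_source, image = load_image("demo.jpg")                      # placeholder image

# The caption acts as the open-set "vocabulary": category phrases separated by " . "
boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="person . dog . backpack .",
    box_threshold=0.35,
    text_threshold=0.25,
)

annotated = annotate(image_source=image_source, boxes=boxes, logits=logits, phrases=phrases)
cv2.imwrite("annotated_demo.jpg", annotated)
```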
Alternatives and similar repositories for Open-GroundingDino:
Users interested in Open-GroundingDino are comparing it to the libraries listed below.
- Official implementation of OV-DINO: Unified Open-Vocabulary Detection with Language-Aware Selective Fusion ☆288 · Updated last week
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" ☆433 · Updated 10 months ago
- [ICCV 2023] Official implementation of the paper "A Simple Framework for Open-Vocabulary Segmentation and Detection" ☆682 · Updated last year
- Code release for our CVPR 2023 paper "Detecting Everything in the Open World: Towards Universal Object Detection". ☆559 · Updated last year
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆889 · Updated 3 weeks ago
- [ICCV 2023] DETRs with Collaborative Hybrid Assignments Training ☆1,111 · Updated last month
- (TPAMI 2024) A Survey on Open Vocabulary Learning ☆881 · Updated 2 months ago
- A curated list of papers, datasets and resources pertaining to open vocabulary object detection. ☆297 · Updated 7 months ago
- Fine-tuning Grounding DINO ☆84 · Updated last month
- CoRL 2024 ☆376 · Updated 3 months ago
- [CVPR 2023] Official implementation of the paper "Mask DINO: Towards A Unified Transformer-based Framework for Object Detection and Segme… ☆1,256 · Updated last year
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception ☆507 · Updated 9 months ago
- Official PyTorch implementation of "Multi-modal Queried Object Detection in the Wild" (accepted by NeurIPS 2023) ☆284 · Updated 11 months ago
- Downstream-Dino-V2: A GitHub repository featuring an easy-to-use implementation of the DINOv2 model by Facebook for downstream tasks such… ☆218 · Updated last year
- Adapting Meta AI's Segment Anything to Downstream Tasks with Adapters and Prompts ☆1,115 · Updated last month
- Fine-tune SAM (Segment Anything Model) for computer vision tasks such as semantic segmentation, matting, detection ... in specific scena… ☆804 · Updated last year
- [ECCV 2024] Tokenize Anything via Prompting ☆559 · Updated 2 months ago
- Open-vocabulary Semantic Segmentation ☆328 · Updated 3 months ago
- A collection of projects, papers, and source code for Meta AI's Segment Anything Model (SAM) and related studies. ☆341 · Updated 2 months ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆735 · Updated 10 months ago
- A curated list of papers and resources related to Described Object Detection, Open-Vocabulary/Open-World Object Detection and Referring E… ☆244 · Updated 3 weeks ago
- [ICLR'24] Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching ☆466 · Updated 2 months ago
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 ☆1,650 · Updated last month
- A central hub for gathering and showcasing amazing projects that extend OpenMMLab with SAM and other exciting features. ☆1,162 · Updated 10 months ago
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ☆855 · Updated 3 weeks ago
- Fine-tune Segment-Anything Model with Lightning Fabric. ☆520 · Updated 10 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆771 · Updated 6 months ago
- This is the official PyTorch implementation of the paper Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP. ☆708 · Updated last year
- We developed a Python UI based on labelme and segment-anything for pixel-level annotation. It supports multiple mask generation by SAM(bo… ☆357 · Updated last year
- This repository is for the first comprehensive survey on Meta AI's Segment Anything Model (SAM). ☆885 · Updated this week