jianzongwu / Awesome-Open-Vocabulary
(TPAMI 2024) A Survey on Open Vocabulary Learning
☆969 · Updated 8 months ago
Alternatives and similar repositories for Awesome-Open-Vocabulary
Users interested in Awesome-Open-Vocabulary are comparing it to the repositories listed below.
- A curated publication list of open-vocabulary semantic segmentation and related areas (e.g., zero-shot semantic segmentation). ☆785 · Updated last month
- A curated list of papers, datasets, and resources pertaining to open-vocabulary object detection. ☆385 · Updated 7 months ago
- [T-PAMI 2024] Transformer-Based Visual Segmentation: A Survey. ☆758 · Updated last year
- [ICCV 2023] Official implementation of the paper "A Simple Framework for Open-Vocabulary Segmentation and Detection". ☆741 · Updated last year
- This is the third-party implementation of the paper Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detectio… ☆755 · Updated 4 months ago
- Open-vocabulary Semantic Segmentation. ☆365 · Updated last year
- [ECCV 2024] The official code of the paper "Open-Vocabulary SAM". ☆1,026 · Updated 4 months ago
- Awesome OVD-OVS: A Survey on Open-Vocabulary Detection and Segmentation: Past, Present, and Future. ☆209 · Updated 8 months ago
- A curated list of papers and resources related to Described Object Detection, Open-Vocabulary/Open-World Object Detection, and Referring E… ☆336 · Updated last month
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception. ☆601 · Updated last year
- A collection of papers about Referring Image Segmentation. ☆793 · Updated last month
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want. ☆855 · Updated 4 months ago
- Official PyTorch implementation of "Extract Free Dense Labels from CLIP" (ECCV 2022 Oral). ☆466 · Updated 3 years ago
- Project page for "LISA: Reasoning Segmentation via Large Language Model". ☆2,510 · Updated 9 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆929 · Updated 4 months ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining". ☆798 · Updated last year
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆873 · Updated 9 months ago
- Adapting Meta AI's Segment Anything to downstream tasks with adapters and prompts. ☆1,360 · Updated last week
- A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP. ☆723 · Updated last week
- [Pattern Recognition 2025] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks. ☆453 · Updated 9 months ago
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,193 · Updated 2 years ago
- [ICLR 2024 & IJCV 2025] Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching. ☆538 · Updated last week
- ICCV 2023-2025 papers: discover cutting-edge research from ICCV 2023-25, the leading computer vision conference. Stay updated on the late… ☆964 · Updated last month
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning". ☆517 · Updated last year
- This is the official PyTorch implementation of the paper Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP. ☆743 · Updated 2 years ago
- Code release for the CVPR 2023 paper "Detecting Everything in the Open World: Towards Universal Object Detection". ☆587 · Updated 2 years ago
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions. ☆1,447 · Updated 6 months ago