jianzongwu / Awesome-Open-Vocabulary
(TPAMI 2024) A Survey on Open Vocabulary Learning
☆938 · Updated 3 months ago
Alternatives and similar repositories for Awesome-Open-Vocabulary
Users interested in Awesome-Open-Vocabulary are comparing it to the repositories listed below
- A curated list of publications and resources on open vocabulary semantic segmentation and related areas (e.g., zero-shot semantic segmentation). ☆668 · Updated 3 months ago
- A curated list of papers, datasets and resources pertaining to open vocabulary object detection. ☆335 · Updated 2 months ago
- [T-PAMI-2024] Transformer-Based Visual Segmentation: A Survey ☆745 · Updated 10 months ago
- [ICCV 2023] Official implementation of the paper "A Simple Framework for Open-Vocabulary Segmentation and Detection" ☆719 · Updated last year
- Open-vocabulary Semantic Segmentation ☆349 · Updated 9 months ago
- Awesome OVD-OVS - A Survey on Open-Vocabulary Detection and Segmentation: Past, Present, and Future ☆187 · Updated 3 months ago
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,296 · Updated 5 months ago
- Third-party implementation of the paper Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detectio… ☆638 · Updated last year
- ☆522 · Updated 8 months ago
- A curated list of papers and resources related to Described Object Detection, Open-Vocabulary/Open-World Object Detection and Referring E… ☆296 · Updated last week
- ICCV 2023 Papers: Discover cutting-edge research from ICCV 2023, the leading computer vision conference. Stay updated on the latest in co… ☆953 · Updated 10 months ago
- Adapting Meta AI's Segment Anything to Downstream Tasks with Adapters and Prompts ☆1,214 · Updated 6 months ago
- [Pattern Recognition 25] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆430 · Updated 4 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆831 · Updated last month
- [ECCV 2024] The official code of the paper "Open-Vocabulary SAM". ☆979 · Updated 11 months ago
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception ☆574 · Updated last year
- A collection of papers about Referring Image Segmentation. ☆732 · Updated last week
- Official PyTorch implementation of "Extract Free Dense Labels from CLIP" (ECCV 22 Oral) ☆452 · Updated 2 years ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆893 · Updated last month
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" ☆481 · Updated last year
- A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP. ☆611 · Updated 2 weeks ago
- This repository is for the first comprehensive survey on Meta AI's Segment Anything Model (SAM). ☆977 · Updated this week
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆868 · Updated 4 months ago
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions ☆1,384 · Updated last month
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation … ☆472 · Updated 3 months ago
- Code release for our CVPR 2023 paper "Detecting Everything in the Open World: Towards Universal Object Detection". ☆577 · Updated 2 years ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆774 · Updated last year
- Official PyTorch implementation of the paper Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP. ☆727 · Updated last year
- [ECCV 2024] Tokenize Anything via Prompting ☆585 · Updated 7 months ago
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,135 · Updated last year