liunian-harold-li / DesCo
☆30 · Updated last year
Alternatives and similar repositories for DesCo
Users who are interested in DesCo are comparing it to the repositories listed below.
- Repository of the paper "Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models" · ☆37 · Updated last year
- ☆83 · Updated 3 years ago
- A Python toolkit for the OmniLabel benchmark providing code for evaluation and visualization · ☆21 · Updated 4 months ago
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) · ☆58 · Updated 4 months ago
- Source code for the EMNLP 2022 paper "PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models" · ☆48 · Updated 2 years ago
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision · ☆41 · Updated 2 months ago
- SeqTR: A Simple yet Universal Network for Visual Grounding · ☆137 · Updated 7 months ago
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) · ☆87 · Updated last year
- A PyTorch implementation of Open Vocabulary Object Detection with Pseudo Bounding-Box Labels · ☆60 · Updated 2 years ago
- Official implementation for the paper "Referring Transformer: A One-step Approach to Multi-task Visual Grounding", NeurIPS 2021 · ☆67 · Updated 3 years ago
- A detection/segmentation dataset with labels characterized by intricate and flexible expressions. "Described Object Detection: Liberating… · ☆123 · Updated last year
- ☆64 · Updated last year
- PyTorch implementation of the ICML 2023 paper "SegCLIP: Patch Aggregation with Learnable Centers for Open-Vocabulary Semantic Segmentation" · ☆92 · Updated last year
- (ICCV 2023) Betrayed by Captions: Joint Caption Grounding and Generation for Open Vocabulary Instance Segmentation · ☆47 · Updated 10 months ago
- ☆91 · Updated last year
- [CVPR 2023] The code for "Position-guided Text Prompt for Vision-Language Pre-training" · ☆152 · Updated last year
- [ICCV 2023] Official code for "VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control" · ☆53 · Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" · ☆58 · Updated last year
- Code for the paper "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23] · ☆101 · Updated last year
- [CVPR 2022] Pseudo-Q: Generating Pseudo Language Queries for Visual Grounding · ☆149 · Updated 10 months ago
- ☆35 · Updated last year
- Distilling Large Vision-Language Model with Out-of-Distribution Generalizability (ICCV 2023) · ☆58 · Updated last year
- This is the official repository for the paper "Visually-Prompted Language Model for Fine-Grained Scene Graph Generation in an Open World"… · ☆47 · Updated last year
- Official PyTorch codebase for Open-Vocabulary Instance Segmentation without Manual Mask Annotations [CVPR 2023] · ☆50 · Updated 5 months ago
- [CVPR 2023] The official dataset of Advancing Visual Grounding with Scene Knowledge: Benchmark and Method · ☆30 · Updated last year
- A lightweight codebase for referring expression comprehension and segmentation · ☆55 · Updated 3 years ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning · ☆75 · Updated 7 months ago
- [ICCV 2023] Prompt-aligned Gradient for Prompt Tuning · ☆164 · Updated last year
- NegCLIP · ☆32 · Updated 2 years ago
- [NeurIPS 2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?" · ☆173 · Updated last year