om-ai-lab / GroundVLP
GroundVLP: Harnessing Zero-shot Visual Grounding from Vision-Language Pre-training and Open-Vocabulary Object Detection (AAAI 2024)
☆67 Updated last year
Alternatives and similar repositories for GroundVLP
Users that are interested in GroundVLP are comparing it to the libraries listed below
- [TMM 2023] Self-paced Curriculum Adapting of CLIP for Visual Grounding. ☆123 Updated 4 months ago
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training ☆91 Updated last year
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆90 Updated 4 months ago
- The official implementation of RAR ☆88 Updated last year
- [ECCV 2024] SegVG: Transferring Object Bounding Box to Segmentation for Visual Grounding ☆57 Updated 7 months ago
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". ☆53 Updated 6 months ago
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆34 Updated 5 months ago
- Official Codes for Fine-Grained Visual Prompting, NeurIPS 2023 ☆52 Updated last year
- [AAAI2024] Code Release of CLIM: Contrastive Language-Image Mosaic for Region Representation ☆29 Updated last year
- This repo holds the official code and data for "Unveiling Parts Beyond Objects: Towards Finer-Granularity Referring Expression Segmentation" ☆70 Updated last year
- ☆61 Updated last month
- [ACM MM 2024] Hierarchical Multimodal Fine-grained Modulation for Visual Grounding. ☆50 Updated last month
- [CVPR 2024] Official Code for the Paper "Compositional Chain-of-Thought Prompting for Large Multimodal Models" ☆126 Updated 11 months ago
- ☆84 Updated last year
- Repository of paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models ☆37 Updated last year
- [NeurIPS2024] Repo for the paper `ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models' ☆172 Updated this week
- 【NeurIPS 2024】Dense Connector for MLLMs ☆165 Updated 7 months ago
- [BMVC 2023] Zero-shot Composed Text-Image Retrieval ☆54 Updated 6 months ago
- Object-Aware Distillation Pyramid for Open-Vocabulary Object Detection ☆61 Updated 3 months ago
- Code for the paper: "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23] ☆101 Updated last year
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆156 Updated 8 months ago
- [ICLR2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆74 Updated 4 months ago
- [CVPR 2025] DynRefer: Delving into Region-level Multimodal Tasks via Dynamic Resolution ☆50 Updated 2 months ago
- [CVPR2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆90 Updated this week
- [Paper][AAAI2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations ☆139 Updated 11 months ago
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆41 Updated 2 months ago
- Official repo of Griffon series including v1 (ECCV 2024), v2, and G ☆212 Updated last week
- [NeurIPS 2024] OneRef: Unified One-tower Expression Grounding and Segmentation with Mask Referring Modeling. ☆20 Updated 3 months ago
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge ☆141 Updated 10 months ago
- The official repo for "Ref-AVS: Refer and Segment Objects in Audio-Visual Scenes", ECCV 2024 ☆41 Updated 5 months ago