CongpeiQiu / CLIPRefiner
[ICLR 2025] Code release of Refining CLIP's Spatial Awareness: A Visual-centric Perspective
⭐ 19 · Updated 6 months ago
Alternatives and similar repositories for CLIPRefiner
Users interested in CLIPRefiner are comparing it to the libraries listed below.
- A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP. ⭐ 696 · Updated 2 months ago
- This is a repository for organizing papers, codes, and other resources related to unified multimodal models. ⭐ 324 · Updated 3 weeks ago
- Project Page for "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" ⭐ 543 · Updated 3 months ago
- [AAAI 2025] AL-Ref-SAM 2: Unleashing the Temporal-Spatial Reasoning Capacity of GPT for Training-Free Audio and Language Referenced Video… ⭐ 89 · Updated 10 months ago
- Paper list on Video Moment Retrieval (VMR), also known as Temporal Video Grounding (TVG), Video Grounding (VG), or Temporal Sentence Grounding in Vi… ⭐ 23 · Updated 2 months ago
- ⭐ 77 · Updated last week
- ⭐ 16 · Updated 6 months ago
- ⭐ 16 · Updated 9 months ago
- A brief repo about paper research ⭐ 15 · Updated last year
- Official repository for VisionZip (CVPR 2025) ⭐ 368 · Updated 3 months ago
- [ECCV24] VISA: Reasoning Video Object Segmentation via Large Language Model ⭐ 194 · Updated last year
- Survey: https://arxiv.org/pdf/2507.20198 ⭐ 190 · Updated 2 weeks ago
- [TPAMI 2025] Towards Visual Grounding: A Survey ⭐ 252 · Updated 2 months ago
- [CVPR 2024] Official PyTorch Code for "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models" ⭐ 335 · Updated 2 months ago
- Code for Scaling Language-Free Visual Representation Learning (WebSSL) ⭐ 245 · Updated 6 months ago
- Official code for CVPR 2024 paper "SC-Tune: Unleashing Self-Consistent Referential Comprehension in Large Vision Language Models" ⭐ 16 · Updated last year
- This is a repository for organizing papers, codes and other resources related to unified multimodal models. ⭐ 730 · Updated 3 weeks ago
- CVPR 2025 Multimodal Large Language Models Paper List ⭐ 156 · Updated 7 months ago
- Universal Video Temporal Grounding with Generative Multi-modal Large Language Models ⭐ 31 · Updated 3 weeks ago
- CrossLMM: Decoupling Long Video Sequences from LMMs via Dual Cross-Attention Mechanisms ⭐ 24 · Updated 5 months ago
- Collections of Papers and Projects for Multimodal Reasoning. ⭐ 104 · Updated 6 months ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought… ⭐ 393 · Updated 10 months ago
- A curated list of publications on image and video segmentation leveraging Multimodal Large Language Models (MLLMs), highlighting state-of… ⭐ 151 · Updated last week
- A curated list of publications and resources on open-vocabulary semantic segmentation and related areas (e.g., zero-shot semantic segmentation). ⭐ 758 · Updated 2 weeks ago
- A curated list of papers and resources related to Described Object Detection, Open-Vocabulary/Open-World Object Detection and Referring E… ⭐ 327 · Updated 2 weeks ago
- [ICLR'25] Official code for the paper 'MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs' ⭐ 288 · Updated 6 months ago
- Awesome OVD-OVS - A Survey on Open-Vocabulary Detection and Segmentation: Past, Present, and Future ⭐ 203 · Updated 7 months ago
- ⭐ 40 · Updated 7 months ago
- [CVPR 2025 Highlight] Your Large Vision-Language Model Only Needs A Few Attention Heads For Visual Grounding ⭐ 43 · Updated 2 months ago
- [ICCV 2025] The official PyTorch implementation of "LLaVA-SP: Enhancing Visual Representation with Visual Spatial Tokens for MLLMs". ⭐ 20 · Updated last week