KishoreP1 / DetailCLIP
Detail-Oriented CLIP for Fine-Grained Tasks (ICLR SSI-FM 2025)
☆52 · Updated 3 months ago
Alternatives and similar repositories for DetailCLIP
Users interested in DetailCLIP are comparing it to the repositories listed below
- [CVPR 2025] FLAIR: VLM with Fine-grained Language-informed Image Representations ☆86 · Updated 3 weeks ago
- [CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos ☆74 · Updated 3 months ago
- [CVPR 2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆97 · Updated last month
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding ☆43 · Updated 6 months ago
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆134 · Updated 2 months ago
- PyTorch implementation for the CVPR 2024 paper: Learn to Rectify the Bias of CLIP for Unsupervised Semantic Segmentation ☆48 · Updated this week
- [CVPR 2025 Highlight] Official PyTorch codebase for the paper "Assessing and Learning Alignment of Unimodal Vision and Language Models" ☆46 · Updated last month
- [NeurIPS 2023] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization ☆106 · Updated last year
- ☆58 · Updated 11 months ago
- [CVPR 2024] Improving language-visual pre-training efficiency by performing cluster-based masking on images ☆28 · Updated last year
- Official implementation of SCLIP: Rethinking Self-Attention for Dense Vision-Language Inference ☆157 · Updated 9 months ago
- [ECCV 2024] ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation ☆95 · Updated 3 months ago
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models ☆83 · Updated 11 months ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆35 · Updated last year
- ☆44 · Updated 4 months ago
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training ☆98 · Updated last year
- Official implementation of "Read-only Prompt Optimization for Vision-Language Few-shot Learning", ICCV 2023 ☆53 · Updated last year
- cliptrase ☆38 · Updated 10 months ago
- [CVPR 2025 Highlight] Your Large Vision-Language Model Only Needs A Few Attention Heads For Visual Grounding ☆17 · Updated 2 weeks ago
- A curated list of publications on image and video segmentation leveraging Multimodal Large Language Models (MLLMs), highlighting state-of… ☆105 · Updated this week
- [CVPR 2024] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models ☆76 · Updated last year
- [ICLR 2025] Text4Seg: Reimagining Image Segmentation as Text Generation ☆105 · Updated 3 months ago
- Task Residual for Tuning Vision-Language Models (CVPR 2023) ☆73 · Updated 2 years ago
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆109 · Updated this week
- FreeDA: Training-Free Open-Vocabulary Segmentation with Offline Diffusion-Augmented Prototype Generation (CVPR 2024) ☆44 · Updated 10 months ago
- [CVPR 2024] GSVA: Generalized Segmentation via Multimodal Large Language Models ☆137 · Updated 10 months ago
- [ECCV 2024] ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference ☆84 · Updated 3 months ago
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated 3 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆163 · Updated 9 months ago
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆64 · Updated 3 weeks ago