lorebianchi98 / FG-CLIP
[CBMI2024 Best Paper] Official repository of the paper "Is CLIP the main roadblock for fine-grained open-world perception?".
☆28 · Updated 4 months ago
Alternatives and similar repositories for FG-CLIP
Users interested in FG-CLIP are comparing it to the repositories listed below:
- Official implementation of TagAlign ☆35 · Updated 9 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆59 · Updated last month
- [CVPR2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆103 · Updated 3 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆79 · Updated 10 months ago
- ☆32 · Updated 11 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆63 · Updated last year
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆42 · Updated 9 months ago
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆40 · Updated 5 months ago
- ☆23 · Updated 2 years ago
- [CVPR 2024] The official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding" ☆48 · Updated 3 months ago
- ☆119 · Updated last year
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆40 · Updated last year
- [CVPR2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory" ☆67 · Updated 11 months ago
- [CVPR2024 Highlight] Official repository of the paper "The devil is in the fine-grained details: Evaluating open-vocabulary object detect… ☆58 · Updated 5 months ago
- ☆32 · Updated last year
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆136 · Updated 4 months ago
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding" … ☆58 · Updated 10 months ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 9 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆82 · Updated 2 months ago
- ☆23 · Updated last year
- 🔥 [CVPR 2024] Official implementation of "See, Say, and Segment: Teaching LMMs to Overcome False Premises (SESAME)" ☆43 · Updated last year
- ☆91 · Updated last year
- Visual self-questioning for large vision-language assistant. ☆43 · Updated last month
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆31 · Updated 5 months ago
- Repository for the paper: Teaching VLMs to Localize Specific Objects from In-context Examples ☆30 · Updated 9 months ago
- Rui Qian, Xin Yin, Dejing Dou†: Reasoning to Attend: Try to Understand How <SEG> Token Works (CVPR 2025) ☆42 · Updated 3 weeks ago
- ☆58 · Updated 2 years ago
- Official code for paper "GRIT: Teaching MLLMs to Think with Images" ☆128 · Updated last month
- [ICCV 2023] ALIP: Adaptive Language-Image Pre-training with Synthetic Caption ☆98 · Updated 2 years ago
- [ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM ☆81 · Updated 10 months ago