Raphoo / DCSM_Ideal_CLIP
Code for "Is CLIP ideal? No. Can we fix it? Yes!"
☆39 · Updated 2 months ago
Alternatives and similar repositories for DCSM_Ideal_CLIP
Users interested in DCSM_Ideal_CLIP are comparing it to the libraries listed below.
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆79 · Updated last year
- Repository for the paper: Teaching VLMs to Localize Specific Objects from In-context Examples ☆38 · Updated 11 months ago
- Code implementation of our ICCV 2025 paper: On Large Multimodal Models as Open-World Image Classifiers ☆24 · Updated this week
- Code and data setup for the paper "Are Diffusion Models Vision-and-language Reasoners?" ☆33 · Updated last year
- ☆23 · Updated 2 years ago
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆45 · Updated 11 months ago
- [CVPR 2024] Code for "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory" ☆67 · Updated last year
- Test-Time Training on Video Streams ☆64 · Updated 2 years ago
- Emergent Visual Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated 3 weeks ago
- Distilling Large Vision-Language Model with Out-of-Distribution Generalizability (ICCV 2023) ☆59 · Updated last year
- ✨ A curated list of papers on uncertainty in multi-modal large language models (MLLMs). ☆54 · Updated 7 months ago
- This repository houses the code for the paper "The Neglected Tails of VLMs" ☆29 · Updated 6 months ago
- This repo contains the official implementation of ICLR 2024 paper "Is ImageNet worth 1 video? Learning strong image encoders from 1 long …" ☆93 · Updated last year
- [CVPR 2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆105 · Updated 5 months ago
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆53 · Updated 7 months ago
- Official code repo of PIN: Positional Insert Unlocks Object Localisation Abilities in VLMs ☆26 · Updated 10 months ago
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images. ☆29 · Updated last year
- Visual self-questioning for large vision-language assistants. ☆45 · Updated 3 months ago
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval ☆41 · Updated 7 months ago
- [ECCV 2024] Official PyTorch implementation of In Defense of Lazy Visual Grounding for Open-Vocabulary Semantic Segmentation ☆47 · Updated last year
- [CVPR 2025] Video-Panda: Parameter-efficient Alignment for Encoder-free Video-Language Models ☆75 · Updated 4 months ago
- DetToolChain: A new prompting paradigm to unleash detection ability of MLLM ☆43 · Updated last year
- [CVPR 2024] Official Implementation of GEM (Grounding Everything Module) ☆132 · Updated 7 months ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆70 · Updated 2 months ago
- Awesome Vision-Language Compositionality: a comprehensive curation of research papers from the literature. ☆30 · Updated 9 months ago
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆84 · Updated last year
- [NeurIPS 2024 Spotlight] TOPA: Extend Large Language Models for Video Understanding via Text-Only Pre-Alignment ☆31 · Updated last year
- [ECCV 2024] OpenPSG: Open-set Panoptic Scene Graph Generation via Large Multimodal Models ☆49 · Updated 10 months ago
- [NeurIPS 2024] SpatialEval: a benchmark to evaluate spatial reasoning abilities of MLLMs and LLMs ☆55 · Updated 9 months ago
- [NeurIPS 2023] Open-set visual object query search & localization in long-form videos ☆25 · Updated last year