XIAO4579 / Vlm-interpretability
Official implementation for the paper "Towards Understanding How Knowledge Evolves in Large Vision-Language Models"
☆14 · Updated last month
Alternatives and similar repositories for Vlm-interpretability
Users interested in Vlm-interpretability are comparing it to the libraries listed below.
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆30 · Updated 3 months ago
- cliptrase ☆36 · Updated 9 months ago
- [CVPR 2025] Few-shot Recognition via Stage-Wise Retrieval-Augmented Finetuning ☆18 · Updated 2 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆65 · Updated last month
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated 2 months ago
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆34 · Updated 10 months ago
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆48 · Updated 10 months ago
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding ☆43 · Updated 4 months ago
- PyTorch implementation for the CVPR 2024 paper: Learn to Rectify the Bias of CLIP for Unsupervised Semantic Segmentation ☆43 · Updated last month
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆174 · Updated last week
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆88 · Updated 7 months ago
- ☆84 · Updated 2 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆58 · Updated 2 months ago
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images. ☆28 · Updated last year
- [CVPR 2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆91 · Updated last week
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆53 · Updated 7 months ago
- Code for CVPR 2025 "MMRL: Multi-Modal Representation Learning for Vision-Language Models" and its extension "MMRL++: Parameter-Efficient a…" ☆42 · Updated 2 weeks ago
- [CVPR 2025] Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Att… ☆18 · Updated 3 months ago
- Rui Qian, Xin Yin, Dejing Dou†: Reasoning to Attend: Try to Understand How <SEG> Token Works (CVPR 2025) ☆32 · Updated last month
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆34 · Updated last year
- ☆22 · Updated last year
- [CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos ☆66 · Updated last month
- [NeurIPS 2024] Code for Dual Prototype Evolving for Test-Time Generalization of Vision-Language Models ☆41 · Updated 2 months ago
- Detail-Oriented CLIP for Fine-Grained Tasks (ICLR SSI-FM 2025) ☆49 · Updated 2 months ago
- FreeDA: Training-Free Open-Vocabulary Segmentation with Offline Diffusion-Augmented Prototype Generation (CVPR 2024) ☆45 · Updated 9 months ago
- ☆12 · Updated this week
- Official Implementation of "Read-only Prompt Optimization for Vision-Language Few-shot Learning", ICCV 2023 ☆53 · Updated last year
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models. ☆80 · Updated 10 months ago
- 🔥 [CVPR 2024] Official implementation of "See, Say, and Segment: Teaching LMMs to Overcome False Premises (SESAME)" ☆40 · Updated 11 months ago
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆64 · Updated this week