ExplainableML / sae-for-vlm
Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models
☆32 · Updated 4 months ago
Alternatives and similar repositories for sae-for-vlm
Users interested in sae-for-vlm are comparing it to the repositories listed below.
- Do Vision and Language Models Share Concepts? A Vector Space Alignment Study ☆16 · Updated 9 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆87 · Updated last year
- Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" (ICLR '25) ☆80 · Updated 3 months ago
- Official PyTorch implementation of "Interpreting the Second-Order Effects of Neurons in CLIP" ☆40 · Updated 9 months ago
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Updated last year
- Official implementation of "Automated Generation of Challenging Multiple-Choice Questions for Vision Language Model Evaluation" (CVPR 202… ☆33 · Updated 3 months ago
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆57 · Updated 8 months ago
- ☆67 · Updated 9 months ago
- Symmetrical Visual Contrastive Optimization: Aligning Vision-Language Models with Minimal Contrastive Images ☆14 · Updated 2 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆79 · Updated last year
- XL-VLMs: General Repository for eXplainable Large Vision Language Models ☆29 · Updated 2 weeks ago
- Official code for the paper "Does CLIP's Generalization Performance Mainly Stem from High Train-Test Similarity?" (ICLR 2024) ☆10 · Updated last year
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆65 · Updated 2 months ago
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models" ☆43 · Updated 10 months ago
- Code and datasets for "What's "up" with vision-language models? Investigating their struggle with spatial reasoning" ☆57 · Updated last year
- [ACL 2025] Unsolvable Problem Detection: Robust Understanding Evaluation for Large Multimodal Models ☆77 · Updated 3 months ago
- Holistic evaluation of multimodal foundation models ☆48 · Updated last year
- ☆43 · Updated 9 months ago
- ☆34 · Updated last year
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆70 · Updated last year
- [NAACL 2024] Vision-language model that reduces hallucinations through self-feedback-guided revision. Visualizes attention on image feat… ☆46 · Updated last year
- Official implementation of "Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data" (ICLR 2024) ☆33 · Updated 10 months ago
- Preference Learning for LLaVA ☆48 · Updated 9 months ago
- [ACM Multimedia 2025] Official repo for Debiasing Large Visual Language Models, including a post-hoc debiasing method and Visual… ☆82 · Updated 6 months ago
- ☆96 · Updated 5 months ago
- [NeurIPS 2023] Official repository for "Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models" ☆12 · Updated last year
- [ACL 2025 Findings] Official PyTorch implementation of "Don't Miss the Forest for the Trees: Attentional Vision Calibration for Large Vis… ☆18 · Updated last year
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆64 · Updated 6 months ago
- Official repo for the FoodieQA paper (EMNLP 2024) ☆16 · Updated 2 months ago
- Official code and dataset for our NAACL 2024 paper: DialogCC: An Automated Pipeline for Creating High-Quality Multi-modal Dialogue Datase… ☆13 · Updated last year