ExplainableML / sae-for-vlm
Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models
☆24 · Updated 3 months ago
Alternatives and similar repositories for sae-for-vlm
Users interested in sae-for-vlm are comparing it to the repositories listed below.
- Do Vision and Language Models Share Concepts? A Vector Space Alignment Study ☆16 · Updated 8 months ago
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Updated last year
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS 2024] ☆57 · Updated 7 months ago
- Symmetrical Visual Contrastive Optimization: Aligning Vision-Language Models with Minimal Contrastive Images ☆12 · Updated 2 months ago
- Official PyTorch implementation of "Interpreting the Second-Order Effects of Neurons in CLIP" ☆40 · Updated 8 months ago
- Code for "Are “Hierarchical” Visual Representations Hierarchical?" in the NeurIPS Workshop for Symmetry and Geometry in Neural Representation… ☆21 · Updated last year
- ☆62 · Updated 9 months ago
- Official code for the paper "Does CLIP's Generalization Performance Mainly Stem from High Train-Test Similarity?" (ICLR 2024) ☆10 · Updated 11 months ago
- Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" (ICLR 2025) ☆79 · Updated 2 months ago
- Official implementation of "Automated Generation of Challenging Multiple-Choice Questions for Vision Language Model Evaluation" (CVPR 202… ☆32 · Updated 2 months ago
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models" ☆42 · Updated 9 months ago
- Holistic evaluation of multimodal foundation models ☆48 · Updated 11 months ago
- [NeurIPS 2023] Official repository for "Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models" ☆12 · Updated last year
- Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning" ☆56 · Updated last year
- ☆34 · Updated last year
- [ACL 2025] Unsolvable Problem Detection: Robust Understanding Evaluation for Large Multimodal Models ☆77 · Updated 2 months ago
- Code for "CLIP Behaves like a Bag-of-Words Model Cross-modally but not Uni-modally" ☆14 · Updated 5 months ago
- Public code repo for the EMNLP 2024 Findings paper "MACAROON: Training Vision-Language Models To Be Your Engaged Partners" ☆14 · Updated 10 months ago
- ☆17 · Updated last year
- Official implementation of MAIA, A Multimodal Automated Interpretability Agent ☆83 · Updated last month
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆86 · Updated last year
- FuseLIP: Multimodal Embeddings via Early Fusion of Discrete Tokens ☆13 · Updated 2 months ago
- [ICLR 2025] Official code repository for "TULIP: Token-length Upgraded CLIP" ☆29 · Updated 5 months ago
- Official PyTorch implementation of "Vision-Language Models Create Cross-Modal Task Representations" (ICML 2025) ☆30 · Updated 3 months ago
- Official code implementation for the paper "Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Expl… ☆12 · Updated 4 months ago
- Official code for the ICML 2024 paper "The Entropy Enigma: Success and Failure of Entropy Minimization" ☆53 · Updated last year
- ☆23 · Updated 3 months ago
- [Technical Report] Official PyTorch implementation code for the technical part of Phantom of Latent representing equipped with … ☆60 · Updated 10 months ago
- LCA-on-the-line (ICML 2024 Oral) ☆12 · Updated 5 months ago
- ☆17 · Updated 8 months ago