ExplainableML / sae-for-vlm
Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models
☆21 · Updated 3 months ago
Alternatives and similar repositories for sae-for-vlm
Users interested in sae-for-vlm are comparing it to the repositories listed below.
- Do Vision and Language Models Share Concepts? A Vector Space Alignment Study ☆15 · Updated 7 months ago
- [NeurIPS 2023] Official repository for "Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models" ☆12 · Updated last year
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Updated last year
- Code for "Are “Hierarchical” Visual Representations Hierarchical?" in the NeurIPS Workshop for Symmetry and Geometry in Neural Representation… ☆21 · Updated last year
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS 2024] ☆57 · Updated 7 months ago
- Official PyTorch implementation of "Interpreting the Second-Order Effects of Neurons in CLIP" ☆39 · Updated 8 months ago
- (untitled) ☆17 · Updated 7 months ago
- Code for "CLIP Behaves like a Bag-of-Words Model Cross-modally but not Uni-modally" ☆13 · Updated 5 months ago
- Official code for the paper "Does CLIP's Generalization Performance Mainly Stem from High Train-Test Similarity?" (ICLR 2024) ☆10 · Updated 10 months ago
- Official implementation of "Automated Generation of Challenging Multiple-Choice Questions for Vision Language Model Evaluation" (CVPR 202… ☆32 · Updated last month
- Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" (ICLR 2025) ☆75 · Updated last month
- [NeurIPS 2024] RaVL: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models ☆24 · Updated 8 months ago
- (untitled) ☆57 · Updated 8 months ago
- [CVPRW 2025] Official repository of the paper "Towards Evaluating the Robustness of Visual State Space Models" ☆24 · Updated last month
- [ACL 2025] Unsolvable Problem Detection: Robust Understanding Evaluation for Large Multimodal Models ☆77 · Updated last month
- [ICLR 2025] Official code repository for "TULIP: Token-length Upgraded CLIP" ☆27 · Updated 5 months ago
- (untitled) ☆33 · Updated last year
- Official PyTorch implementation of CLIPPR ☆29 · Updated last year
- [CVPR 2025] MicroVQA eval and 🤖RefineBot code for "MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research"… ☆21 · Updated last week
- Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? ☆14 · Updated last month
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models" (coming soon) ☆11 · Updated 2 years ago
- Official codebase for the NeurIPS 2023 paper "Towards Last-Layer Retraining for Group Robustness with Fewer Annotations" (https://arxiv.or…) ☆11 · Updated last year
- Holistic evaluation of multimodal foundation models ☆48 · Updated 11 months ago
- [ICML 2025] Code for "R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts" ☆15 · Updated 4 months ago
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models" ☆42 · Updated 8 months ago
- [ICML 2024] A novel automated neuron-explanation framework that can accurately describe polysemantic concepts in deep neural networks ☆13 · Updated 2 months ago
- Official code for "Can We Talk Models Into Seeing the World Differently?" (ICLR 2025) ☆26 · Updated 5 months ago
- (untitled) ☆11 · Updated 8 months ago
- Official implementation of DiffCLIP: Differential Attention Meets CLIP ☆36 · Updated 4 months ago
- Code and datasets for "What's “up” with vision-language models? Investigating their struggle with spatial reasoning" ☆54 · Updated last year