OSU-NLP-Group / saev
Sparse autoencoders for vision
☆47 · Updated last week
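saev trains sparse autoencoders (SAEs) on a vision model's activations: activations are encoded into a wide, mostly-zero latent layer and decoded back, so individual latents tend to pick out interpretable features. As a rough orientation for readers comparing these repositories, here is a minimal PyTorch sketch of that core idea; the class name, dimensions, and L1 coefficient are illustrative assumptions, not saev's actual code or API.

```python
# Minimal sparse-autoencoder sketch (illustrative only; not saev's actual API).
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 768, d_sae: int = 16384):
        super().__init__()
        self.enc = nn.Linear(d_model, d_sae)  # activation -> sparse code
        self.dec = nn.Linear(d_sae, d_model)  # sparse code -> reconstruction

    def forward(self, x: torch.Tensor):
        z = torch.relu(self.enc(x))           # non-negative, mostly-zero latents
        x_hat = self.dec(z)
        return x_hat, z

def sae_loss(x, x_hat, z, l1_coeff: float = 5e-4):
    # Reconstruction error plus an L1 penalty that pushes latents toward zero.
    recon = (x - x_hat).pow(2).mean()
    sparsity = z.abs().sum(dim=-1).mean()
    return recon + l1_coeff * sparsity

# Usage: x stands in for frozen vision-model activations, e.g. ViT patch
# embeddings of shape (batch, d_model).
sae = SparseAutoencoder()
x = torch.randn(32, 768)
x_hat, z = sae(x)
loss = sae_loss(x, x_hat, z)
loss.backward()
```

In practice the backbone stays frozen, only the SAE is trained, and individual latents in z are then inspected as candidate interpretable features.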
Alternatives and similar repositories for saev
Users interested in saev are comparing it to the libraries listed below.
- Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" (ICLR '25) ☆86 · Updated 4 months ago
- ☆69 · Updated 11 months ago
- [ICCV 2025] Auto Interpretation Pipeline and many other functionalities for Multimodal SAE Analysis. ☆158 · Updated 3 weeks ago
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆76 · Updated 3 weeks ago
- 🔥 [ICLR 2025] Official PyTorch Model "Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark" ☆20 · Updated 8 months ago
- Official PyTorch implementation of "Interpreting the Second-Order Effects of Neurons in CLIP" ☆39 · Updated 11 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆80 · Updated last year
- Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning". ☆62 · Updated last year
- Official implementation of "Automated Generation of Challenging Multiple-Choice Questions for Vision Language Model Evaluation" (CVPR 202… ☆37 · Updated 4 months ago
- Official PyTorch implementation of "Vision-Language Models Create Cross-Modal Task Representations" (ICML 2025) ☆31 · Updated 5 months ago
- Matryoshka Multimodal Models ☆111 · Updated 8 months ago
- ☆57 · Updated 2 months ago
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? ☆38 · Updated 11 months ago
- ☆40 · Updated last year
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models." ☆46 · Updated last year
- ☆48 · Updated last year
- Official implementation of MAIA, a Multimodal Automated Interpretability Agent ☆91 · Updated 3 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆147 · Updated 3 weeks ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆88 · Updated last year
- [ICLR 2025] Source code for paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr… ☆77 · Updated 10 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension. ☆70 · Updated last year
- GitHub repository for "Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas" (ICML 2025) ☆47 · Updated 5 months ago
- Preference Learning for LLaVA ☆51 · Updated 11 months ago
- [NAACL 2024] Vision language model that reduces hallucinations through self-feedback guided revision. Visualizes attentions on image feat… ☆46 · Updated last year
- Official implementation of "Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data" (ICLR 2024) ☆34 · Updated last year
- VLM2-Bench [ACL 2025 Main]: A Closer Look at How Well VLMs Implicitly Link Explicit Matching Visual Cues ☆42 · Updated 4 months ago
- [NeurIPS 2024] Official Repository of Multi-Object Hallucination in Vision-Language Models ☆31 · Updated 11 months ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆86 · Updated last year
- Do Vision and Language Models Share Concepts? A Vector Space Alignment Study ☆16 · Updated 10 months ago
- ☆103 · Updated 6 months ago