ExplainableML / sae-for-vlm
Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models
☆11 · Updated 2 weeks ago
Alternatives and similar repositories for sae-for-vlm:
Users interested in sae-for-vlm are comparing it to the repositories listed below.
- Code for "Are “Hierarchical” Visual Representations Hierarchical?" in NeurIPS Workshop for Symmetry and Geometry in Neural Representation…☆20Updated last year
- Do Vision and Language Models Share Concepts? A Vector Space Alignment Study☆14Updated 5 months ago
- Code for "CLIP Behaves like a Bag-of-Words Model Cross-modally but not Uni-modally"☆11Updated 2 months ago
- [NeurIPS 2023] Official repository for "Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models"☆12Updated 10 months ago
- Official code for the paper "Does CLIP's Generalization Performance Mainly Stem from High Train-Test Similarity?" (ICLR 2024)☆11Updated 8 months ago
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon!☆11Updated last year
- [NeurIPS 2024] RaVL: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models☆20Updated 5 months ago
- Official Implementation of DiffCLIP: Differential Attention Meets CLIP☆26Updated last month
- Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs?☆13Updated 3 months ago
- ☆17Updated 9 months ago
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions☆16Updated last year
- [NeurIPS'24] Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization☆30Updated 7 months ago
- ☆10Updated 6 months ago
- Code and benchmark for the paper: "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24]☆54Updated 4 months ago
- Official pytorch implementation of "Interpreting the Second-Order Effects of Neurons in CLIP"☆39Updated 5 months ago
- ☆22Updated 11 months ago
- Code for "R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts"☆15Updated last month
- Official PyTorch implementation for NeurIPS'24 paper "Knowledge Composition using Task Vectors with Learned Anisotropic Scaling"☆19Updated 2 months ago
- [ICLR 2025] Official code repository for "TULIP: Token-length Upgraded CLIP"☆25Updated 2 months ago
- An official PyTorch implementation for CLIPPR☆29Updated last year
- ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models (ICLR 2024, Official Implementation)☆16Updated last year
- Official implementation of "Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data" (ICLR 2024)☆30Updated 6 months ago
- [ECCV’24] Official repository for "BEAF: Observing Before-AFter Changes to Evaluate Hallucination in Vision-language Models"☆19Updated last month
- Official repo of Progressive Data Expansion: data, code and evaluation☆28Updated last year
- ☆11Updated last month
- Sapsucker Woods 60 Audiovisual Dataset☆15Updated 2 years ago
- ☆41Updated 5 months ago
- Code for T-MARS data filtering☆35Updated last year
- [EMNLP 2024] Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality☆16Updated 6 months ago
- ☆22Updated 3 months ago