ExplainableML / sae-for-vlm
Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models
☆15 · Updated last month
Alternatives and similar repositories for sae-for-vlm
Users interested in sae-for-vlm are comparing it to the libraries listed below.
- Do Vision and Language Models Share Concepts? A Vector Space Alignment Study ☆14 · Updated 6 months ago
- [ICLR 2025] Official code repository for "TULIP: Token-length Upgraded CLIP" ☆26 · Updated 3 months ago
- Code for "Are “Hierarchical” Visual Representations Hierarchical?" in NeurIPS Workshop for Symmetry and Geometry in Neural Representation… ☆21 · Updated last year
- Official PyTorch implementation of "Interpreting the Second-Order Effects of Neurons in CLIP" ☆39 · Updated 6 months ago
- Holistic evaluation of multimodal foundation models ☆47 · Updated 9 months ago
- [NeurIPS 2023] Official repository for "Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models" ☆12 · Updated 11 months ago
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 · Updated 2 years ago
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Updated last year
- Official implementation of DiffCLIP: Differential Attention Meets CLIP ☆30 · Updated 2 months ago
- Official implementation of "Scaling Laws in Patchification: An Image Is Worth 50,176 Tokens And More" ☆23 · Updated 3 months ago
- ☆11 · Updated this week
- [NeurIPS'24] Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization ☆32 · Updated 8 months ago
- ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models (ICLR 2024, Official Implementation) ☆16 · Updated last year
- ☆10 · Updated 7 months ago
- Official code for "Does CLIP's Generalization Performance Mainly Stem from High Train-Test Similarity?" (ICLR 2024) ☆10 · Updated 9 months ago
- [NeurIPS 2024] RaVL: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models ☆22 · Updated 6 months ago
- ☆32 · Updated last year
- Official code for "Can We Talk Models Into Seeing the World Differently?" (ICLR 2025) ☆24 · Updated 4 months ago
- Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" (ICLR 2025) ☆73 · Updated last week
- Code and benchmark for "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆56 · Updated 5 months ago
- Official PyTorch implementation of CLIPPR ☆29 · Updated last year
- ☆37 · Updated 10 months ago
- ☆42 · Updated 6 months ago
- Official code release for "Diagnosing and Rectifying Vision Models using Language" (ICLR 2023) ☆33 · Updated last year
- ☆22 · Updated last year
- Distributed optimization infrastructure for learning CLIP models ☆26 · Updated 8 months ago
- [ICCV 2023] Official implementation of eP-ALM: Efficient Perceptual Augmentation of Language Models ☆27 · Updated last year
- Code for "CLIP Behaves like a Bag-of-Words Model Cross-modally but not Uni-modally" ☆12 · Updated 3 months ago
- Official code implementation for "Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Expl…" ☆12 · Updated 2 months ago
- [CVPR 2024 Highlight] OpenBias: Open-set Bias Detection in Text-to-Image Generative Models ☆23 · Updated 3 months ago