Heidelberg-NLP / CC-SHAP-VLM
Official code for the paper "Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Explanations?"
☆12 · Updated 3 months ago
Alternatives and similar repositories for CC-SHAP-VLM
Users interested in CC-SHAP-VLM are comparing it to the repositories listed below.
- Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" (ICLR 2025) ☆75 · Updated last month
- Official PyTorch implementation of "Interpreting the Second-Order Effects of Neurons in CLIP" ☆39 · Updated 8 months ago
- What do we learn from inverting CLIP models? ☆55 · Updated last year
- Sparse autoencoders for vision ☆37 · Updated 3 weeks ago
- ☆57 · Updated 8 months ago
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models" ☆42 · Updated 8 months ago
- Intriguing Properties of Data Attribution on Diffusion Models (ICLR 2024) ☆31 · Updated last year
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Updated last year
- Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning" ☆54 · Updated last year
- [CVPR 2025] MicroVQA eval and 🤖RefineBot code for "MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research"… ☆21 · Updated last week
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆38 · Updated 8 months ago
- Official repository for the ICCV 2023 paper: "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts… ☆57 · Updated 2 years ago
- ☆22 · Updated last year
- Official code for the paper "Does CLIP's Generalization Performance Mainly Stem from High Train-Test Similarity?" (ICLR 2024) ☆10 · Updated 10 months ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆82 · Updated last year
- Code for "CLIP Behaves like a Bag-of-Words Model Cross-modally but not Uni-modally" ☆13 · Updated 5 months ago
- Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models ☆24 · Updated 3 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆86 · Updated last year
- (ICML 2023) Discover and Cure: Concept-aware Mitigation of Spurious Correlation ☆41 · Updated last year
- ☆38 · Updated 11 months ago
- Official code for the ACL 2023 Outstanding Paper: World-to-Words: Grounded Open Vocabulary Acquisition through Fast Mapping in Vision-Languag… ☆32 · Updated last year
- Code for "Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality" (EMNLP 2022) ☆30 · Updated 2 years ago
- Official repo of Progressive Data Expansion: data, code and evaluation ☆29 · Updated last year
- Code for the EMNLP 2024 paper: How do Large Language Models Learn In-Context? Query and Key Matrices of In-Context Heads are Two Towers for M… ☆13 · Updated 8 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" ☆102 · Updated 2 years ago
- Official code for the ICML 2024 paper "The Entropy Enigma: Success and Failure of Entropy Minimization" ☆53 · Updated last year
- Official implementation of "Automated Generation of Challenging Multiple-Choice Questions for Vision Language Model Evaluation" (CVPR 202… ☆32 · Updated last month
- [ICML 2024] A novel automated neuron explanation framework that can accurately describe poly-semantic concepts in deep neural networks ☆13 · Updated 2 months ago
- Code for "Debiasing Vision-Language Models via Biased Prompts" ☆56 · Updated 2 years ago
- Holistic evaluation of multimodal foundation models ☆48 · Updated 11 months ago