jim-berend / semanticlens
Mechanistic understanding and validation of large AI models with SemanticLens
☆50 · Updated last month
Alternatives and similar repositories for semanticlens
Users interested in semanticlens are comparing it to the repositories listed below.
- Layer-wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024] ☆218 · Updated 6 months ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆140 · Updated 2 weeks ago
- 👋 Overcomplete is a Vision-based SAE Toolbox ☆117 · Updated last month
- A toolkit for quantitative evaluation of data attribution methods. ☆54 · Updated 6 months ago
- [NeurIPS 2025 MechInterp Workshop - Spotlight] Official implementation of the paper "RelP: Faithful and Efficient Circuit Discovery in La… ☆24 · Updated 2 months ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆40 · Updated last year
- [NeurIPS 2024] Official implementation of the paper "MambaLRP: Explaining Selective State Space Sequence Models" 🐍 ☆45 · Updated last year
- ☆33 · Updated last year
- LENS Project ☆52 · Updated last year
- Repository for our NeurIPS 2022 paper "Concept Embedding Models", our NeurIPS 2023 paper "Learning to Receive Help", and our ICML 2025 pa… ☆72 · Updated this week
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. ☆239 · Updated 5 months ago
- Dataset and code for CLEVR-XAI. ☆33 · Updated 2 years ago
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Presented at MICCAI 2023. ☆20 · Updated 2 years ago
- PyTorch Explain: Interpretable Deep Learning in Python. ☆168 · Updated last year
- ☆27 · Updated 2 weeks ago
- Repository for PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits, accepted at CVPR 2024 XAI4CV Works… ☆19 · Updated last year
- ☆57 · Updated last year
- 🪄 Interpreto is an interpretability toolbox for LLMs ☆124 · Updated last week
- Large-scale uncertainty benchmark in deep learning. ☆64 · Updated 8 months ago
- [NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons. ☆19 · Updated this week
- Code for verifying the deep neural feature ansatz ☆21 · Updated 2 years ago
- A collection of resources and information for concrete skills that are helpful when pursuing a PhD in computer science (specifically in M… ☆22 · Updated 2 years ago
- ☆41 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆252 · Updated last year
- 👋 Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ☆71 · Updated 2 years ago
- Library that provides metrics to assess representation quality ☆20 · Updated 11 months ago
- Official code implementation of the paper "XAI for Transformers: Better Explanations through Conservative Propagation" ☆67 · Updated 3 years ago
- ☆25 · Updated 9 months ago
- Code for the paper "Post-hoc Concept Bottleneck Models". Spotlight @ ICLR 2023 ☆89 · Updated last year
- Resources for Machine Learning Explainability ☆87 · Updated last year
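Several of the repositories above (Zennit, MambaLRP, the LRP-for-LLMs/ViTs project) build on Layer-wise Relevance Propagation. As a rough orientation, and without reproducing any particular library's API, the core LRP epsilon rule for a single linear layer can be sketched in plain NumPy. The network, weights, and function names here are illustrative assumptions, not code from any listed repository:

```python
import numpy as np

def lrp_epsilon(weights, activations, relevance_out, eps=1e-6):
    """One LRP-epsilon backward step through a linear layer.

    weights:       (out, in) weight matrix
    activations:   (in,) input activations of this layer
    relevance_out: (out,) relevance arriving from the layer above
    """
    z = weights @ activations                   # forward pre-activations
    s = relevance_out / (z + eps * np.sign(z))  # stabilised relevance ratio
    return activations * (weights.T @ s)        # redistribute to the inputs

# Tiny two-layer ReLU network with random weights (for illustration only).
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
x = rng.normal(size=3)
a1 = np.maximum(W1 @ x, 0)
y = W2 @ a1

# Start relevance at the winning logit, then propagate back layer by layer.
R2 = y * (np.arange(2) == y.argmax())
R1 = lrp_epsilon(W2, a1, R2)
R0 = lrp_epsilon(W1, x, R1)
print(R0)  # per-input relevances; their sum stays close to R2.sum()
```

The epsilon term only stabilises near-zero pre-activations, so total relevance is approximately conserved across layers, which is the property most of the LRP toolkits above verify and exploit.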