jim-berend / semanticlens
Mechanistic understanding and validation of large AI models with SemanticLens
☆37 · Updated 3 weeks ago
Alternatives and similar repositories for semanticlens
Users interested in semanticlens are comparing it to the libraries listed below.
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆133 · Updated last year
- Layer-wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024] ☆192 · Updated 3 months ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆39 · Updated last year
- 👋 Overcomplete is a Vision-based SAE Toolbox ☆90 · Updated 2 months ago
- A toolkit for quantitative evaluation of data attribution methods. ☆53 · Updated 2 months ago
- [NeurIPS 2024] Official implementation of the paper "MambaLRP: Explaining Selective State Space Sequence Models" 🐍 ☆45 · Updated 11 months ago
- LENS Project ☆50 · Updated last year
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. ☆233 · Updated 2 months ago
- [NeurIPS 2024] Code for the paper: B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable. ☆35 · Updated 4 months ago
- Dataset and code for the CLEVR-XAI dataset. ☆32 · Updated 2 years ago
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at MICCAI 2023. ☆20 · Updated last year
- 👋 Code for: "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ☆67 · Updated 2 years ago
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… ☆70 · Updated this week
- Codebase for information-theoretic Shapley values to explain predictive uncertainty. This repo contains the code related to the paper Watso… ☆21 · Updated last year
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch (see the epsilon-rule sketch after this list). ☆96 · Updated 2 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆248 · Updated last year
- ☆18 · Updated 2 years ago
- Concept Relevance Propagation for Localization Models, accepted at the SAIAD workshop at CVPR 2023. ☆15 · Updated last year
- PyTorch Explain: Interpretable Deep Learning in Python. ☆163 · Updated last year
- 👋 Aligning Human & Machine Vision using explainability ☆52 · Updated 2 years ago
- ☆32 · Updated 10 months ago
- Official Code Implementation of the paper: XAI for Transformers: Better Explanations through Conservative Propagation ☆65 · Updated 3 years ago
- Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers. Paper accepted at the eXCV workshop of ECCV 2… ☆29 · Updated 9 months ago
- Code for the paper: Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery. ECCV 2024. ☆49 · Updated 11 months ago
- Code for verifying deep neural feature ansatz ☆20 · Updated 2 years ago
- Code for the paper "Post-hoc Concept Bottleneck Models". Spotlight @ ICLR 2023 ☆84 · Updated last year
- Official implementation of the paper "RelP: Faithful and Efficient Circuit Discovery via Relevance Patching" ☆15 · Updated last month
- Large-scale uncertainty benchmark in deep learning. ☆63 · Updated 5 months ago
- [NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons. ☆18 · Updated 3 months ago
- [ICLR 23] A new framework to transform any neural network into an interpretable concept-bottleneck model (CBM) without needing labeled c… ☆113 · Updated last year
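Several of the repositories above implement or build on Layer-wise Relevance Propagation (LRP). For orientation, here is a minimal sketch of the LRP epsilon rule in plain PyTorch. It does not use the API of any listed library; the toy model, the function name `lrp_epsilon`, and the simplified layer handling are illustrative assumptions only.

```python
# Minimal sketch of the LRP epsilon rule for a Linear/ReLU stack in plain PyTorch.
# Not the API of any repository listed above; model and names are hypothetical.
import torch
import torch.nn as nn


def lrp_epsilon(layers, x, eps=1e-6):
    """Propagate relevance from the output back to the input (epsilon rule)."""
    # Forward pass, storing the input of every layer.
    activations = [x]
    for layer in layers:
        activations.append(layer(activations[-1]))

    # Initialise relevance with the network output.
    relevance = activations[-1].detach().clone()

    # Backward pass: redistribute relevance layer by layer.
    for layer, a in zip(reversed(list(layers)), reversed(activations[:-1])):
        if isinstance(layer, nn.Linear):
            a = a.detach().requires_grad_(True)
            z = layer(a)
            z = z + eps * (z.sign() + (z == 0).float())  # stabilised denominator
            s = (relevance / z).detach()
            (z * s).sum().backward()          # a.grad holds sum_j w_ij * s_j
            relevance = a * a.grad            # R_i = a_i * sum_j w_ij * s_j
        # Element-wise layers such as ReLU pass relevance through unchanged.
    return relevance


# Hypothetical usage on a toy network.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
x = torch.randn(1, 8)
print(lrp_epsilon(list(model), x).shape)  # per-feature relevance, same shape as x
```

The full libraries in the list (for example Zennit or the LRP toolkits) add composite rules and support for convolutional, normalization, and attention layers; this sketch only covers a fully connected stack.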