laura-rieger / deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
☆128 · Updated Mar 22, 2021
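At its core, CDEP augments the standard training loss with a regularizer that pushes the model's explanation for features a domain prior marks as irrelevant toward zero. A minimal sketch of that recipe in PyTorch follows; it uses a simple input-times-gradient attribution as a stand-in for the contextual decomposition scores the paper actually penalizes, and `model`, `x`, `y`, and `mask` are assumed placeholders, so treat it as an illustration rather than the repository's implementation.

```python
# Sketch of a CDEP-style loss: prediction loss plus a penalty on the
# attribution mass assigned to features the prior says are irrelevant.
# Input-times-gradient stands in for the paper's contextual decomposition
# scores; `model`, `x`, `y`, and `mask` are assumed placeholders.
import torch
import torch.nn.functional as F

def cdep_style_loss(model, x, y, mask, lam=1.0):
    x = x.detach().clone().requires_grad_(True)
    logits = model(x)
    pred_loss = F.cross_entropy(logits, y)

    # Differentiable attribution, so the penalty can be trained through.
    grads, = torch.autograd.grad(logits.sum(), x, create_graph=True)
    attribution = grads * x

    # `mask` is 1 on features the prior marks as irrelevant.
    expl_penalty = (attribution * mask).abs().mean()
    return pred_loss + lam * expl_penalty
```

Here `lam` plays the role of the paper's trade-off weight between predictive accuracy and agreement with the prior.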
Alternatives and similar repositories for deep-explanation-penalization
Users interested in deep-explanation-penalization are comparing it to the repositories listed below.
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆129 · Updated Aug 25, 2021
- Using / reproducing DAC from the paper "Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees" ☆28 · Updated Feb 11, 2021
- Demo for method introduced in "Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs" ☆56 · Updated Jul 23, 2020
- ViRelAy is a visualization tool for the analysis of data as generated by CoRelAy. ☆29 · Updated Aug 6, 2025
- Code accompanying the paper "Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers" ☆31 · Updated Mar 24, 2023
- Tools for training explainable models using attribution priors (a generic sketch of this recipe appears after the list). ☆125 · Updated Mar 19, 2021
- Code for "Generative causal explanations of black-box classifiers"☆35Jan 15, 2021Updated 5 years ago
- CoRelAy is a tool to compose small-scale (single-machine) analysis pipelines.☆29Jul 21, 2025Updated 6 months ago
- ☆15Jan 30, 2021Updated 5 years ago
- code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks"☆54Mar 25, 2022Updated 3 years ago
- TorchEsegeta: Interpretability and Explainability pipeline for PyTorch☆20Feb 19, 2024Updated last year
- ☆16May 9, 2022Updated 3 years ago
- Functions for easily making publication-quality figures with matplotlib.☆19Jan 20, 2024Updated 2 years ago
- Code release for "Making a Bird AI Expert Work for You and Me (TPAMI 2023)".☆16May 4, 2023Updated 2 years ago
- ☆14Jul 6, 2021Updated 4 years ago
- Data for "Datamodels: Predicting Predictions with Training Data"☆97May 25, 2023Updated 2 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019)☆100Mar 1, 2022Updated 3 years ago
- The stand-alone training engine module for the ALOHA.eu project.☆15Oct 27, 2019Updated 6 years ago
- ☆38Oct 3, 2023Updated 2 years ago
- Codebase for "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains Its Predictions" (to appear in AA…☆77Nov 21, 2017Updated 8 years ago
- Pytorch implementation of regularization methods for deep networks obtained via kernel methods.☆23Dec 27, 2019Updated 6 years ago
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks☆348Jul 22, 2020Updated 5 years ago
- Codes for reproducing the contrastive explanation in “Explanations based on the Missing: Towards Contrastive Explanations with Pertinent…☆54Jul 4, 2018Updated 7 years ago
- Code for paper "Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?"☆22Oct 13, 2020Updated 5 years ago
- A library to instantiate any Python object from configuration files.☆24Oct 12, 2022Updated 3 years ago
- SODEN: A Scalable Continuous-Time Survival Model through Ordinary Differential Equation Networks☆14Mar 2, 2023Updated 2 years ago
- Local explanations with uncertainty 💐!☆42Aug 8, 2023Updated 2 years ago
- Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning (AISTATS 2022 Oral)☆43Nov 10, 2022Updated 3 years ago
- ☆38Jun 10, 2021Updated 4 years ago
- This code package implements the prototypical part network (ProtoPNet) from the paper "This Looks Like That: Deep Learning for Interpreta…☆389May 11, 2022Updated 3 years ago
- A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also in…☆760Aug 25, 2020Updated 5 years ago
- Pushing the limits of capsule networks.☆12Dec 7, 2022Updated 3 years ago
- ☆12Oct 5, 2020Updated 5 years ago
- Implementation of the spotlight: a method for discovering systematic errors in deep learning models☆11Oct 5, 2021Updated 4 years ago
- ☆12Aug 25, 2022Updated 3 years ago
- ☆11Jan 9, 2019Updated 7 years ago
- ☆13Aug 14, 2022Updated 3 years ago
- ☆13Jul 26, 2023Updated 2 years ago
- ☆14Feb 10, 2023Updated 3 years ago
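The attribution-priors entry above follows the same pattern as CDEP but allows the prior to be any differentiable penalty over the attributions. A minimal generic sketch follows, again using input-times-gradient in place of the expected-gradients attributions that library computes; `omega` is a hypothetical penalty, not its API.

```python
# Generic attribution-prior training step: task loss plus a differentiable
# penalty over input attributions. `omega` is a hypothetical prior, not
# the library's API; input-times-gradient is a stand-in attribution.
import torch
import torch.nn.functional as F

def train_step(model, optimizer, x, y, omega, lam=0.1):
    x = x.detach().clone().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)

    # Differentiable attribution so the prior can shape training.
    grads, = torch.autograd.grad(logits.sum(), x, create_graph=True)
    loss = task_loss + lam * omega(grads * x)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

# Example prior: prefer attributions that vary smoothly along the last axis.
def omega(attr):
    return (attr[..., 1:] - attr[..., :-1]).abs().mean()
```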