Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
☆127 · Mar 22, 2021 · Updated 5 years ago
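The paper's core idea is to add a term to the training loss that penalizes the attribution a model assigns to features a prior says are irrelevant. As a rough illustration only (not the paper's contextual-decomposition method), here is a toy sketch for a linear model, where the input-times-gradient attribution of feature j is simply `x_j * w_j`; the function name, mask convention, and `lam` weight are all hypothetical:

```python
import numpy as np

def cdep_style_loss(w, X, y, mask, lam=1.0):
    """Toy explanation-penalized objective (hypothetical sketch).

    Squared prediction error plus a penalty on the attribution
    (input * gradient = x_j * w_j for a linear model) placed on
    features the prior deems irrelevant (mask == True).
    """
    preds = X @ w
    pred_loss = np.mean((preds - y) ** 2)
    attributions = X * w  # input-times-gradient attribution per feature
    expl_penalty = np.mean(np.abs(attributions[:, mask]))
    return pred_loss + lam * expl_penalty

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X[:, 0] * 2.0                       # only feature 0 is truly relevant
mask = np.array([False, False, True])   # prior: feature 2 is irrelevant

w_good = np.array([2.0, 0.0, 0.0])  # ignores the spurious feature
w_bad = np.array([2.0, 0.0, 1.5])   # leans on the "irrelevant" feature
print(cdep_style_loss(w_good, X, y, mask) < cdep_style_loss(w_bad, X, y, mask))
```

A weight vector that relies on the masked feature incurs both extra prediction error and an explanation penalty, so minimizing this objective steers the model toward the prior.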
Alternatives and similar repositories for deep-explanation-penalization
Users interested in deep-explanation-penalization are comparing it to the libraries listed below.
- Using / reproducing DAC from the paper "Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees" ☆27 · Feb 11, 2021 · Updated 5 years ago
- ViRelAy is a visualization tool for the analysis of data as generated by CoRelAy. ☆31 · Apr 30, 2026 · Updated last week
- Code/figures in Right for the Right Reasons ☆57 · Dec 29, 2020 · Updated 5 years ago
- ☆32 · Mar 1, 2024 · Updated 2 years ago
- CoRelAy is a tool to compose small-scale (single-machine) analysis pipelines. ☆32 · Apr 30, 2026 · Updated last week
- ☆35 · Jun 22, 2021 · Updated 4 years ago
- Fast Axiomatic Attribution for Neural Networks (NeurIPS 2021) ☆15 · Feb 24, 2026 · Updated 2 months ago
- ☆16 · May 9, 2022 · Updated 4 years ago
- ☆13 · Jan 30, 2021 · Updated 5 years ago
- Pruning CNN using CNN with toy example ☆23 · Jun 21, 2021 · Updated 4 years ago
- Code for "Generative causal explanations of black-box classifiers" ☆36 · Jan 15, 2021 · Updated 5 years ago
- ☆13 · Nov 29, 2021 · Updated 4 years ago
- The stand-alone training engine module for the ALOHA.eu project. ☆15 · Oct 27, 2019 · Updated 6 years ago
- ☆13 · Jul 6, 2021 · Updated 4 years ago
- Functions for easily making publication-quality figures with matplotlib. ☆19 · Jan 20, 2024 · Updated 2 years ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆54 · Mar 25, 2022 · Updated 4 years ago
- This repository provides a PyTorch implementation of "Fooling Neural Network Interpretations via Adversarial Model Manipulation". Our pap… ☆23 · Dec 19, 2020 · Updated 5 years ago
- List of relevant resources for machine learning from explanatory supervision ☆165 · Jul 14, 2025 · Updated 9 months ago
- TorchEsegeta: Interpretability and Explainability pipeline for PyTorch ☆20 · Feb 19, 2024 · Updated 2 years ago
- Explainable AI in Julia. ☆116 · Apr 27, 2026 · Updated last week
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks ☆349 · Jul 22, 2020 · Updated 5 years ago
- A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also in… ☆758 · Aug 25, 2020 · Updated 5 years ago
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. ☆243 · Jan 30, 2026 · Updated 3 months ago
- Code for paper "Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?" ☆21 · Oct 13, 2020 · Updated 5 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆99 · Mar 1, 2022 · Updated 4 years ago
- A companion repository to the "You Only Write Thrice: Creating Documents, Computational Notebooks and Presentations From a Single Source"… ☆20 · Oct 14, 2022 · Updated 3 years ago
- Concept Bottleneck Models, ICML 2020 ☆252 · Feb 24, 2023 · Updated 3 years ago
- ☆14 · Dec 4, 2023 · Updated 2 years ago
- Code for the Paper "Restricting the Flow: Information Bottlenecks for Attribution"