Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
⭐ 127 · Updated Mar 22, 2021
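CDEP's core idea is to add a differentiable penalty on a model's explanations to the usual prediction loss, so that features a prior deems irrelevant receive low attribution. Below is a minimal NumPy sketch of that loss shape for a logistic model, using a simple input-times-weight saliency in place of the paper's contextual decomposition scores; `cdep_style_loss`, `forbidden_mask`, and `lam` are illustrative names, not the repository's API.

```python
import numpy as np

def cdep_style_loss(w, X, y, forbidden_mask, lam=1.0):
    """Sketch of explanation penalization for a logistic model.

    Combines the usual cross-entropy prediction loss with a penalty on
    a simple saliency attribution (w * x) over the features the prior
    marks as irrelevant (forbidden_mask == 1). This stands in for the
    contextual-decomposition penalty used in the actual CDEP paper.
    """
    logits = X @ w
    p = 1.0 / (1.0 + np.exp(-logits))
    # Standard binary cross-entropy over the batch.
    pred_loss = -np.mean(
        y * np.log(p + 1e-12) + (1.0 - y) * np.log(1.0 - p + 1e-12)
    )
    # Input-times-weight saliency, one attribution per feature.
    saliency = X * w
    # Penalize attribution mass on features the prior says to ignore.
    expl_penalty = np.mean((saliency * forbidden_mask) ** 2)
    return pred_loss + lam * expl_penalty
```

With `lam=0` this reduces to plain cross-entropy; raising `lam` trades prediction fit for agreement with the prior, which is the knob the paper's experiments tune.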
Alternatives and similar repositories for deep-explanation-penalization
Users interested in deep-explanation-penalization are comparing it to the libraries listed below.
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" (ICLR 2019) · ⭐ 126 · Updated Aug 25, 2021
- Demo for the method introduced in "Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs" · ⭐ 55 · Updated Jul 23, 2020
- Using / reproducing DAC from the paper "Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees" · ⭐ 27 · Updated Feb 11, 2021
- ViRelAy is a visualization tool for analyzing data generated by CoRelAy. · ⭐ 31 · Updated Aug 6, 2025
- Tools for training explainable models using attribution priors. · ⭐ 125 · Updated Mar 19, 2021
- Code accompanying the paper "Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers" · ⭐ 31 · Updated Mar 24, 2023
- ⭐ 32 · Updated Mar 1, 2024
- CoRelAy is a tool to compose small-scale (single-machine) analysis pipelines. · ⭐ 32 · Updated Jul 21, 2025
- ⭐ 35 · Updated Jun 22, 2021
- Fast Axiomatic Attribution for Neural Networks (NeurIPS 2021) · ⭐ 15 · Updated Feb 24, 2026
- Interpretability of Machine Learning: Visualizations · ⭐ 13 · Updated Jul 9, 2018
- ⭐ 16 · Updated May 9, 2022
- ⭐ 13 · Updated Jan 30, 2021
- Pruning a CNN using a CNN, with a toy example · ⭐ 23 · Updated Jun 21, 2021
- Code for "Generative causal explanations of black-box classifiers" · ⭐ 36 · Updated Jan 15, 2021
- ⭐ 13 · Updated Nov 29, 2021
- Source code for "Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models", ICLR 2020 · ⭐ 29 · Updated Jun 28, 2020
- ⭐ 13 · Updated Jul 6, 2021
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" · ⭐ 54 · Updated Mar 25, 2022
- A PyTorch implementation of "Fooling Neural Network Interpretations via Adversarial Model Manipulation" · ⭐ 23 · Updated Dec 19, 2020
- TorchEsegeta: an interpretability and explainability pipeline for PyTorch · ⭐ 20 · Updated Feb 19, 2024
- Explainable AI in Julia · ⭐ 116 · Updated Mar 30, 2026
- A unified framework of perturbation- and gradient-based attribution methods for deep neural network interpretability (DeepExplain) · ⭐ 758 · Updated Aug 25, 2020
- Zennit is a high-level Python framework, built on PyTorch, for explaining and exploring neural networks with attribution methods such as LRP. · ⭐ 243 · Updated Jan 30, 2026
- Code for the paper "Bias-Reduced Uncertainty Estimation for Deep Neural Classifiers", published at ICLR 2019 · ⭐ 13 · Updated Apr 25, 2019
- Code for the paper "Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?" · ⭐ 21 · Updated Oct 13, 2020
- PreferenceNet: Encoding Human Preferences in Auction Design with Deep Learning · ⭐ 17 · Updated Aug 10, 2021
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) · ⭐ 99 · Updated Mar 1, 2022
- Prototypical Concept-based Explanations, accepted at the SAIAD workshop at CVPR 2024 · ⭐ 15 · Updated Feb 20, 2026
- Concept Bottleneck Models, ICML 2020 · ⭐ 251 · Updated Feb 24, 2023
- ⭐ 14 · Updated Dec 4, 2023
- Code for the paper "Restricting the Flow: Information Bottlenecks for Attribution" · ⭐ 79 · Updated Mar 18, 2020
- Implementation of the paper "Shapley Explanation Networks" · ⭐ 88 · Updated Jan 16, 2021
- Code for the paper "Multi-task Causal Learning with Gaussian Processes" (https://arxiv.org/pdf/2009.12821.pdf) · ⭐ 13 · Updated Oct 17, 2020
- ⭐ 13 · Updated Aug 14, 2022
- Code repository for the ICML 2020 paper "Fairwashing explanations with off-manifold detergent" · ⭐ 12 · Updated Dec 18, 2020
- ⭐ 96 · Updated Oct 27, 2022
- Implementation of Concept Activation Regions, a framework for explaining deep neural networks with human concepts · ⭐ 16 · Updated Oct 7, 2022
- ⭐ 12 · Updated Aug 25, 2022