dtak / tree-regularization-public
Code for AAAI 2018 accepted paper: "Beyond Sparsity: Tree Regularization of Deep Models for Interpretability"
☆78 · Updated 7 years ago
Alternatives and similar repositories for tree-regularization-public
Users who are interested in tree-regularization-public are comparing it to the repositories listed below.
- ☆125 · Updated 4 years ago
- Keras implementation for DASP: Deep Approximate Shapley Propagation (ICML 2019) ☆61 · Updated 5 years ago
- ☆134 · Updated 5 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆128 · Updated 3 years ago
- [Code] Deep Multi-task Representation Learning: A Tensor Factorisation Approach ☆58 · Updated 7 years ago
- I collected some papers about interpretable CNN and reorganized them here. ☆131 · Updated 6 years ago
- Related materials for robust and explainable machine learning ☆48 · Updated 7 years ago
- Gold Loss Correction ☆87 · Updated 6 years ago
- Codebase for "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains Its Predictions" (to appear in AA… ☆75 · Updated 7 years ago
- Code for "Active One-shot Learning" ☆33 · Updated 6 years ago
- Library to manage machine learning problems as `Tasks' and to sample from Task distributions. Includes Tensorflow implementation of impli… ☆48 · Updated 3 years ago
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. ☆131 · Updated 4 years ago
- Implementation of the paper "Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory", Ron Amit and Ron Meir, ICML 2018 ☆22 · Updated 5 years ago
- Explaining a black-box using Deep Variational Information Bottleneck Approach ☆46 · Updated 2 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 4 years ago
- Policy based Active Learning with DQN (EMNLP-2017) ☆89 · Updated 7 years ago
- GRACE: Generating Concise and Informative Contrastive Sample to Explain Neural Network Model’s Prediction. Thai Le, Suhang Wang, Dongwon … ☆21 · Updated 4 years ago
- Feature Interaction Interpretability via Interaction Detection ☆34 · Updated last year
- Code release for "Learning Multiple Tasks with Multilinear Relationship Networks" (NIPS 2017) ☆70 · Updated 7 years ago
- On the decision boundary of deep neural networks ☆38 · Updated 6 years ago
- Supervised Local Modeling for Interpretability ☆29 · Updated 6 years ago
- Open category classification by adversarial sample generation ☆20 · Updated 4 years ago
- Code for paper EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE ☆40 · Updated last year
- A PyTorch implementation of the blocks from the _A Simple Neural Attentive Meta-Learner_ paper ☆98 · Updated 7 years ago
- Implementation of Conditionally Shifted Neurons by Munkhdalai et al. (https://arxiv.org/pdf/1712.09926.pdf) ☆28 · Updated 6 years ago
- ☆100 · Updated 7 years ago
- Certifying Some Distributional Robustness with Principled Adversarial Training (https://arxiv.org/abs/1710.10571) ☆45 · Updated 7 years ago
- Code for paper "Dimensionality-Driven Learning with Noisy Labels" - ICML 2018 ☆58 · Updated 11 months ago
- Interpretation of Neural Network is Fragile ☆36 · Updated last year
- An official PyTorch implementation of “Multimodal Model-Agnostic Meta-Learning via Task-Aware Modulation” (NeurIPS 2019) by Risto Vuorio*… ☆139 · Updated 5 years ago