da2so / Interpretable-Explanations-of-Black-Boxes-by-Meaningful-Perturbation
A PyTorch implementation of "Interpretable Explanations of Black Boxes by Meaningful Perturbation"
☆12 · Updated 9 months ago
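For context, the repository implements the perturbation-based attribution method of Fong and Vedaldi (ICCV 2017): optimize a mask that blends the input with a blurred copy so that the target-class score drops, which reveals the image regions the prediction depends on. The snippet below is a minimal, illustrative sketch of that idea in PyTorch, not the repository's actual code; the mask resolution, blur settings, loss weights, and optimizer choice are assumptions made for brevity.

```python
import torch
import torch.nn.functional as F


def explain(model, image, target_class, steps=300, lam_l1=1e-2, lam_tv=1e-1, lr=0.1):
    """Learn a mask over `image` (shape 1x3xHxW, already normalized for `model`)
    whose low values mark regions the prediction for `target_class` depends on."""
    model.eval()
    # Perturbed reference: a blurred copy of the input (the paper also considers
    # constant and noise perturbations).
    blurred = F.avg_pool2d(image, kernel_size=11, stride=1, padding=5)
    # Optimize a coarse mask in logit space and upsample it to image resolution.
    mask_logits = torch.zeros(1, 1, 28, 28, requires_grad=True)
    optimizer = torch.optim.Adam([mask_logits], lr=lr)
    for _ in range(steps):
        m = torch.sigmoid(mask_logits)
        m_up = F.interpolate(m, size=image.shape[-2:], mode="bilinear", align_corners=False)
        # Keep the image where the mask is 1, replace it with the blurred copy where it is 0.
        perturbed = image * m_up + blurred * (1.0 - m_up)
        prob = F.softmax(model(perturbed), dim=1)[0, target_class]
        # Total-variation term keeps the mask smooth; the L1 term penalizes
        # deleting large areas, so only the most influential evidence is removed.
        tv = ((m[:, :, 1:, :] - m[:, :, :-1, :]).abs().mean()
              + (m[:, :, :, 1:] - m[:, :, :, :-1]).abs().mean())
        loss = prob + lam_l1 * (1.0 - m).abs().mean() + lam_tv * tv
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return torch.sigmoid(mask_logits).detach()


# Hypothetical usage with a torchvision classifier and a preprocessed image tensor `img`:
#   model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
#   saliency = explain(model, img, target_class=243)  # e.g. an ImageNet class index
```

The returned low-resolution mask can be upsampled and overlaid on the input to visualize which regions, when perturbed, most reduce the classifier's confidence.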
Alternatives and similar repositories for Interpretable-Explanations-of-Black-Boxes-by-Meaningful-Perturbation
Users who are interested in Interpretable-Explanations-of-Black-Boxes-by-Meaningful-Perturbation are comparing it to the repositories listed below.
- Counterfactual Explanation Based on Gradual Construction for Deep Networks (PyTorch) ☆11 · Updated 4 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples ☆45 · Updated 5 years ago
- Official PyTorch implementation for our ICCV 2019 paper "Fooling Network Interpretation in Image Classification" ☆24 · Updated 5 years ago
- PyTorch code for the KDD '18 paper "Towards Explanation of DNN-based Prediction with Guided Feature Inversion" ☆21 · Updated 6 years ago
- Self-Distillation with weighted ground-truth targets; ResNet and Kernel Ridge Regression ☆18 · Updated 3 years ago
- ZSKD with PyTorch ☆31 · Updated 2 years ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability" ☆30 · Updated 6 years ago
- 1st-place approach for the CVPR 2020 Continual Learning Challenge ☆46 · Updated 4 years ago
- Source code of "Hold me tight! Influence of discriminative features on deep network boundaries" ☆22 · Updated 3 years ago
- ICML'20: SIGUA: Forgetting May Make Learning with Noisy Labels More Robust ☆15 · Updated 4 years ago
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" ☆31 · Updated 5 years ago
- PyTorch implementation of "The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction" ☆33 · Updated 2 years ago
- Official repository for "Efficient task-specific data valuation for nearest neighbor algorithms" ☆26 · Updated 5 years ago
- ☆27 · Updated 4 years ago
- CVPR 2019 paper "Disentangling Adversarial Robustness and Generalization" ☆14 · Updated 5 years ago
- Fine-grained ImageNet annotations ☆29 · Updated 5 years ago
- Label shift experiments ☆17 · Updated 4 years ago
- Code to reproduce experiments from "Does Knowledge Distillation Really Work?", a paper which appeared in the NeurIPS 2021 proceedings ☆33 · Updated last year
- ☆44 · Updated 5 years ago
- Code for Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks ☆30 · Updated 7 years ago
- A general method for training cost-sensitive robust classifiers ☆22 · Updated 6 years ago
- Official repository for the AAAI-21 paper 'Explainable Models with Consistent Interpretations' ☆18 · Updated 3 years ago
- PyTorch reimplementation of computing Shapley values via Truncated Monte Carlo sampling from "What is your data worth? Equitable Valuatio… ☆27 · Updated 3 years ago
- ☆18 · Updated 3 years ago
- A PyTorch-compatible data loader to create a sequence of tasks for Continual Learning ☆33 · Updated 5 years ago
- [Re] Can gradient clipping mitigate label noise? (ML Reproducibility Challenge 2020) ☆14 · Updated 9 months ago
- Code for the paper "Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets", ICCV 2019 ☆27 · Updated 5 years ago
- This repo contains the code used for the NeurIPS 2019 paper "Asymmetric Valleys: Beyond Sharp and Flat Local Minima" ☆14 · Updated 5 years ago
- Interpretation of Neural Networks is Fragile ☆36 · Updated last year
- Official PyTorch implementation of "Flexible Dataset Distillation: Learn Labels Instead of Images" ☆42 · Updated 4 years ago