ruthcfong / perturb_explanations
Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation"
☆32 · Updated Sep 25, 2019
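For context, the method this repository implements learns a mask over the input image that, when used to blend the image with a blurred copy, suppresses the classifier's score for the target class while staying small and smooth. Below is a minimal PyTorch sketch of that objective; the average-pooling blur, the coarse mask resolution, and all hyperparameter values are illustrative assumptions, not values taken from the repository.

```python
import torch
import torch.nn.functional as F

def meaningful_perturbation(model, x, target, steps=300, lr=0.1,
                            l1_weight=0.01, tv_weight=0.1):
    """Learn a mask m in [0, 1]; where m -> 0 the input is replaced by a
    blurred copy. Minimizing the target-class probability plus sparsity and
    smoothness terms yields a small, smooth region whose removal matters.
    Mask resolution and hyperparameters here are illustrative only."""
    blurred = F.avg_pool2d(x, kernel_size=11, stride=1, padding=5)  # stand-in for a Gaussian blur
    mask_logits = torch.zeros(1, 1, 28, 28, requires_grad=True)     # coarse mask, upsampled below
    optimizer = torch.optim.Adam([mask_logits], lr=lr)

    for _ in range(steps):
        m = torch.sigmoid(mask_logits)
        m_up = F.interpolate(m, size=x.shape[-2:], mode="bilinear", align_corners=False)
        perturbed = x * m_up + blurred * (1.0 - m_up)                # keep pixels where m is ~1
        target_prob = F.softmax(model(perturbed), dim=1)[0, target]
        # total-variation term encourages a smooth, contiguous mask
        tv = (m[..., 1:, :] - m[..., :-1, :]).abs().mean() + \
             (m[..., :, 1:] - m[..., :, :-1]).abs().mean()
        # sparsity term penalizes deleting more of the image than necessary
        loss = target_prob + l1_weight * (1.0 - m).abs().mean() + tv_weight * tv

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return torch.sigmoid(mask_logits).detach()
```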
Alternatives and similar repositories for perturb_explanations
Users that are interested in perturb_explanations are comparing it to the libraries listed below
- ☆51 · Updated Aug 29, 2020
- PyTorch implementation of Interpretable Explanations of Black Boxes by Meaningful Perturbation ☆338 · Updated Nov 30, 2021
- Code for Sufficient Input Subsets Paper ☆14 · Updated Mar 8, 2019
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated Feb 23, 2022
- tensorflow implementation of Generating Images with Perceptual Similarity Metrics based on Deep Networks ☆21 · Updated Sep 5, 2017
- ☆14 · Updated Jun 22, 2020
- A public repository for corrupt0 datathon's court data ☆11 · Updated Jul 6, 2019
- Scrape, clean and explore ThaiME dataset ☆12 · Updated Jul 29, 2020
- Thai PDPA Website (Unofficial) ☆11 · Updated Jun 10, 2023
- code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆54 · Updated Mar 25, 2022
- Explaining Image Classifiers by Counterfactual Generation ☆28 · Updated Apr 23, 2022
- Code for Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks ☆30 · Updated Feb 8, 2018
- Run Pytorch graphs inside Theano graph (and pytorch wrapper for AIS for generative models). ☆18 · Updated Oct 19, 2017
- A catalog of Jupyter Notebooks presenting new techniques to interpret black box machine learning models. ☆15 · Updated Nov 14, 2018
- Network Dissection http://netdissect.csail.mit.edu for quantifying interpretability of deep CNNs. ☆453 · Updated Aug 25, 2018
- Code for "Interpretable Image Recognition with Hierarchical Prototypes" ☆19 · Updated Nov 1, 2019
- Image annotation UIs (good for AMT tasks) ☆20 · Updated Oct 12, 2015
- Code to replicate "Generating Visual Explanations" ☆48 · Updated Nov 1, 2020
- [adversarial] examples and training cost ☆19 · Updated Jun 29, 2016
- Pytorch implementation of Real Time Image Saliency for Black Box Classifiers https://arxiv.org/abs/1705.07857 ☆59 · Updated Oct 15, 2019
- Codes for reproducing the contrastive explanation in "Explanations based on the Missing: Towards Contrastive Explanations with Pertinent… ☆54 · Updated Jul 4, 2018
- CoRelAy is a tool to compose small-scale (single-machine) analysis pipelines. ☆29 · Updated Jul 21, 2025
- Data Driven Framework for Distributed Computing in Torch 7 ☆24 · Updated Sep 10, 2016
- ☆21 · Updated May 24, 2016
- English-Thai Machine Translation Models ☆29 · Updated May 3, 2024
- ☆113 · Updated Nov 21, 2022
- Fall 2020 visualization interest group ☆10 · Updated Mar 22, 2021
- Code for the TCAV ML interpretability project ☆650 · Updated Feb 5, 2026
- Charades Object Detection Dataset (ICCV 2017) ☆31 · Updated May 30, 2018
- ☆72 · Updated May 12, 2020
- Hierarchical Deep CNN for image recognition and labeling ☆31 · Updated Oct 3, 2018
- There and Back Again: Revisiting Backpropagation Saliency Methods (CVPR 2020) ☆53 · Updated Apr 7, 2020
- Code for human intervention reinforcement learning ☆35 · Updated Jan 8, 2018
- Programming assignment project for the Information Visualization course at the University of Chinese Academy of Sciences ☆12 · Updated Mar 10, 2017
- julyedu online Python fundamentals course ☆12 · Updated Jul 27, 2017
- Prediction Explanations Clustering ☆10 · Updated Oct 19, 2023
- MATLAB wrapper to the QPBO algorithm by V. Kolmogorov ☆11 · Updated Jun 12, 2015
- MetaC provides a read-eval-print loop (a REPL) and notebook interactive development environment (a NIDE) for C programming. MetaC also … ☆12 · Updated this week
- Light version of Network Dissection for Quantifying Interpretability of Networks ☆221 · Updated May 6, 2019