donato-maragno / robust-CE
Generate robust counterfactual explanations for machine learning models
☆ 14 · Updated last year
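Counterfactual explanations answer the question "what minimal change to this input would flip the model's prediction?"; robust counterfactuals additionally require the answer to stay valid when the model or data shift slightly. The sketch below is a generic illustration with scikit-learn, not the robust-CE API: it greedily perturbs one feature of a factual instance until the predicted class changes. The helper name `simple_counterfactual` and all parameters are illustrative assumptions.

```python
# A minimal, generic counterfactual search (illustrative only; not the
# robust-CE API). Given a trained classifier and a factual instance, nudge
# one feature at a time until the predicted class flips.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression


def simple_counterfactual(model, x, step=0.05, max_steps=200):
    """Greedy single-feature search for an input with a different prediction."""
    original_class = model.predict(x.reshape(1, -1))[0]
    for feature in range(x.shape[0]):
        for direction in (1.0, -1.0):
            candidate = x.copy()
            for _ in range(max_steps):
                candidate[feature] += direction * step
                if model.predict(candidate.reshape(1, -1))[0] != original_class:
                    return candidate
    return None  # no counterfactual found within the search budget


X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

x_factual = X[0].copy()
x_cf = simple_counterfactual(model, x_factual)
if x_cf is not None:
    print("feature changes needed to flip the prediction:", x_cf - x_factual)
```

Robust approaches typically replace the simple flip test with a check that the new class persists under a set of perturbed models or inputs, which turns the search into an optimization under uncertainty.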
Related projects
Alternatives and complementary repositories for robust-CE
- bayesian lime · ☆ 16 · Updated 3 months ago
- Local explanations with uncertainty · ☆ 39 · Updated last year
- A Data-Centric library providing a unified interface for state-of-the-art methods for hardness characterisation of data points. · ☆ 23 · Updated this week
- This repository provides details of the experimental code in the paper: Instance-based Counterfactual Explanations for Time Series Classi… · ☆ 18 · Updated 3 years ago
- Neural Additive Models (Google Research) · ☆ 26 · Updated 6 months ago
- Counterfactual Explanations for Multivariate Time Series Data · ☆ 29 · Updated 8 months ago
- Code for "Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties" · ☆ 18 · Updated 3 years ago
- ☆ 14 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations · ☆ 232 · Updated 3 months ago
- A collection of counterfactual explanation algorithms. · ☆ 50 · Updated 3 years ago
- ☆ 16 · Updated last year
- Neural Additive Models (Google Research) · ☆ 67 · Updated 3 years ago
- Dataset repository for the 2024 paper "The Causal Chambers: Real Physical Systems as a Testbed for AI Methodology" by Juan L. Gamella, Jo… · ☆ 24 · Updated 2 weeks ago
- Official codebase for the paper "Provable concept learning for interpretable predictions using variational inference". · ☆ 13 · Updated 2 years ago
- ☆ 25 · Updated last year
- ☆ 12 · Updated 4 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization · ☆ 118 · Updated 5 months ago
- A toolkit for quantitative evaluation of data attribution methods. · ☆ 33 · Updated this week
- ☆ 50 · Updated 3 months ago
- Our maintained PFN repository. Come here to train SOTA PFNs. · ☆ 51 · Updated this week
- ☆ 22 · Updated 2 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics · ☆ 30 · Updated 7 months ago
- Code for multistep feedback covariate shift conformal prediction experiments in "Conformal Validity Guarantees Exist for Any Data Distrib… · ☆ 25 · Updated 4 months ago
- Code for "NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning" · ☆ 43 · Updated 2 years ago
- An amortized approach for calculating local Shapley value explanations · ☆ 92 · Updated 11 months ago
- Realistic benchmark for different causal inference methods. The realism comes from fitting generative models to data with an assumed caus… · ☆ 68 · Updated 3 years ago
- A benchmark for distribution shift in tabular data · ☆ 44 · Updated 5 months ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI · ☆ 52 · Updated 2 years ago
- Unified Model Interpretability Library for Time Series · ☆ 44 · Updated 10 months ago
- ☆ 97 · Updated 3 years ago