All about explainable AI, algorithmic fairness and more
☆110 · Updated Sep 24, 2023
Alternatives and similar repositories for xaience
Users interested in xaience are comparing it to the libraries listed below.
- FairPut: Machine Learning Fairness Framework with LightGBM, covering explainability, robustness, and fairness (by @firmai) (☆72, updated Oct 20, 2021)
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP! (☆29, updated Jul 13, 2019)
- Code for model-targeted poisoning (☆12, updated Oct 3, 2023)
- Interesting resources related to XAI (Explainable Artificial Intelligence) (☆850, updated May 31, 2022)
- Guidelines for the responsible use of explainable AI and machine learning (☆17, updated Jan 30, 2023)
- Slides, videos, and other potentially useful artifacts from various presentations on responsible machine learning (☆22, updated Nov 19, 2019)
- Code for "Interpretable Image Recognition with Hierarchical Prototypes" (☆19, updated Nov 1, 2019)
- H2O.ai Machine Learning Interpretability Resources (☆491, updated Dec 12, 2020)
- Implementation of a Voronoi diagram with an incremental algorithm (☆13, updated Jun 10, 2020)
- A logical, reasonably standardized, but flexible project structure for conducting ML research 🍪 (☆18, updated Jan 23, 2026)
- A Python library to perform NER on structured data and generate PII with Faker (☆30, updated May 31, 2024)
- Temporary Discriminator GAN (☆14, updated Jul 21, 2020)
- Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, … (☆679, updated Jun 17, 2024)
- Interpret Community extends the Interpret repository with additional interpretability techniques and utility functions to handle real-world d… (☆441, updated Feb 7, 2025)
- A lightweight implementation of removal-based explanations for ML models (☆59, updated Jul 19, 2021)
- 👋 Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) (☆33, updated Jul 18, 2022)
- Unrestricted adversarial images via interpretable color transformations (TIFS 2023 & BMVC 2020) (☆32, updated Apr 25, 2023)
- TorchEsegeta: Interpretability and explainability pipeline for PyTorch (☆20, updated Feb 19, 2024)
- A collection of news articles, books, and papers on Responsible AI cases. The purpose is to study these cases and learn from them to avoi… (☆70, updated Feb 8, 2021)
- Provable Robustness of ReLU Networks via Maximization of Linear Regions [AISTATS 2019] (☆31, updated Jul 15, 2020)
- ACV is a Python library that provides explanations for any machine learning model or data. It gives local rule-based explanations for any… (☆102, updated Aug 31, 2022)
- Reading history for the Fair ML Reading Group in Melbourne (☆36, updated Aug 2, 2021)
- FAERS Adverse Reaction Events database and OpenFDA API (☆17, updated Feb 19, 2020)
- Examples of unfairness detection for a classification-based credit model (☆20, updated Jun 11, 2019)
- Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University (☆45, updated Jan 31, 2023)
- Generate Diverse Counterfactual Explanations for any machine learning model (☆1,500, updated Jul 13, 2025)
- Code implementation for "Traceback of Data Poisoning Attacks in Neural Networks" (☆20, updated Aug 15, 2022)
- This codebase is a starting point for getting your machine learning project into production (☆43, updated Nov 25, 2020)
- GEBI: Global Explanations for Bias Identification. Open-source code for discovering bias in data with a skin lesion dataset (☆18, updated Feb 20, 2022)
- Code for the paper "(De)Randomized Smoothing for Certifiable Defense against Patch Attacks" by Alexander Levine and Soheil Feizi (☆17, updated Aug 22, 2022)
- A library that implements fairness-aware machine learning algorithms (☆126, updated Oct 21, 2020)
- Code for the paper "Adversarial Training Against Location-Optimized Adversarial Patches" (ECCV-W 2020) (☆47, updated Oct 3, 2023)
- "Oblique Decision Trees from Derivatives of ReLU Networks" (ICLR 2020, previously called "Locally Constant Networks") (☆22, updated Apr 27, 2021)