ModelOriented / MAIR
Monitoring of AI Regulations
☆19 · Updated 4 years ago
Alternatives and similar repositories for MAIR
Users interested in MAIR are comparing it to the libraries listed below.
- ☆29 · Updated last year
- Flexible tool for bias detection, visualization, and mitigation ☆85 · Updated 5 months ago
- Python implementation of R package breakDown ☆43 · Updated 2 years ago
- XAI Stories. Case studies for eXplainable Artificial Intelligence ☆29 · Updated 4 years ago
- Practical ideas on securing machine learning models ☆36 · Updated 4 years ago
- Model verification, validation, and error analysis ☆58 · Updated last year
- Interactive XAI dashboard ☆22 · Updated last year
- Causal Inference Using Quasi-Experimental Methods ☆20 · Updated 4 years ago
- Documentation for the DALEX project ☆36 · Updated last year
- Predict whether a student will correctly answer a problem based on past performance using automated feature engineering ☆32 · Updated 4 years ago
- Break Down with interactions for local explanations (SHAP, BreakDown, iBreakDown) ☆84 · Updated last year
- Multi-Calibration & Multi-Accuracy Boosting for R ☆32 · Updated 9 months ago
- Active Learning in R ☆47 · Updated 8 years ago
- Model-agnostic Statistical/Machine Learning explainability (currently Python) for tabular data ☆9 · Updated 3 months ago
- Set of tools to support results from post hoc testing ☆24 · Updated 5 years ago
- Python library for Ceteris Paribus Plots (What-if plots) ☆25 · Updated 4 years ago
- Reading history for Fair ML Reading Group in Melbourne ☆36 · Updated 3 years ago
- Privacy-preserving synthetic data generation workflows ☆20 · Updated 3 years ago
- Materials from seminars held at the MI^2 DataLab. ☆33 · Updated 2 months ago
- This package provides a mixture-model-based approach for deep learning. ☆9 · Updated 2 years ago
- Implements the model described in "Identification, Interpretability, and Bayesian Word Embeddings" ☆19 · Updated 6 years ago
- R code for reading and writing files in libsvm format ☆14 · Updated 10 years ago
- Learning Discrete Bayesian Network Classifiers from Data ☆20 · Updated last year
- Quantifying Interpretability of Arbitrary Machine Learning Models Through Functional Decomposition ☆16 · Updated 5 years ago
- Nested cross-validation for accurate confidence intervals for prediction error ☆41 · Updated 3 years ago
- Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models ☆21 · Updated 3 years ago
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP! ☆28 · Updated 6 years ago
- Tools for Measuring Classification Performance for R, Python and Spark ☆13 · Updated 7 years ago
- Performs unique entity estimation corresponding to Chen, Shrivastava, Steorts (2018) ☆14 · Updated 6 years ago
- Most recent/important talks given at conferences/meetups ☆14 · Updated 4 years ago