jphall663 / secure_ML_ideas
Practical ideas on securing machine learning models
☆36 · Updated 3 years ago
Alternatives and similar repositories for secure_ML_ideas:
Users interested in secure_ML_ideas are comparing it to the libraries listed below.
- ☆37 · Updated last week
- Python implementation of R package breakDown ☆42 · Updated last year
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP! ☆28 · Updated 5 years ago
- Paper and talk from KDD 2019 XAI Workshop ☆20 · Updated 4 years ago
- Slides, videos and other potentially useful artifacts from various presentations on responsible machine learning. ☆22 · Updated 5 years ago
- this repo might get accepted ☆28 · Updated 4 years ago
- H2OAI Driverless AI Code Samples and Tutorials ☆37 · Updated 6 months ago
- Guidelines for the responsible use of explainable AI and machine learning. ☆17 · Updated 2 years ago
- Content for the Model Interpretability Tutorial at Pycon US 2019 ☆41 · Updated 9 months ago
- A machine learning testing framework for sklearn and pandas. The goal is to help folks assess whether things have changed over time. ☆102 · Updated 3 years ago
- Workshop on Target Leakage in Machine Learning I taught at ODSC Europe 2018 (London) and ODSC East 2019, 2020 (Boston) ☆37 · Updated 5 years ago
- Repository for the research and implementation of categorical encoding into a Featuretools-compatible Python library ☆51 · Updated 2 years ago
- FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai) ☆71 · Updated 3 years ago
- Distributed, large-scale, benchmarking framework for rigorous assessment of automatic machine learning repositories, projects, and librar… ☆30 · Updated 2 years ago
- Predict whether a student will correctly answer a problem based on past performance using automated feature engineering ☆32 · Updated 4 years ago
- Hypergol is a Data Science/Machine Learning productivity toolkit to accelerate any projects into production with autogenerated code, stan… ☆53 · Updated 2 years ago
- Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University ☆45 · Updated 2 years ago
- Python library for Ceteris Paribus Plots (What-if plots) ☆24 · Updated 4 years ago
- State management framework for Data Science & Analytics ☆19 · Updated 5 years ago
- Notebook demonstrating use of LIME to interpret a model of long-term relationship success ☆24 · Updated 7 years ago
- Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models. ☆21 · Updated 2 years ago
- Tutorial for a new versioning Machine Learning pipeline ☆80 · Updated 3 years ago
- Train multi-task image, text, or ensemble (image + text) models ☆45 · Updated last year
- ☆26 · Updated 4 years ago
- scikit-learn gradient-boosting-model interactions ☆25 · Updated 2 years ago
- Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/ ☆27 · Updated 10 months ago
- The fast.ai data ethics course ☆16 · Updated 2 years ago
- General Interpretability Package ☆58 · Updated 2 years ago
- XAI Stories. Case studies for eXplainable Artificial Intelligence ☆29 · Updated 4 years ago
- ☆20 · Updated 4 years ago