jphall663 / secure_ML_ideas
Practical ideas on securing machine learning models
☆36 · Updated 4 years ago
Alternatives and similar repositories for secure_ML_ideas
Users interested in secure_ML_ideas are comparing it to the libraries listed below.
- ☆37 · Updated 3 weeks ago
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP! ☆28 · Updated 5 years ago
- Python implementation of R package breakDown ☆43 · Updated last year
- Paper and talk from KDD 2019 XAI Workshop ☆20 · Updated 5 years ago
- Slides, videos and other potentially useful artifacts from various presentations on responsible machine learning. ☆22 · Updated 5 years ago
- Guidelines for the responsible use of explainable AI and machine learning. ☆17 · Updated 2 years ago
- Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/ ☆27 · Updated 11 months ago
- H2OAI Driverless AI Code Samples and Tutorials ☆37 · Updated 7 months ago
- Repository for the research and implementation of categorical encoding into a Featuretools-compatible Python library ☆51 · Updated 2 years ago
- Workshop on Target Leakage in Machine Learning I taught at ODSC Europe 2018 (London) and ODSC East 2019, 2020 (Boston) ☆37 · Updated 5 years ago
- Proposal Documents for Fairlearn ☆9 · Updated 4 years ago
- this repo might get accepted ☆28 · Updated 4 years ago
- Content for the Model Interpretability Tutorial at Pycon US 2019 ☆41 · Updated 10 months ago
- Train multi-task image, text, or ensemble (image + text) models ☆45 · Updated last year
- A machine learning testing framework for sklearn and pandas. The goal is to help folks assess whether things have changed over time. ☆102 · Updated 3 years ago
- FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai) ☆71 · Updated 3 years ago
- Hypergol is a Data Science/Machine Learning productivity toolkit to accelerate any projects into production with autogenerated code, stan… ☆53 · Updated 2 years ago
- Know your ML Score based on Sculley's paper ☆34 · Updated 6 years ago
- ☆102 · Updated last year
- Predict whether a student will correctly answer a problem based on past performance using automated feature engineering ☆32 · Updated 4 years ago
- My UC Berkeley Ph.D. dissertation. ☆9 · Updated 3 years ago
- Developmental tools to detect data drift ☆16 · Updated last year
- Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models. ☆21 · Updated 2 years ago
- XAI Stories. Case studies for eXplainable Artificial Intelligence ☆29 · Updated 4 years ago
- ☆14 · Updated 4 years ago
- Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University ☆45 · Updated 2 years ago
- Repo for the ML_Insights python package ☆152 · Updated last month
- Best practices for engineering ML pipelines. ☆35 · Updated 2 years ago
- Notebook demonstrating use of LIME to interpret a model of long-term relationship success (a minimal LIME sketch follows this list) ☆24 · Updated 7 years ago
- Surrogate Assisted Feature Extraction ☆37 · Updated 3 years ago
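For the LIME notebook listed above, here is a minimal sketch of how LIME's tabular explainer is typically invoked. The dataset, feature names, and classifier below are stand-ins of my own choosing, not taken from the linked repository; only the `lime.lime_tabular` calls reflect the actual LIME API.

```python
# A minimal LIME sketch for a tabular classifier. The data and model are
# synthetic placeholders; the linked notebook models relationship outcomes.
import lime.lime_tabular
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data and model (assumption, not from the repo).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
clf = RandomForestClassifier(random_state=0).fit(X, y)

explainer = lime.lime_tabular.LimeTabularExplainer(
    training_data=X,
    feature_names=feature_names,
    class_names=["class_0", "class_1"],
    mode="classification",
)

# Explain one prediction: LIME perturbs the row, fits a local linear
# surrogate, and reports the most heavily weighted features.
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=5)
print(exp.as_list())
```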