jphall663 / secure_ML_ideas
Practical ideas on securing machine learning models
☆36 · Updated 3 years ago
Related projects
Alternatives and complementary repositories for secure_ML_ideas
- Paper and talk from the KDD 2019 XAI Workshop ☆20 · Updated 4 years ago
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP! ☆28 · Updated 5 years ago
- Python implementation of the R package breakDown ☆41 · Updated last year
- this repo might get accepted ☆29 · Updated 3 years ago
- Guidelines for the responsible use of explainable AI and machine learning. ☆17 · Updated last year
- A machine learning testing framework for sklearn and pandas. The goal is to help folks assess whether things have changed over time. ☆101 · Updated 3 years ago
- Slides, videos, and other potentially useful artifacts from various presentations on responsible machine learning. ☆22 · Updated 5 years ago
- Content for the Model Interpretability Tutorial at PyCon US 2019 ☆41 · Updated 3 months ago
- State management framework for Data Science & Analytics ☆19 · Updated 5 years ago
- Distributed, large-scale benchmarking framework for rigorous assessment of automatic machine learning repositories, projects, and librar… ☆30 · Updated 2 years ago
- Hypergol is a Data Science/Machine Learning productivity toolkit to accelerate any projects into production with autogenerated code, stan… ☆53 · Updated last year
- Train multi-task image, text, or ensemble (image + text) models ☆45 · Updated last year
- Proposal documents for Fairlearn ☆9 · Updated 4 years ago
- Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University ☆45 · Updated last year
- A collection of machine learning model cards and datasheets. ☆71 · Updated 5 months ago
- H2O.ai Driverless AI code samples and tutorials ☆37 · Updated 3 weeks ago
- The fast.ai data ethics course ☆14 · Updated last year
- Know your ML Score, based on Sculley's paper ☆34 · Updated 5 years ago
- FairPut - Machine Learning Fairness Framework with LightGBM: Explainability, Robustness, Fairness (by @firmai) ☆70 · Updated 3 years ago
- Embed categorical variables via neural networks. ☆59 · Updated last year
- Repository for the research and implementation of categorical encoding as a Featuretools-compatible Python library ☆50 · Updated 2 years ago
- Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models. ☆21 · Updated 2 years ago
- Tutorial for a new versioned Machine Learning pipeline ☆81 · Updated 3 years ago
- Notebook demonstrating use of LIME to interpret a model of long-term relationship success ☆24 · Updated 7 years ago