cmu-sei / juneberry
Juneberry improves the experience of machine learning experimentation by providing a framework for automating the training, evaluation, and comparison of multiple models against multiple datasets, reducing errors and improving reproducibility.
☆33 · Updated 2 years ago
Alternatives and similar repositories for juneberry
Users interested in juneberry are comparing it to the libraries listed below.
- ARMORY Adversarial Robustness Evaluation Test Bed ☆182 · Updated last year
- ☆127 · Updated 3 years ago
- Privacy Testing for Deep Learning ☆206 · Updated 2 years ago
- PyTorch-centric library for evaluating and enhancing the robustness of AI technologies ☆57 · Updated last year
- 💡 Adversarial attacks on explanations and how to defend them ☆321 · Updated 8 months ago
- A Python library for Secure and Explainable Machine Learning ☆184 · Updated last month
- PhD/MSc course on Machine Learning Security (Univ. Cagliari) ☆210 · Updated last month
- Hardened Extension of the Adversarial Robustness Toolbox (HEART) supports assessment of adversarial AI vulnerabilities in Test & Evaluati… ☆13 · Updated 3 weeks ago
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty i… ☆267 · Updated 2 months ago
- Discount jupyter. ☆51 · Updated 4 months ago
- Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central … ☆47 · Updated last year
- Lint for privacy ☆27 · Updated 2 years ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆78 · Updated 2 years ago
- PyTorch package to train and audit ML models for Individual Fairness ☆66 · Updated 2 months ago
- ☆73 · Updated 2 years ago
- Official implementation of the paper "Increasing Confidence in Adversarial Robustness Evaluations" ☆18 · Updated last month
- ☆315 · Updated 2 years ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆113 · Updated last year
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated 2 years ago
- LaTeX source for the paper "On Evaluating Adversarial Robustness" ☆255 · Updated 4 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated 11 months ago
- ☆470 · Updated 3 months ago
- Datasets derived from US census data ☆268 · Updated last year
- A toolkit for tools and techniques related to the privacy and compliance of AI models. ☆106 · Updated 2 months ago
- Algorithms for Privacy-Preserving Machine Learning in JAX ☆95 · Updated 3 weeks ago
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers ☆97 · Updated 4 months ago
- A platform for managing machine learning experiments ☆863 · Updated 2 weeks ago
- ☆39 · Updated 2 years ago
- PyTorch code corresponding to my blog series on adversarial examples and (confidence-calibrated) adversarial training ☆68 · Updated 2 years ago
- SecML-Torch: A Library for Robustness Evaluation of Deep Learning Models ☆52 · Updated this week