IntelLabs / OSCAR
Object Sensing and Cognition for Adversarial Robustness
☆20 · Updated last year
Alternatives and similar repositories for OSCAR:
Users interested in OSCAR also compare it to the libraries listed below.
- ARMORY Adversarial Robustness Evaluation Test Bed ☆177 · Updated last year
- Example external repository for interacting with armory. ☆11 · Updated 2 years ago
- Source code for "Neural Anisotropy Directions" ☆15 · Updated 4 years ago
- Code/figures in Right for the Right Reasons ☆55 · Updated 4 years ago
- Algorithms for Privacy-Preserving Machine Learning in JAX ☆92 · Updated 7 months ago
- ☆120 · Updated 3 years ago
- Discount jupyter. ☆48 · Updated 2 years ago
- LaTeX source for the paper "On Evaluating Adversarial Robustness" ☆252 · Updated 3 years ago
- A unified benchmark problem for data poisoning attacks ☆152 · Updated last year
- Code for "Testing Robustness Against Unforeseen Adversaries" ☆80 · Updated 6 months ago
- 🛠️ Corrected Test Sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet ☆183 · Updated 2 years ago
- Code for fast dpsgd implementations in JAX/TF ☆58 · Updated 2 years ago
- DeepOBS: A Deep Learning Optimizer Benchmark Suite ☆103 · Updated last year
- Code for Auditing DPSGD ☆37 · Updated 2 years ago
- Datasets derived from US census data ☆248 · Updated 8 months ago
- 💡 Adversarial attacks on explanations and how to defend them ☆309 · Updated 2 months ago
- Bluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks ☆23 · Updated last year
- Neural network verification in JAX ☆141 · Updated last year
- 🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations ☆114 · Updated 5 years ago
- Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models". ☆55 · Updated 3 years ago
- FairVis: Visual Analytics for Discovering Intersectional Bias in Machine Learning ☆37 · Updated 9 months ago
- ☆50 · Updated 4 years ago
- ☆132 · Updated 5 years ago
- Randomized Smoothing of All Shapes and Sizes (ICML 2020). ☆52 · Updated 4 years ago
- Reference implementation for "Explanations Can Be Manipulated and Geometry Is to Blame" ☆36 · Updated 2 years ago
- Code for model-targeted poisoning ☆12 · Updated last year
- CaPC is a method that enables collaborating parties to improve their own local heterogeneous machine learning models in a setting where b… ☆26 · Updated 2 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆33 · Updated 9 months ago
- Code for our NeurIPS 2019 *spotlight* "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers" ☆224 · Updated 5 years ago
- Official implementation for the paper "A New Defense Against Adversarial Images: Turning a Weakness into a Strength" ☆38 · Updated 4 years ago