IntelLabs / OSCAR
Object Sensing and Cognition for Adversarial Robustness
☆20 · Updated last year
Alternatives and similar repositories for OSCAR:
Users interested in OSCAR are comparing it to the repositories listed below.
- Discount jupyter. ☆50 · Updated last month
- ARMORY Adversarial Robustness Evaluation Test Bed ☆179 · Updated last year
- ☆123 · Updated 3 years ago
- Official repository for our NeurIPS 2021 paper "Unadversarial Examples: Designing Objects for Robust Vision" ☆104 · Updated 9 months ago
- Source code for "Neural Anisotropy Directions" ☆15 · Updated 4 years ago
- Algorithms for Privacy-Preserving Machine Learning in JAX ☆95 · Updated last week
- DeepOBS: A Deep Learning Optimizer Benchmark Suite ☆106 · Updated last year
- Convex Layerwise Adversarial Training (COLT) ☆28 · Updated 4 years ago
- ☆54 · Updated 4 years ago
- Datasets derived from US census data ☆258 · Updated 11 months ago
- Certified defense to adversarial examples using CROWN and IBP. Also includes GPU implementation of CROWN verification algorithm (in PyTor… ☆94 · Updated 3 years ago
- A benchmark for LLMs on complicated tasks in the terminal ☆30 · Updated this week
- LaTeX source for the paper "On Evaluating Adversarial Robustness" ☆255 · Updated 4 years ago
- 🛠️ Corrected Test Sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet ☆183 · Updated 2 years ago
- ☆11 · Updated 2 years ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆53 · Updated 3 years ago
- A unified benchmark problem for data poisoning attacks ☆155 · Updated last year
- Bluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks ☆23 · Updated last year
- Neural network verification in JAX ☆142 · Updated last year
- Python library for argument and configuration management ☆53 · Updated 2 years ago
- Code for Auditing DPSGD ☆37 · Updated 3 years ago
- Randomized Smoothing of All Shapes and Sizes (ICML 2020) ☆52 · Updated 4 years ago
- This code reproduces the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information" ☆50 · Updated 3 years ago
- ☆51 · Updated 4 years ago
- Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs ☆97 · Updated 4 years ago
- The official repo for the GCP-CROWN paper ☆13 · Updated 2 years ago
- A community-run reference for state-of-the-art adversarial example defenses ☆50 · Updated 6 months ago
- Code for reproducing the experimental results in "CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Net… ☆27 · Updated 3 years ago
- A powerful white-box adversarial attack that exploits knowledge about the geometry of neural networks to find minimal adversarial perturb… ☆12 · Updated 4 years ago
- Code/figures in Right for the Right Reasons ☆55 · Updated 4 years ago