IntelLabs / OSCAR
Object Sensing and Cognition for Adversarial Robustness
☆20 · Updated last year
Alternatives and similar repositories for OSCAR
Users interested in OSCAR are comparing it to the libraries listed below:
- ☆128 · Updated 3 years ago
- ARMORY Adversarial Robustness Evaluation Test Bed ☆183 · Updated last year
- Neural network verification in JAX ☆145 · Updated 2 years ago
- Algorithms for Privacy-Preserving Machine Learning in JAX ☆96 · Updated last month
- Datasets derived from US census data ☆268 · Updated last year
- Discount jupyter. ☆51 · Updated 5 months ago
- LaTeX source for the paper "On Evaluating Adversarial Robustness" ☆255 · Updated 4 years ago
- Code for fast dpsgd implementations in JAX/TF ☆59 · Updated 2 years ago
- ☆469 · Updated 4 months ago
- ☆157 · Updated 3 years ago
- CoRelAy is a tool to compose small-scale (single-machine) analysis pipelines. ☆28 · Updated last month
- Interfaces for defining Robust ML models and precisely specifying the threat models under which they claim to be secure. ☆62 · Updated 6 years ago
- CHOP: An optimization library based on PyTorch, with applications to adversarial examples and structured neural network training. ☆78 · Updated last year
- Train CIFAR10 to 94% accuracy in a few minutes/seconds. Based on https://github.com/davidcpage/cifar10-fast ☆22 · Updated 2 years ago
- Official repository for CMU Machine Learning Department's 10721: "Philosophical Foundations of Machine Intelligence". ☆262 · Updated 2 years ago
- DeepOBS: A Deep Learning Optimizer Benchmark Suite ☆106 · Updated last year
- Lint for privacy ☆27 · Updated 2 years ago
- A community-run reference for state-of-the-art adversarial example defenses. ☆50 · Updated 10 months ago
- Einsum with einops style variable names ☆17 · Updated last year
- This repository contains the results for the paper: "Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers" ☆182 · Updated 4 years ago
- 🛠️ Corrected Test Sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet ☆185 · Updated 2 years ago
- Reference implementation for "explanations can be manipulated and geometry is to blame" ☆36 · Updated 3 years ago
- A powerful white-box adversarial attack that exploits knowledge about the geometry of neural networks to find minimal adversarial perturb… ☆12 · Updated 5 years ago
- A School for All Seasons on Trustworthy Machine Learning ☆12 · Updated 4 years ago
- ETH Robustness Analyzer for Deep Neural Networks ☆339 · Updated 2 years ago
- Achieve error-rate fairness between societal groups for any score-based classifier. ☆19 · Updated last week
- A Machine Learning workflow for Slurm. ☆150 · Updated 4 years ago
- A library for bridging Python and HTML/Javascript (via Svelte) for creating interactive visualizations ☆196 · Updated 3 years ago
- ☆77 · Updated this week
- Website for Security and Privacy of Machine Learning ☆14 · Updated 3 years ago