poloclub/bluff
Bluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks
☆23 · Updated last year
Alternatives and similar repositories for bluff:
Users interested in bluff are comparing it to the repositories listed below.
- Scalable Automatic Visual Summarization of Concepts in Deep Neural Networks ☆17 · Updated 2 years ago
- Task-agnostic universal black-box attacks on computer vision neural network via procedural noise (CCS'19) ☆55 · Updated 4 years ago
- Exploring unprecedented avenues for data harvesting in the metaverse ☆18 · Updated last year
- Athena: A Framework for Defending Machine Learning Systems Against Adversarial Attacks ☆42 · Updated 3 years ago
- Codes for reproducing the robustness evaluation scores in “Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approac… ☆52 · Updated 6 years ago
- FairVis: Visual Analytics for Discovering Intersectional Bias in Machine Learning ☆38 · Updated last year
- ☆123 · Updated 3 years ago
- Towards Reverse-Engineering Black-Box Neural Networks, ICLR'18 ☆55 · Updated 5 years ago
- A community-run reference for state-of-the-art adversarial example defenses. ☆50 · Updated 6 months ago
- Code for paper "Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers" ☆17 · Updated 2 years ago
- Code for our ICLR Trustworthy ML 2020 workshop paper "Improved Image Wasserstein Attacks and Defenses" ☆14 · Updated 4 years ago
- ☆38 · Updated 3 years ago
- Object Sensing and Cognition for Adversarial Robustness ☆20 · Updated last year
- CROWN: A Neural Network Verification Framework for Networks with General Activation Functions ☆38 · Updated 6 years ago
- Code corresponding to the paper "Adversarial Examples are not Easily Detected..." ☆86 · Updated 7 years ago
- Learning perturbation sets for robust machine learning ☆65 · Updated 3 years ago
- Universal Robustness Evaluation Toolkit (for Evasion) ☆32 · Updated last year
- ☆12 · Updated 5 years ago
- ☆29 · Updated 6 years ago
- Certifying Geometric Robustness of Neural Networks ☆16 · Updated 2 years ago
- Codes for reproducing the white-box adversarial attacks in “EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples,” … ☆21 · Updated 6 years ago
- Developing adversarial examples and showing their semantic generalization for the OpenAI CLIP model (https://github.com/openai/CLIP) ☆26 · Updated 4 years ago
- ☆64 · Updated 4 years ago
- ☆18 · Updated 10 months ago
- SHIELD: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression ☆81 · Updated 2 years ago
- ☆23 · Updated 8 months ago
- Repository for the paper Do SSL Models Have Déjà Vu? A Case of Unintended Memorization in Self-supervised Learning ☆36 · Updated last year
- Codes for reproducing the black-box adversarial attacks in “ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Network… ☆58 · Updated 5 years ago
- A re-implementation of the "Extracting Training Data from Large Language Models" paper by Carlini et al., 2020 ☆35 · Updated 2 years ago
- A curated list on the literature of autoencoders for representation learning. ☆30 · Updated 4 years ago