RyanLucas3 / HR_Neural_Networks
Certified robustness of deep neural networks
☆19 · Updated last year
Alternatives and similar repositories for HR_Neural_Networks
Users interested in HR_Neural_Networks are comparing it to the libraries listed below.
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆21 · Updated last year
- ☆58 · Updated 5 years ago
- Certified robustness "for free" using off-the-shelf diffusion models and classifiers ☆44 · Updated 2 years ago
- ICLR 2023 paper "Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness" by Yuancheng Xu, Yanchao Sun, Micah Gold… ☆25 · Updated 2 years ago
- Official repository for "LAVA: Data Valuation without Pre-Specified Learning Algorithms" (ICLR 2023) ☆52 · Updated last year
- Official implementation of "Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds" (NeurIPS 2021) ☆24 · Updated 3 years ago
- ☆50 · Updated last year
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] ☆54 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- Code for the paper "ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization" ☆31 · Updated 5 years ago
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆81 · Updated last year
- ☆32 · Updated 2 years ago
- Starter kit and data loading code for the Trojan Detection Challenge NeurIPS 2022 competition ☆33 · Updated 2 years ago
- ☆16 · Updated 2 years ago
- Official code for the FAccT '21 paper "Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning" https://arxiv.org/abs… ☆13 · Updated 4 years ago
- On the effectiveness of adversarial training against common corruptions [UAI 2022] ☆30 · Updated 3 years ago
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks ☆39 · Updated 4 years ago
- Code for the paper "Better Diffusion Models Further Improve Adversarial Training" (ICML 2023) ☆146 · Updated 2 years ago
- auto_LiRPA: an automatic linear-relaxation-based perturbation analysis library for neural networks and general computational graphs ☆332 · Updated 2 weeks ago
- PyTorch implementations of adversarial defenses and utilities ☆34 · Updated 2 years ago
- Accompanying code for "Feature Learning in Deep Classifiers through Intermediate Neural Collapse" (ICML 2023) ☆15 · Updated 2 years ago
- Camouflage poisoning via machine unlearning ☆18 · Updated 5 months ago
- A unified toolbox for running major robustness verification approaches for DNNs [S&P 2023] ☆90 · Updated 2 years ago
- [NeurIPS 2021] Fast Certified Robust Training with Short Warmup ☆25 · Updated 6 months ago
- Source code for "What can linearized neural networks actually say about generalization?" ☆20 · Updated 4 years ago
- Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models" ☆56 · Updated 3 years ago
- Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion ☆11 · Updated last year
- A unified benchmark problem for data poisoning attacks ☆161 · Updated 2 years ago
- Code for the paper "Robustness of Bayesian Neural Networks to Gradient-Based Attacks" ☆17 · Updated last year
- ☆18 · Updated 3 years ago