reds-lab / Universal_Pert_Cert
This repo is the official implementation of the ICLR'23 paper "Towards Robustness Certification Against Universal Perturbations." We calculate the certified robustness against universal perturbations (UAP/backdoor) given a trained model.
☆12 · Updated Feb 14, 2023
Alternatives and similar repositories for Universal_Pert_Cert
Users interested in Universal_Pert_Cert are comparing it to the repositories listed below.
- This repository is the official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning… ☆19 · Updated Jun 7, 2023
- The official implementation of USENIX Security'23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … ☆20 · Updated Apr 27, 2023
- Code for "On L-p Robustness of Decision Stumps and Trees" (ICML 2020) ☆10 · Updated Aug 3, 2020
- ☆10 · Updated Oct 31, 2022
- PyTorch implementation of NPAttack ☆12 · Updated Jul 7, 2020
- A Fine-grained Differentially Private Federated Learning against Leakage from Gradients ☆15 · Updated Jan 18, 2023
- Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder (CVPR 2020) ☆12 · Updated Aug 25, 2020
- ☆19 · Updated Mar 26, 2022
- The implementation of our ICLR 2021 work: Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits ☆18 · Updated Jul 20, 2021
- Implementation of TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems (https://arxiv.org/pdf/190… ☆19 · Updated Apr 13, 2023
- Code for our ICLR 2023 paper "Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples" ☆18 · Updated May 31, 2023
- This is an official repository for "LAVA: Data Valuation without Pre-Specified Learning Algorithms" (ICLR 2023) ☆52 · Updated Jun 5, 2024
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" ☆20 · Updated Aug 9, 2023
- An implementation of the ICCV 2021 paper "Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection" ☆28 · Updated Aug 27, 2021
- [ICLR 2022] Training L_inf-dist-net with faster acceleration and better training strategies ☆22 · Updated Mar 16, 2022
- Official implementation for Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds (NeurIPS 2021) ☆25 · Updated Sep 4, 2022
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆27 · Updated Nov 18, 2024
- This repo keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on … ☆98 · Updated Oct 18, 2022
- ☆25 · Updated Mar 24, 2023
- Code for the paper "On the Adversarial Robustness of Visual Transformers" ☆59 · Updated Nov 18, 2021
- β-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Verification ☆31 · Updated Nov 9, 2021
- ☆27 · Updated Nov 9, 2022
- [Oakland 2024] Exploring the Orthogonality and Linearity of Backdoor Attacks ☆27 · Updated Apr 15, 2025
- Codes for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness" published at IC… ☆27 · Updated Apr 29, 2020
- Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network ☆61 · Updated Jun 25, 2019
- The official implementation of the CCS'23 paper, Narcissus clean-label backdoor attack -- only takes THREE images to poison a face recogn… ☆123 · Updated May 9, 2023
- Source code for ECCV 2022 poster: Data-free Backdoor Removal based on Channel Lipschitzness ☆35 · Updated Jan 9, 2023
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆36 · Updated Oct 29, 2025
- Official repository for CVPR'23 paper: Detecting Backdoors in Pre-trained Encoders ☆36 · Updated Sep 25, 2023
- ☆28 · Updated Dec 31, 2020
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated Dec 16, 2022
- ☆30 · Updated Jul 26, 2021
- ☆32 · Updated Mar 4, 2022
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" ☆31 · Updated Dec 12, 2021
- GitHub repo for AAAI 2023 paper: On the Vulnerability of Backdoor Defenses for Federated Learning ☆41 · Updated Apr 3, 2023
- ☆23 · Updated Jan 21, 2026
- ☆10 · Updated May 18, 2024
- Starter kit and data loading code for the Trojan Detection Challenge NeurIPS 2022 competition ☆33 · Updated Jul 26, 2023
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them (NeurIPS 2020) ☆36 · Updated Jul 3, 2021