ethz-spylab / realistic-adv-examples
Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024]
☆21 · Updated Apr 15, 2024
Alternatives and similar repositories for realistic-adv-examples
Users interested in realistic-adv-examples are comparing it to the repositories listed below.
- ☆16 · Updated Dec 7, 2021
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) · ☆13 · Updated Jan 10, 2023
- ☆48 · Updated Jun 19, 2024
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) · ☆25 · Updated Oct 17, 2022
- Minimum viable code for the Decodable Information Bottleneck paper. PyTorch implementation. · ☆11 · Updated Oct 20, 2020
- ☆10 · Updated Jun 1, 2022
- Provable Worst Case Guarantees for the Detection of Out-of-Distribution Data · ☆13 · Updated Sep 20, 2022
- ☆34 · Updated Jan 25, 2024
- Generate custom text files for dataloader within UDA methods · ☆14 · Updated May 24, 2023
- [NeurIPS'22] Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Haotao Wang, Junyuan Hong, … · ☆15 · Updated Nov 27, 2023
- An amazing party app. · ☆14 · Updated Oct 19, 2018
- ☆35 · Updated May 21, 2025
- On the effectiveness of adversarial training against common corruptions [UAI 2022] · ☆30 · Updated May 16, 2022
- Certified Patch Robustness via Smoothed Vision Transformers · ☆42 · Updated Dec 17, 2021
- ☆80 · Updated May 22, 2022
- ☆13 · Updated May 1, 2024
- ☆14 · Updated Jun 6, 2023
- Data for "Datamodels: Predicting Predictions with Training Data" · ☆97 · Updated May 25, 2023
- ☆16 · Updated Jul 20, 2023
- PyTorch implementation of our ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" · ☆12 · Updated Mar 13, 2023
- ☆15 · Updated Jul 24, 2022
- ☆20 · Updated Oct 28, 2025
- A School for All Seasons on Trustworthy Machine Learning · ☆12 · Updated Jun 30, 2021
- Code for "On Adaptive Attacks to Adversarial Example Defenses" · ☆87 · Updated Feb 18, 2021
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks · ☆38 · Updated May 25, 2021
- Source code for "Neural Anisotropy Directions" · ☆16 · Updated Nov 17, 2020
- Repository for reproducing `Model-Based Robust Deep Learning` · ☆16 · Updated Jan 22, 2021
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging · ☆16 · Updated Oct 14, 2024
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) · ☆46 · Updated Apr 15, 2025
- Private Adaptive Optimization with Side Information (ICML '22) · ☆16 · Updated Jun 23, 2022
- Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples · ☆19 · Updated May 23, 2022
- Repository for our paper published at ICML 2024. · ☆11 · Updated Jun 11, 2025
- A modern look at the relationship between sharpness and generalization [ICML 2023] · ☆43 · Updated Sep 11, 2023
- ☆19 · Updated Jun 5, 2023
- The Pitfalls of Simplicity Bias in Neural Networks [NeurIPS 2020] (http://arxiv.org/abs/2006.07710v2) · ☆42 · Updated Jan 21, 2024
- Code repository for the paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" (ICLR 2023) · ☆47 · Updated Feb 28, 2023
- Distilling Model Failures as Directions in Latent Space · ☆47 · Updated Feb 8, 2023
- Official PyTorch implementation for Continual Learning and Private Unlearning · ☆18 · Updated Jul 19, 2022
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training · ☆32 · Updated Jan 9, 2022