AdamtayZzz / AI-security-related-paper-list
Papers I collected and read during my undergraduate and graduate studies.
☆47 · Updated last year
Related projects
Alternatives and complementary repositories for AI-security-related-paper-list
- Source code for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆49 · Updated last week
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆47 · Updated 2 years ago
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆46 · Updated 6 years ago
- Code for ML Doctor ☆86 · Updated 3 months ago
- Implementation demo of the ICLR 2021 paper "Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks" ☆118 · Updated 2 years ago
- Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020)☆18Updated 4 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆50 · Updated 2 years ago
- A curated list of academic events on AI Security & Privacy ☆135 · Updated 3 months ago
- Code for the paper "Label-Only Membership Inference Attacks" ☆64 · Updated 3 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆46 · Updated 2 years ago
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆29 · Updated 4 years ago
- Code release for DeepJudge (S&P'22) ☆51 · Updated last year
- Anti-Backdoor Learning (NeurIPS 2021) ☆78 · Updated last year
- [AAAI'21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification ☆28 · Updated 5 months ago
- Invisible Backdoor Attack with Sample-Specific Triggers ☆91 · Updated 2 years ago
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" ☆80 · Updated 3 years ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆31 · Updated 8 months ago
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models ☆123 · Updated 7 months ago
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NIPS 2020)☆16Updated 4 years ago
- Craft poisoned data using MetaPoison ☆47 · Updated 3 years ago
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System ☆30 · Updated 2 weeks ago