ZhengyuZhao / AI-Security-and-Privacy-Events
A curated list of academic events on AI Security & Privacy
☆167 · Updated last year
Alternatives and similar repositories for AI-Security-and-Privacy-Events
Users interested in AI-Security-and-Privacy-Events are comparing it to the repositories listed below:
- Implementations of data poisoning attacks against neural networks and related defenses. ☆100 · Updated last year
- TrojanZoo provides a universal PyTorch platform for conducting security research (especially backdoor attacks/defenses) on image classification. ☆302 · Updated 3 months ago
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆229 · Updated last year
- A curated list of trustworthy Generative AI papers. Daily updating... ☆75 · Updated last year
- A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained) ☆280 · Updated 11 months ago
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆129 · Updated last year
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" ☆210 · Updated 6 months ago
- A curated list of Machine Learning Security & Privacy papers published in the top-4 security conferences (IEEE S&P, ACM CCS, USENIX Security, NDSS). ☆310 · Updated last month
- A list of recent papers about adversarial learning ☆257 · Updated this week
- A compact toolbox for backdoor attacks and defenses. ☆186 · Updated last year
- ☆44 · Updated 2 years ago
- Anti-Backdoor Learning (NeurIPS 2021) ☆84 · Updated 2 years ago
- Composite Backdoor Attacks Against Large Language Models ☆21 · Updated last year
- Official Repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" ☆132 · Updated 2 years ago
- A unified benchmark problem for data poisoning attacks ☆161 · Updated 2 years ago
- Code for ML Doctor ☆91 · Updated last year
- ☆149 · Updated last year
- ☆68 · Updated 5 years ago
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆33 · Updated 5 years ago
- ☆100 · Updated 5 years ago
- Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and Privacy 2019. ☆311 · Updated 5 years ago
- Invisible Backdoor Attack with Sample-Specific Triggers ☆103 · Updated 3 years ago
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models ☆254 · Updated last month
- Papers I collected and read during my undergraduate and graduate studies ☆52 · Updated 2 years ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆83 · Updated 2 years ago
- ☆32 · Updated last year
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆111 · Updated last year
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆51 · Updated 3 years ago
- ☆363 · Updated last month
- ☆68 · Updated last year