ZhengyuZhao / AI-Security-and-Privacy-Events
A curated list of academic events on AI Security & Privacy
☆141 Updated 5 months ago
Alternatives and similar repositories for AI-Security-and-Privacy-Events:
Users interested in AI-Security-and-Privacy-Events are comparing it to the repositories listed below
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆192 Updated 10 months ago
- A curated list of papers & resources linked to data poisoning, backdoor attacks, and defenses against them (no longer maintained) ☆219 Updated 2 weeks ago
- Code for ML Doctor ☆85 Updated 5 months ago
- TrojanZoo provides a universal PyTorch platform for conducting security research (especially backdoor attacks/defenses) on image classifica… ☆286 Updated 5 months ago
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆117 Updated 2 months ago
- A compact toolbox for backdoor attacks and defenses. ☆159 Updated 6 months ago
- ☆141 Updated 3 months ago
- Implementations of data poisoning attacks against neural networks and related defenses. ☆73 Updated 6 months ago
- An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight) ☆166 Updated last year
- A list of recent papers about adversarial learning ☆107 Updated this week
- A curated list of Machine Learning Security & Privacy papers published in the top-4 security conferences (IEEE S&P, ACM CCS, USENIX Security… ☆236 Updated 2 months ago
- Invisible Backdoor Attack with Sample-Specific Triggers ☆93 Updated 2 years ago
- A demo implementation of the ICLR 2021 paper [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks… ☆120 Updated 3 years ago
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" ☆148 Updated 3 weeks ago
- ☆64 Updated 4 years ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆75 Updated last year
- A reproduction of the Neural Cleanse paper; it is simple yet effective, and was published at Oakland (IEEE S&P) ☆30 Updated 3 years ago
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆96 Updated 5 months ago
- ☆44 Updated last year
- Official repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" ☆122 Updated last year
- Code related to the paper "Machine Unlearning of Features and Labels" ☆68 Updated 11 months ago
- ☆79 Updated 3 years ago
- Papers I collected and read during my undergraduate and graduate studies ☆48 Updated last year
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) ☆34 Updated 6 months ago
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆29 Updated 4 years ago
- This repository provides an implementation to formalize and benchmark prompt injection attacks and defenses ☆165 Updated last week
- A survey of privacy problems in Large Language Models (LLMs). Contains summaries of the corresponding papers along with relevant code ☆65 Updated 7 months ago
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" ☆82 Updated 3 years ago
- Anti-Backdoor Learning (NeurIPS 2021) ☆81 Updated last year
- A unified benchmark problem for data poisoning attacks ☆152 Updated last year