AI-secure / TextGuard
TextGuard: Provable Defense against Backdoor Attacks on Text Classification
☆13 · Updated 2 years ago
Alternatives and similar repositories for TextGuard
Users interested in TextGuard are comparing it to the repositories listed below.
- ☆26 · Updated last year
- The implementation of the IEEE S&P 2024 paper MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Us… ☆17 · Updated last year
- ☆27 · Updated 2 years ago
- ☆36 · Updated last year
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt… ☆43 · Updated 4 months ago
- ☆54 · Updated 2 years ago
- Code related to the paper "Machine Unlearning of Features and Labels" ☆72 · Updated last year
- Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger" ☆43 · Updated 3 years ago
- ☆30 · Updated 2 years ago
- Code and supplementary material for the paper "Label Inference Attacks Against Federated Learning", USENIX Security 2022 ☆87 · Updated 2 years ago
- An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight) ☆199 · Updated 2 years ago
- Composite Backdoor Attacks Against Large Language Models ☆21 · Updated last year
- [ICLR 2023, Best Paper Award at ECCV '22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆60 · Updated last year
- Official implementation of "Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks" (CVPR 2022 Oral) ☆26 · Updated 6 months ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341 ☆81 · Updated 2 years ago
- Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearni… ☆19 · Updated last year
- Source code for MEA-Defender; the paper was accepted at the IEEE Symposium on Security and Privacy (S&P) 2024 ☆29 · Updated 2 years ago
- IBA: Towards Irreversible Backdoor Attacks in Federated Learning (Poster at NeurIPS 2023) ☆39 · Updated 4 months ago
- Anti-Backdoor Learning (NeurIPS 2021) ☆84 · Updated 2 years ago
- Implementation of the paper "Membership Inference Attacks Against Machine Learning Models", Shokri et al. ☆60 · Updated 6 years ago
- ☆77 · Updated 3 years ago
- Backdoor detection in federated learning with similarity measurement ☆26 · Updated 3 years ago
- ☆15 · Updated 2 years ago
- A reproduction of the Neural Cleanse paper (published at Oakland, i.e. IEEE S&P); genuinely simple and effective ☆33 · Updated 4 years ago
- Code for ML Doctor ☆92 · Updated last year
- ☆17 · Updated last year
- ☆25 · Updated 4 years ago
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models ☆132 · Updated last year
- ☆72 · Updated 3 years ago
- [Oakland 2024] Exploring the Orthogonality and Linearity of Backdoor Attacks ☆27 · Updated 9 months ago