X1aoyangXu / FORA
Official code of the paper "A Stealthy Wrongdoer: Feature-Oriented Reconstruction Attack against Split Learning".
☆15 · Updated last year
Alternatives and similar repositories for FORA
Users interested in FORA are comparing it to the repositories listed below.
- GAN You See Me? Enhanced Data Reconstruction Attacks against Split Inference - NeurIPS 2023 ☆12 · Updated 9 months ago
- From Head to Tail: Efficient Black-box Model Inversion Attack via Long-tailed Learning - CVPR 2025 ☆16 · Updated 9 months ago
- The code for our Updates-Leak paper ☆17 · Updated 5 years ago
- Multi-metrics adaptively identifies backdoors in Federated Learning ☆37 · Updated 4 months ago
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆28 · Updated 3 years ago
- Official Repository for ResSFL (accepted at CVPR '22) ☆25 · Updated 3 years ago
- Backdoor detection in Federated Learning with similarity measurement ☆26 · Updated 3 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341 ☆79 · Updated 2 years ago
- The code of the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" ☆35 · Updated 3 years ago
- Code for the NDSS '25 paper "Passive Inference Attacks on Split Learning via Adversarial Regularization" ☆12 · Updated last year
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" ☆62 · Updated 3 years ago
- [USENIX Security 2024] Official code implementation of "BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federa…" ☆48 · Updated 3 months ago
- A comprehensive toolbox for model inversion attacks and defenses that is easy to get started with ☆188 · Updated 3 months ago
- TFLlib: Trustworthy Federated Learning Library and Benchmark ☆62 · Updated last month
- Code and full version of the paper "Hijacking Attacks against Neural Network by Analyzing Training Data" ☆14 · Updated last year
- Code for ML-Doctor ☆92 · Updated last year
- Code for "Backdoor Attacks Against Dataset Distillation" ☆35 · Updated 2 years ago
- Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning [accepted at ICML 2023] ☆14 · Updated last year
- Code and supplementary material of the paper "Label Inference Attacks Against Federated Learning" (USENIX Security 2022) ☆87 · Updated 2 years ago
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆42 · Updated last year
- WeBank AI ☆42 · Updated 10 months ago
- FLTracer: Accurate Poisoning Attack Provenance in Federated Learning ☆23 · Updated last year
- Official implementation of the paper "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti…" ☆58 · Updated last year
- Code for the USENIX Security 2023 paper "Every Vote Counts: Ranking-Based Training of Federated Learning to Resist Poisoning Attacks" ☆21 · Updated last year
- [ICLR 2023, Best Paper Award at the ECCV '22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆60 · Updated last year
- [ICLR 2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers" ☆49 · Updated last year