TFLlib - Trustworthy Federated Learning Library and Benchmark
☆62 · Updated Nov 15, 2025
Alternatives and similar repositories for TFLlib
Users interested in TFLlib are comparing it to the libraries listed below.
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models (☆19, updated Feb 18, 2025)
- https://icml.cc/virtual/2023/poster/24354 (☆10, updated Aug 15, 2023)
- ☆21 (updated Mar 17, 2025)
- [USENIX Security 2025] SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks (☆19, updated Sep 18, 2025)
- Code for the paper "AdvReverb: Rethinking the Stealthiness of Audio Adversarial Examples to Human Perception" (☆18, updated Nov 26, 2023)
- Eluding Secure Aggregation in Federated Learning via Model Inconsistency (☆13, updated Mar 10, 2023)
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" (☆20, updated Aug 9, 2023)
- Seminar 2022 (☆23, updated Jan 10, 2026)
- ☆27 (updated Nov 20, 2023)
- Source code for MEA-Defender; the paper was accepted at the IEEE Symposium on Security and Privacy (S&P) 2024 (☆29, updated Nov 19, 2023)
- Code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP 2022) (☆28, updated Oct 31, 2022)
- Machine Learning & Security Seminar @ Purdue University (☆25, updated May 9, 2023)
- Code repository for the paper "[USENIX Security 2023] Towards A Proactive ML Approach for Detecting Backdoor Poison Samples" (☆30, updated Jul 11, 2023)
- ☆11 (updated Dec 23, 2024)
- ☆37 (updated Oct 17, 2024)
- [MM'23 Oral] "Text-to-Image Diffusion Models Can Be Easily Backdoored Through Multimodal Data Poisoning" (☆31, updated Aug 14, 2025)
- ☆12 (updated May 6, 2022)
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" (☆40, updated Jul 8, 2024)
- ☆37 (updated Feb 7, 2024)
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" (☆31, updated Dec 12, 2021)
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) (☆34, updated Jun 29, 2025)
- BrainWash: A Poisoning Attack to Forget in Continual Learning (☆12, updated Apr 15, 2024)
- ☆10 (updated Jun 23, 2018)
- On the Robustness of GUI Grounding Models Against Image Attacks (☆12, updated Apr 8, 2025)
- [EMNLP 2025 Oral] IPIGuard: A Novel Tool Dependency Graph-Based Defense Against Indirect Prompt Injection in LLM Agents (☆16, updated Sep 16, 2025)
- Reading-comprehension-based question-answering model for news articles (☆11, updated Jun 22, 2022)
- CROWN: A Neural Network Verification Framework for Networks with General Activation Functions (☆39, updated Dec 13, 2018)
- Attacks using out-of-distribution adversarial examples (☆11, updated Nov 19, 2019)
- [NeurIPS 2022] Explaining Graph Neural Networks with Structure-Aware Cooperative Games (GStarX) (☆14, updated Oct 20, 2022)