eth-sri / llmprivacy
☆70 · Updated Feb 16, 2025
Alternatives and similar repositories for llmprivacy
Users interested in llmprivacy are comparing it to the libraries listed below:
- ☆20 · Updated Feb 3, 2025
- ☆21 · Updated May 23, 2025
- A Synthetic Dataset for Personal Attribute Inference (NeurIPS'24 D&B) · ☆50 · Updated Jul 27, 2025
- ☆13 · Updated Oct 20, 2022
- ☆37 · Updated Oct 17, 2024
- Official code for the ACL 2023 paper "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid…" · ☆23 · Updated May 8, 2023
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents · ☆123 · Updated Feb 19, 2025
- ☆13 · Updated Mar 9, 2025
- Code for the ICLR 2025 paper "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" · ☆37 · Updated Jun 1, 2025
- Code for analysing the leakage of personally identifiable information (PII) from the output of next word pred… · ☆103 · Updated Aug 13, 2024
- Implementation of the paper "A Little Is Enough: Circumventing Defenses For Distributed Learning" (NeurIPS 2019) · ☆28 · Updated Jun 29, 2023
- [USENIX Security 2025] SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks · ☆19 · Updated Sep 18, 2025
- End-to-end codebase for finetuning LLMs (LLaMA 2, 3, etc.) with or without DP · ☆15 · Updated Sep 23, 2024
- [CIKM 2024] Trojan Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment · ☆29 · Updated Jul 29, 2024
- ☆17 · Updated Jun 18, 2025
- ☆21 · Updated Mar 20, 2025
- ☆14 · Updated May 8, 2024
- ☆14 · Updated Jun 6, 2023
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability · ☆176 · Updated Dec 18, 2024
- ☆37 · Updated Oct 2, 2024
- ☆21 · Updated Dec 14, 2023
- Official code for "Understanding Deep Gradient Leakage via Inversion Influence Functions" (NeurIPS 2023) · ☆16 · Updated Oct 13, 2023
- Official implementation of the WASP web agent security benchmark · ☆67 · Updated Aug 12, 2025
- ICLR'22 Programmatic Reinforcement Learning · ☆16 · Updated Apr 15, 2023
- Eluding Secure Aggregation in Federated Learning via Model Inconsistency · ☆13 · Updated Mar 10, 2023
- Repository for reproducing `Model-Based Robust Deep Learning` · ☆16 · Updated Jan 22, 2021
- Repository for the paper "Refusing Safe Prompts for Multi-modal Large Language Models" · ☆18 · Updated Oct 16, 2024
- Distribution Preserving Backdoor Attack in Self-supervised Learning · ☆20 · Updated Jan 27, 2024
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) · ☆33 · Updated Nov 4, 2020
- Code for the paper "Adversarial Examples for Models of Code" · ☆18 · Updated Nov 16, 2020
- Code for our ICLR 2023 paper "Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples" · ☆18 · Updated May 31, 2023
- Backdooring Multimodal Learning · ☆30 · Updated May 4, 2023
- Official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning…" · ☆19 · Updated Jun 7, 2023
- 🔮 Reasoning for Safer Code Generation; 🥇 Winner Solution of Amazon Nova AI Challenge 2025 · ☆35 · Updated Aug 24, 2025
- ☆47 · Updated Dec 29, 2021
- Code for the paper "Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression" · ☆25 · Updated Jun 28, 2023
- ☆23 · Updated Oct 25, 2024
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" · ☆20 · Updated Aug 9, 2023
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models · ☆19 · Updated Feb 18, 2025