Alternatives and similar repositories for llmprivacy
llmprivacy · ☆72 · Updated Feb 16, 2025
Users that are interested in llmprivacy are comparing it to the libraries listed below.
- ☆20 · Updated Feb 3, 2025
- ☆21 · Updated May 23, 2025
- A Synthetic Dataset for Personal Attribute Inference (NeurIPS'24 D&B) · ☆53 · Updated Jul 27, 2025
- CodexLeaks: Privacy Leaks from Code Generation Language Models in GitHub Copilot · ☆11 · Updated Jul 11, 2023
- Code for the ICLR 2025 paper "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" · ☆36 · Updated Jun 1, 2025
- ☆37 · Updated Oct 2, 2024
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents · ☆135 · Updated Feb 19, 2025
- [CIKM 2024] Trojan Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment · ☆29 · Updated Jul 29, 2024
- Official code for the ACL 2023 paper "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confidence Estimation" · ☆23 · Updated May 8, 2023
- End-to-end codebase for finetuning LLMs (LLaMA 2, 3, etc.) with or without DP · ☆16 · Updated Sep 23, 2024
- Code for analysing the leakage of personally identifiable information (PII) from the output of next-word prediction models · ☆104 · Updated Aug 13, 2024
- ☆37 · Updated Oct 17, 2024
- ☆29 · Updated Aug 31, 2025
- ☆31 · Updated Feb 27, 2025
- ☆14 · Updated Mar 9, 2025
- ☆23 · Updated Oct 25, 2024
- Papers and resources related to the security and privacy of LLMs 🤖 · ☆567 · Updated Jun 8, 2025
- An implementation of the paper "A Little Is Enough: Circumventing Defenses For Distributed Learning" (NeurIPS 2019) · ☆29 · Updated Jun 29, 2023
- ☆31 · Updated Jul 14, 2023
- Official code for "Understanding Deep Gradient Leakage via Inversion Influence Functions" (NeurIPS 2023) · ☆15 · Updated Oct 13, 2023
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability · ☆176 · Updated Dec 18, 2024
- Code for the USENIX Security 2023 paper "Every Vote Counts: Ranking-Based Training of Federated Learning to Resist Poisoning Attacks" · ☆21 · Updated May 19, 2024
- ☆14 · Updated Jun 6, 2023
- Multi-dimensional analysis of orthogonal safety directions in LLM alignment · ☆21 · Updated Mar 20, 2025
- [KDD 2023] Code for "Test accuracy vs. generalization gap: model selection in NLP without accessing training or testing data" (https://arx…) · ☆12 · Updated Oct 17, 2022
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning · ☆98 · Updated May 23, 2024
- [NeurIPS 2024 / ICML 2025] LLM Quantization Attacks · ☆49 · Updated Jan 15, 2026
- LobotoMl is a set of scripts and tools to assess production deployments of ML services · ☆10 · Updated May 16, 2022
- Official implementation of the WASP web agent security benchmark · ☆77 · Updated Aug 12, 2025
- ☆14 · Updated May 8, 2024
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) · ☆33 · Updated Nov 4, 2020
- Eluding Secure Aggregation in Federated Learning via Model Inconsistency · ☆13 · Updated Mar 10, 2023
- https://icml.cc/virtual/2023/poster/24354 · ☆10 · Updated Aug 15, 2023
- Divide-and-Conquer Attack: Harnessing the Power of LLM to Bypass the Censorship of Text-to-Image Generation Model · ☆17 · Updated Feb 16, 2025
- Repository for the paper "Refusing Safe Prompts for Multi-modal Large Language Models" · ☆18 · Updated Oct 16, 2024
- ☆23 · Updated Dec 14, 2023
- ☆18 · Updated Jun 18, 2025
- ☆10 · Updated May 31, 2023
- Distribution Preserving Backdoor Attack in Self-supervised Learning · ☆20 · Updated Jan 27, 2024