LLM security and privacy
★54 · Oct 15, 2024 · Updated last year
Alternatives and similar repositories for LLM-security-and-privacy
Users interested in LLM-security-and-privacy are comparing it to the libraries listed below.
- Papers and resources related to the security and privacy of LLMs ★567 · Jun 8, 2025 · Updated 9 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities ★34 · May 27, 2024 · Updated last year
- A curation of awesome tools, documents and projects about LLM Security. ★1,554 · Aug 20, 2025 · Updated 7 months ago
- Whispers in the Machine: Confidentiality in Agentic Systems ★43 · Dec 11, 2025 · Updated 3 months ago
- Securing LLMs Against Top 10 OWASP Large Language Model Vulnerabilities 2024 ★22 · May 10, 2024 · Updated last year
- Can Large Language Models Solve Security Challenges? We test LLMs' ability to interact and break out of shell environments using the Over… ★13 · Aug 21, 2023 · Updated 2 years ago
- LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins ★29 · Jul 29, 2024 · Updated last year
- 🤫 husher - Encode text to be hidden from human eyes but visible to LLMs ★12 · Jan 18, 2024 · Updated 2 years ago
- LLM | Security | Operations in one GitHub repo with good links and pictures. ★96 · Updated this week
- Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ★26 · May 16, 2024 · Updated last year
- A reading list for large-model safety, security, and privacy (including Awesome LLM Security, Safety, etc.). ★1,911 · Mar 16, 2026 · Updated last week
- List of papers on cryptography-assisted deep learning privacy computation ★18 · Dec 29, 2025 · Updated 3 months ago
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models. ★86 · Jan 19, 2025 · Updated last year
- Tool based on @gaasedelen's lighthouse frida tool, modified for capturing coverage of Android executables. ★21 · Sep 16, 2023 · Updated 2 years ago
- Code for our paper "Localizing Lying in Llama" ★13 · Apr 24, 2025 · Updated 11 months ago
- The command-line client for Journal ★12 · Oct 26, 2024 · Updated last year
- Code of the paper "Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM" ★14 · Nov 17, 2023 · Updated 2 years ago
- Easy Setup, File-based, Offline Capable Federated Learning and Computations ★22 · Feb 11, 2026 · Updated last month
- Blogs that I'm actively following. ★14 · Sep 17, 2023 · Updated 2 years ago
- The repo of "Coral: Maliciously Secure Computation Framework for Packed and Mixed Circuits" (CCS 2024) ★12 · Sep 6, 2024 · Updated last year
- Code for our NeurIPS 2024 paper "Improved Generation of Adversarial Examples Against Safety-aligned LLMs" ★12 · Nov 7, 2024 · Updated last year
- News in Privacy-Preserving Machine Learning ★12 · Feb 5, 2020 · Updated 6 years ago
- Paper list of federated learning: about system design ★13 · Apr 13, 2022 · Updated 3 years ago
- ★24 · Jan 15, 2026 · Updated 2 months ago
- Identification of the Adversary from a Single Adversarial Example (ICML 2023) ★10 · Jul 15, 2024 · Updated last year
- A collection of papers and libraries for performing multi-agent optimization ★18 · Feb 7, 2026 · Updated last month
- [CVPR 2026] FocusUI: Efficient UI Grounding via Position-Preserving Visual Token Selection ★27 · Feb 10, 2026 · Updated last month
- Droz_scan is an automated script that runs all of drozer's queries in a single run ★26 · May 15, 2023 · Updated 2 years ago
- ★11 · Sep 19, 2025 · Updated 6 months ago
- Secure Inference Resilient Against Malicious Clients ★15 · May 3, 2022 · Updated 3 years ago
- ★10 · Apr 28, 2020 · Updated 5 years ago
- This is the official GitHub repo for our paper: "BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Lang… ★22 · Jul 3, 2024 · Updated last year
- 🔥 Amazon Nova AI Challenge Winner - ASTRA emerged victorious as the top attacking team in Amazon's global AI safety competition, defeati… ★70 · Aug 14, 2025 · Updated 7 months ago
- ★13 · Jul 26, 2021 · Updated 4 years ago
- Mixture of LoRA Experts ★10 · Apr 7, 2024 · Updated last year
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ★1,808 · Mar 20, 2026 · Updated last week
- ★14 · Dec 3, 2022 · Updated 3 years ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ★170 · Oct 13, 2023 · Updated 2 years ago
- ★14 · Jul 17, 2025 · Updated 8 months ago