LostOxygen / llm-confidentiality
Whispers in the Machine: Confidentiality in LLM-integrated Systems
☆32 · Updated last week
Alternatives and similar repositories for llm-confidentiality:
Users interested in llm-confidentiality are comparing it to the repositories listed below:
- LLM security and privacy ☆44 · Updated 3 months ago
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆47 · Updated 5 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆66 · Updated 11 months ago
- A collection of automated evaluators for assessing jailbreak attempts. ☆102 · Updated this week
- This repository provides an implementation to formalize and benchmark Prompt Injection attacks and defenses ☆167 · Updated last week
- Can Large Language Models Solve Security Challenges? We test LLMs' ability to interact and break out of shell environments using the Over… ☆12 · Updated last year
- LLM Self Defense: By Self Examination, LLMs know they are being tricked ☆31 · Updated 8 months ago
- [NeurIPS 2024] Official implementation for "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning" ☆90 · Updated this week
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024. ☆110 · Updated 7 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆108 · Updated 10 months ago
- TAP: An automated jailbreaking method for black-box LLMs ☆138 · Updated last month
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆251 · Updated last week
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆79 · Updated this week
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆30 · Updated 8 months ago
- ☆47 · Updated 6 months ago
- [ICLR 2024] The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M… ☆282 · Updated last week
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆33 · Updated last week
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆280 · Updated 4 months ago
- Weak-to-Strong Jailbreaking on Large Language Models ☆72 · Updated 11 months ago
- A prompt injection game to collect data for robust ML research ☆50 · Updated this week
- Privacy backdoors ☆51 · Updated 9 months ago
- Dataset for the Tensor Trust project ☆36 · Updated 10 months ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆19 · Updated 8 months ago
- ☆17 · Updated 5 months ago
- ☆78 · Updated last year
- AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks ☆35 · Updated 7 months ago
- AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLM ☆53 · Updated 2 months ago
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" ☆132 · Updated 11 months ago
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability ☆131 · Updated last month
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆39 · Updated 3 months ago