idllresearch / malicious-gpt
[USENIX Security '24] Dataset of real-world malicious LLM applications, including 45 malicious prompts for generating malicious content, the corresponding malicious responses from LLMs, 182 real-world jailbreak prompts, LLM-related keywords, and more.
☆53 · Updated last month
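As a rough illustration of how prompt collections like the ones described above might be loaded for inspection, here is a minimal Python sketch; the file name `jailbreak_prompts.json` and the assumption that it holds a flat JSON list of strings are hypothetical, not the repository's documented layout.

```python
# Hypothetical sketch: loading a prompt list for inspection.
# "jailbreak_prompts.json" and its structure (a JSON array of strings)
# are assumptions for illustration, not the actual layout of malicious-gpt.
import json
from pathlib import Path


def load_prompts(path: str) -> list[str]:
    """Read a JSON file assumed to contain a list of prompt strings."""
    with Path(path).open(encoding="utf-8") as f:
        return json.load(f)


if __name__ == "__main__":
    jailbreaks = load_prompts("jailbreak_prompts.json")  # assumed file name
    print(f"Loaded {len(jailbreaks)} jailbreak prompts")
```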
Related projects
Alternatives and complementary repositories for malicious-gpt
- Awesome-Jailbreak-on-LLMs is a collection of state-of-the-art, novel, exciting jailbreak methods on LLMs. It contains papers, codes, data… ☆292 · Updated this week
- A Survey on Large Language Models for Software Engineering ☆152 · Updated last week
- A Systematic Literature Review on Large Language Models for Automated Program Repair ☆119 · Updated last week
- Continuous Learning for Android Malware Detection (USENIX Security 2023) ☆58 · Updated last year
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆458 · Updated 2 months ago
- Agent Security Bench (ASB) ☆38 · Updated last week
- This repository provides an implementation to formalize and benchmark Prompt Injection attacks and defenses ☆142 · Updated 2 months ago
- Code for the paper Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers ☆56 · Updated 2 years ago
- [USENIX Security '24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆54 · Updated last month
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆90 · Updated last month
- This repo collects the best papers from the top 4 computer security conferences: IEEE S&P, ACM CCS, USENIX Security, and NDSS. ☆62 · Updated 4 months ago
- Instructions for requesting access to AdvDroidZero ☆10 · Updated 7 months ago
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆22 · Updated 2 weeks ago
- The automated prompt injection framework for LLM-integrated applications. ☆161 · Updated 2 months ago
- An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight) ☆149 · Updated last year
- Academic papers on LLM applications in security ☆106 · Updated 5 months ago
- Siren: Byzantine-robust Federated Learning via Proactive Alarming (SoCC '21) ☆11 · Updated 7 months ago
- This is a benchmark for evaluating the vulnerability discovery ability of automated approaches including Large Language Models (LLMs), de… ☆60 · Updated last month
- This is a dataset intended to train an LLM for completely CVE-focused input and output. ☆44 · Updated this week
- Acceptance-rate statistics for the top conferences: Oakland, CCS, USENIX Security, and NDSS. ☆110 · Updated 2 weeks ago
- BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models ☆72 · Updated 2 months ago
- Using a Wasserstein Generative Adversarial Network to fool intrusion detection systems (IDS) into believing that malicious traffic is norma… ☆49 · Updated last year
- A curated list of machine learning security & privacy papers published in the top-4 security conferences (IEEE S&P, ACM CCS, USENIX Security… ☆215 · Updated last week