esbenkc / ai-cyberdefense
🔥 A repository for collecting cyberdefense thoughts, books, and documents about AI cyberdefense
☆13 (updated 2 years ago)
Alternatives and similar repositories for ai-cyberdefense
Users interested in ai-cyberdefense are comparing it to the repositories listed below:
- Payloads for Attacking Large Language Models (☆104, updated 5 months ago)
- Codebase of https://arxiv.org/abs/2410.14923 (☆52, updated last year)
- An example vulnerable app that integrates an LLM (☆25, updated last year)
- ☆63 (updated last week)
- https://arxiv.org/abs/2412.02776 (☆66, updated 11 months ago)
- Multi-agent system (MAS) hijacking demos (☆39, updated 3 weeks ago)
- Example agents for the Dreadnode platform (☆19, updated 3 weeks ago)
- Source code for the offsecml framework (☆43, updated last year)
- ☆28 (updated 2 years ago)
- ☆65 (updated 2 months ago)
- My inputs for the LLM Gandalf made by Lakera (☆48, updated 2 years ago)
- Tree of Attacks (TAP) Jailbreaking Implementation (☆115, updated last year)
- A curated list of awesome resources, tools, and other shiny things for GPT prompt engineering (☆55, updated 2 years ago)
- Manual Prompt Injection / Red Teaming Tool (☆46, updated last year)
- Data Scientists Go To Jupyter (☆67, updated 8 months ago)
- Writeups of challenges and CTFs I participated in (☆82, updated 2 months ago)
- LLM prompt attacks for hacker CTFs via CTFd (☆13, updated last year)
- An environment for testing AI agents against networks using Metasploit (☆45, updated 2 years ago)
- [Corca / ML] Automatically solved Gandalf AI with LLM (☆52, updated 2 years ago)
- High signal information security sources Goggle (☆67, updated 2 years ago)
- Curation of prompts that are known to be adversarial to large language models (☆185, updated 2 years ago)
- Deprecated: this code has been moved into a class of ao_core, which requires a private beta license. This repo is kept up for posterity … (☆11, updated 8 months ago)
- ☆16 (updated last year)
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks (☆88, updated 5 months ago)
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) (☆145, updated 11 months ago)
- Central repo for talks and presentations (☆46, updated last year)
- ☆35 (updated 3 weeks ago)
- Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts Within GenAI Ecosystems (☆218, updated 2 months ago)
- Code for the paper "Defeating Prompt Injections by Design" (☆150, updated 5 months ago)
- Risks and targets for assessing LLMs & LLM vulnerabilities (☆32, updated last year)