forcesunseen / llm-hackers-handbook
A guide to LLM hacking: fundamentals, prompt injection, offense, and defense
☆175 · Updated 2 years ago
Alternatives and similar repositories for llm-hackers-handbook
Users interested in llm-hackers-handbook are comparing it to the repositories listed below.
- An LLM explicitly designed for getting hacked ☆163 · Updated 2 years ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆166 · Updated 2 years ago
- Payloads for Attacking Large Language Models ☆112 · Updated 6 months ago
- Prompt Injections Everywhere ☆169 · Updated last year
- Prompt Injection Primer for Engineers ☆532 · Updated 2 years ago
- ☆343 · Updated 5 months ago
- The notebook for my talk - ChatGPT: Your Red Teaming Ally ☆50 · Updated 2 years ago
- Penetration Testing AI Assistant based on open source LLMs. ☆111 · Updated 8 months ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees; run everything locally on your system. ☆310 · Updated last year
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine-tuned LLM for penetration testing guidance based on wri… ☆34 · Updated 11 months ago
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems. ☆404 · Updated 4 months ago
- Awesome products for securing AI systems; includes open source and commercial options and an infographic licensed CC-BY-SA-4.0. ☆76 · Updated last year
- Dropbox LLM Security research code and results ☆248 · Updated last year
- SourceGPT - prompt manager and source code analyzer built on top of ChatGPT as the oracle ☆109 · Updated 2 years ago
- An extension for Burp Suite that allows researchers to use GPT for analysis of HTTP requests and responses ☆112 · Updated 2 years ago
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks ☆92 · Updated 6 months ago
- ☆100 · Updated 2 weeks ago
- This repository contains various attacks against Large Language Models. ☆122 · Updated last year
- The Arcanum Prompt Injection Taxonomy ☆338 · Updated 4 months ago
- A ChatGPT-based penetration testing findings generator. ☆133 · Updated 2 years ago
- Payloads for AI Red Teaming and beyond ☆309 · Updated 3 months ago
- AIGoat: A deliberately Vulnerable AI Infrastructure. Learn AI security through solving our challenges. ☆260 · Updated 2 months ago
- ☆113 · Updated this week
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆150 · Updated 11 months ago
- Application which investigates defensive measures against prompt injection attacks on an LLM, with a focus on the exposure of external to… ☆32 · Updated last year
- A list of curated resources for people interested in AI Red Teaming, Jailbreaking, and Prompt Injection ☆402 · Updated 7 months ago
- A research project to add some brrrrrr to Burp ☆196 · Updated 10 months ago
- Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆90 · Updated last week
- LLM Testing Findings Templates ☆75 · Updated last year
- Reference notes for Attacking and Defending Generative AI presentation ☆67 · Updated last year