Payloads for Attacking Large Language Models
☆131 · Jan 13, 2026 · updated 3 months ago
Alternatives and similar repositories for pallms
Users interested in pallms are comparing it to the repositories listed below.
- Dropbox LLM Security research code and results (☆256 · May 21, 2024 · updated last year)
- LLM prompt attacks for hacker CTFs via CTFd (☆14 · Dec 17, 2023 · updated 2 years ago)
- A curation of awesome tools, documents and projects about LLM Security (☆1,565 · Aug 20, 2025 · updated 7 months ago)
- Source for llmsec.net (☆16 · Jul 24, 2024 · updated last year)
- Application which investigates defensive measures against prompt injection attacks on an LLM, with a focus on the exposure of external to… (☆35 · Apr 9, 2026 · updated last week)
- Risks and targets for assessing LLMs & LLM vulnerabilities (☆34 · May 27, 2024 · updated last year)
- Tree of Attacks (TAP) jailbreaking implementation (☆119 · Feb 7, 2024 · updated 2 years ago)
- A security scanner for custom LLM applications (☆1,175 · Dec 1, 2025 · updated 4 months ago)
- An LLM explicitly designed for getting hacked (☆169 · Aug 2, 2023 · updated 2 years ago)
- This repository includes Docker machines for practicing some common web attacks (☆14 · Nov 20, 2023 · updated 2 years ago)
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs (☆469 · Jan 31, 2024 · updated 2 years ago)
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees; run everything locally on your system (☆330 · Aug 22, 2024 · updated last year)
- Prompt Injections Everywhere (☆197 · Aug 2, 2024 · updated last year)
- A repository of Language Model Vulnerabilities and Exposures (LVEs) (☆112 · Mar 12, 2024 · updated 2 years ago)
- A repo to store public scan data for my bug bounty hunting framework (☆23 · Dec 26, 2025 · updated 3 months ago)
- New ways of breaking app-integrated LLMs (☆2,067 · Jul 17, 2025 · updated 8 months ago)
- ☆391 · Jun 25, 2025 · updated 9 months ago
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems (☆432 · Aug 1, 2025 · updated 8 months ago)
- SSH user enumeration (☆12 · Mar 21, 2023 · updated 3 years ago)
- Prompt Injection Primer for Engineers (☆578 · Aug 25, 2023 · updated 2 years ago)
- A research project to add some brrrrrr to Burp (☆208 · Feb 16, 2026 · updated 2 months ago)
- Research links for LLM security (☆17 · May 27, 2024 · updated last year)
- Make your GenAI apps safe & secure: test & harden your system prompt (☆671 · Feb 16, 2026 · updated 2 months ago)
- Independent robustness evaluation of "Improving Alignment and Robustness with Short Circuiting" (☆17 · Apr 15, 2025 · updated last year)
- ☆31 · Jul 14, 2023 · updated 2 years ago
- Repository for CoSAI workstream 3, AI Risk Governance (☆24 · Feb 18, 2026 · updated last month)
- LLM Prompt Injection Detector (☆1,459 · Aug 7, 2024 · updated last year)
- ☆15 · Jun 7, 2024 · updated last year
- LLM testing findings templates (☆74 · Feb 14, 2024 · updated 2 years ago)
- Blog post series showcasing interesting cloud and web app security bugs (☆49 · Jun 13, 2023 · updated 2 years ago)
- A writeup for the Gandalf prompt injection game (☆40 · Mar 22, 2026 · updated 3 weeks ago)
- ☆18 · Apr 15, 2024 · updated 2 years ago
- A collection of real-world AI/ML exploits for responsibly disclosed vulnerabilities (☆1,704 · Oct 23, 2024 · updated last year)
- Machine Learning Attack Series (☆75 · May 17, 2024 · updated last year)
- A CLI that provides a generic automation layer for assessing the security of ML models (☆915 · Jul 18, 2025 · updated 8 months ago)
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… (☆473 · Feb 26, 2024 · updated 2 years ago)
- Data Scientists Go To Jupyter (☆68 · Mar 3, 2025 · updated last year)
- Do you want to learn AI security but don't know where to start? Take a look at this map (☆31 · Apr 23, 2024 · updated last year)
- A Python version of the samesame repo for generating homograph strings (☆24 · Aug 22, 2018 · updated 7 years ago)