Repello-AI / whistleblowerLinks
Whistleblower is a offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through API. Built for AI engineers, security researchers and folks who want to know what's going on inside the LLM-based app they use daily
☆127Updated last year
Alternatives and similar repositories for whistleblower
Users that are interested in whistleblower are comparing it to the libraries listed below
Sorting:
- A CLI tool for threat modeling and visualizing AI agents built using popular frameworks like LangGraph, AutoGen, CrewAI, and more.☆225Updated 3 months ago
- CTF challenges designed and implemented in machine learning applications☆162Updated 11 months ago
- The fastest Trust Layer for AI Agents☆141Updated 2 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense☆164Updated 2 years ago
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆530Updated last week
- Jeopardy-style CTF challenge deployment and management tool.☆78Updated 3 weeks ago
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.☆347Updated last week
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆113Updated last year
- Red-Teaming Language Models with DSPy☆203Updated 5 months ago
- ☆130Updated last month
- A collection of awesome resources related AI security☆278Updated last week
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.☆164Updated last year
- Payloads for Attacking Large Language Models☆92Updated 2 months ago
- Learn about a type of vulnerability that specifically targets machine learning models☆315Updated last year
- A benchmark for prompt injection detection systems.☆124Updated 3 weeks ago
- ☆310Updated last month
- This repository curates a collection of monthly white papers focused on the latest LLM attack and defenses.☆23Updated 9 months ago
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite.☆95Updated 3 months ago
- source for llmsec.net☆16Updated last year
- Dropbox LLM Security research code and results☆232Updated last year
- Your gateway to OWASP. Discover, engage, and help shape the future!☆153Updated this week
- ☆139Updated 2 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆402Updated last year
- Prompt Injection Primer for Engineers☆449Updated last year
- A list of curated resources for people interested in AI Red Teaming, Jailbreaking, and Prompt Injection☆261Updated 3 months ago
- A curated list of awesome LLM Red Teaming training, resources, and tools.☆22Updated 3 weeks ago
- A library for red-teaming LLM applications with LLMs.☆27Updated 9 months ago
- Every practical and proposed defense against prompt injection.☆503Updated 5 months ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.☆293Updated 11 months ago
- An Open Source CTF hosting platform☆57Updated 5 months ago