Repello-AI / whistleblower
Whistleblower is an offensive security tool for testing system prompt leakage and capability discovery in AI applications exposed through an API. Built for AI engineers, security researchers, and anyone who wants to know what's going on inside the LLM-based apps they use daily.
☆137 · Updated 2 weeks ago
Alternatives and similar repositories for whistleblower
Users interested in whistleblower are comparing it to the libraries listed below.
- A CLI tool for threat modeling and visualizing AI agents built using popular frameworks like LangGraph, AutoGen, CrewAI, and more. ☆251 · Updated 2 weeks ago
- AI agent for autonomous cyber operations ☆319 · Updated last week
- CTF challenges designed and implemented in machine learning applications ☆178 · Updated 3 weeks ago
- Make your GenAI apps safe & secure: test & harden your system prompt ☆579 · Updated last month
- LMAP (large language model mapper) is like NMAP for LLMs: an LLM vulnerability scanner and zero-day vulnerability fuzzer. ☆25 · Updated last year
- Learn about a type of vulnerability that specifically targets machine learning models ☆354 · Updated last month
- Your gateway to OWASP. Discover, engage, and help shape the future! ☆206 · Updated this week
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems. ☆390 · Updated 2 months ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆26 · Updated last year
- Payloads for Attacking Large Language Models ☆104 · Updated 4 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ☆173 · Updated 2 years ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- ☆165 · Updated 4 months ago
- ☆337 · Updated 4 months ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆166 · Updated 2 years ago
- Red-Teaming Language Models with DSPy ☆235 · Updated 8 months ago
- Dropbox LLM Security research code and results ☆237 · Updated last year
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆97 · Updated 6 months ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees; run everything locally on your system. ☆304 · Updated last year
- Security threats related to MCP (Model Context Protocol), MCP servers, and more ☆37 · Updated 6 months ago
- The fastest Trust Layer for AI Agents ☆144 · Updated 5 months ago
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks ☆88 · Updated 5 months ago
- Jeopardy-style CTF challenge deployment and management tool. ☆79 · Updated last week
- A library for red-teaming LLM applications with LLMs. ☆28 · Updated last year
- A collection of awesome resources related to AI security ☆332 · Updated last month
- Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts Within GenAI Ecosystems ☆216 · Updated last month
- Prompt Injection Primer for Engineers ☆465 · Updated 2 years ago
- AIShield Watchtower: Dive Deep into AI's Secrets! 🔍 Open-source tool by AIShield for AI model insights & vulnerability scans. Secure you… ☆197 · Updated last month
- 🤖 A GitHub action that leverages fabric patterns through an agent-based approach ☆32 · Updated 9 months ago
- An LLM explicitly designed for getting hacked ☆162 · Updated 2 years ago