Repello-AI / whistleblowerLinks
Whistleblower is a offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through API. Built for AI engineers, security researchers and folks who want to know what's going on inside the LLM-based app they use daily
☆140Updated last month
Alternatives and similar repositories for whistleblower
Users that are interested in whistleblower are comparing it to the libraries listed below
Sorting:
- A CLI tool for threat modeling and visualizing AI agents built using popular frameworks like LangGraph, AutoGen, CrewAI, and more.☆347Updated last month
- AgentFence is an open-source platform for automatically testing AI agent security. It identifies vulnerabilities such as prompt injection…☆44Updated 9 months ago
- Payloads for Attacking Large Language Models☆112Updated 6 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense☆175Updated 2 years ago
- ☆178Updated 5 months ago
- Learn about a type of vulnerability that specifically targets machine learning models☆378Updated 2 months ago
- Jeopardy-style CTF challenge deployment and management tool.☆79Updated 2 weeks ago
- CTF challenges designed and implemented in machine learning applications☆189Updated 2 months ago
- LMAP (large language model mapper) is like NMAP for LLM, is an LLM Vulnerability Scanner and Zero-day Vulnerability Fuzzer.☆27Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆112Updated last year
- Writeups of challenges and CTFs I participated in☆84Updated 3 months ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.☆27Updated last year
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.☆404Updated 4 months ago
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite.☆100Updated 7 months ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.☆166Updated 2 years ago
- ☆151Updated 3 months ago
- AIShield Watchtower: Dive Deep into AI's Secrets! 🔍 Open-source tool by AIShield for AI model insights & vulnerability scans. Secure you…☆199Updated 2 months ago
- Red-Teaming Language Models with DSPy☆244Updated 9 months ago
- 🤖 A GitHub action that leverages fabric patterns through an agent-based approach☆32Updated 11 months ago
- AI-powered workflow automation and AI Agents platform for AppSec, Fuzzing & Offensive Security. Automate vulnerability discovery with int…☆615Updated 3 weeks ago
- Raptor turns Claude Code into a general-purpose AI offensive/defensive security agent. By using Claude.md and creating rules, sub-agents,…☆759Updated this week
- A LLM explicitly designed for getting hacked☆163Updated 2 years ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.☆310Updated last year
- Awesome products for securing AI systems includes open source and commercial options and an infographic licensed CC-BY-SA-4.0.☆76Updated last year
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks☆92Updated 6 months ago
- AI agent for autonomous cyber operations☆428Updated last week
- ☆55Updated 7 months ago
- Lightweight LLM Interaction Framework☆396Updated this week
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. …☆58Updated last year
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆598Updated 2 months ago