Repello-AI / whistleblowerLinks
Whistleblower is a offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through API. Built for AI engineers, security researchers and folks who want to know what's going on inside the LLM-based app they use daily
☆143Updated 2 months ago
Alternatives and similar repositories for whistleblower
Users that are interested in whistleblower are comparing it to the libraries listed below
Sorting:
- A CLI tool for threat modeling and visualizing AI agents built using popular frameworks like LangGraph, AutoGen, CrewAI, and more.☆355Updated last month
- CTF challenges designed and implemented in machine learning applications☆195Updated 2 months ago
- A curated list of awesome LLM Red Teaming training, resources, and tools.☆65Updated 3 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense☆179Updated 2 years ago
- Jeopardy-style CTF challenge deployment and management tool.☆79Updated this week
- Payloads for Attacking Large Language Models☆114Updated 6 months ago
- LMAP (large language model mapper) is like NMAP for LLM, is an LLM Vulnerability Scanner and Zero-day Vulnerability Fuzzer.☆28Updated last year
- A collection of awesome resources related AI security☆381Updated last week
- AgentFence is an open-source platform for automatically testing AI agent security. It identifies vulnerabilities such as prompt injection…☆47Updated 9 months ago
- ☆154Updated 3 months ago
- Writeups of challenges and CTFs I participated in☆84Updated 4 months ago
- Learn about a type of vulnerability that specifically targets machine learning models☆380Updated 3 months ago
- ☆182Updated 2 weeks ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.☆311Updated last year
- AIShield Watchtower: Dive Deep into AI's Secrets! 🔍 Open-source tool by AIShield for AI model insights & vulnerability scans. Secure you…☆200Updated 3 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆112Updated last year
- LLM | Security | Operations in one github repo with good links and pictures.☆85Updated this week
- AI agent for autonomous cyber operations☆451Updated last month
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.☆28Updated last year
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆602Updated 3 months ago
- AI-powered workflow automation and AI Agents platform for AppSec, Fuzzing & Offensive Security. Automate vulnerability discovery with int…☆660Updated last month
- Welcome to the ultimate list of resources for AI in cybersecurity. This repository aims to provide an organized collection of high-qualit…☆98Updated 2 weeks ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.☆167Updated 2 years ago
- A collection of prompt injection mitigation techniques.☆25Updated 2 years ago
- Curated resources, research, and tools for securing AI systems☆288Updated 2 weeks ago
- Code for the paper "Defeating Prompt Injections by Design"☆187Updated 6 months ago
- ☆348Updated 6 months ago
- Raptor turns Claude Code into a general-purpose AI offensive/defensive security agent. By using Claude.md and creating rules, sub-agents,…☆934Updated this week
- A prompt defence is a multi-layer defence that can be used to protect your applications against prompt injection attacks.☆21Updated 2 weeks ago
- Lightweight LLM Interaction Framework☆400Updated last week