Repello-AI / whistleblower
Whistleblower is an offensive security tool for testing AI applications exposed through an API for system prompt leakage and capability discovery. Built for AI engineers, security researchers, and anyone who wants to know what's going on inside the LLM-based apps they use daily.
☆139 · Updated 2 weeks ago
Alternatives and similar repositories for whistleblower
Users who are interested in whistleblower are comparing it to the libraries listed below.
- A CLI tool for threat modeling and visualizing AI agents built using popular frameworks like LangGraph, AutoGen, CrewAI, and more.☆254 · Updated 2 weeks ago
- CTF challenges designed and implemented in machine learning applications☆185 · Updated last month
- Learn about a type of vulnerability that specifically targets machine learning models☆370 · Updated 2 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense☆174 · Updated 2 years ago
- AgentFence is an open-source platform for automatically testing AI agent security. It identifies vulnerabilities such as prompt injection…☆42 · Updated 8 months ago
- A library for red-teaming LLM applications with LLMs.☆28 · Updated last year
- The fastest Trust Layer for AI Agents☆144 · Updated 5 months ago
- Make your GenAI apps safe & secure. Test & harden your system prompt.☆587 · Updated last month
- Jeopardy-style CTF challenge deployment and management tool.☆79 · Updated 3 weeks ago
- Payloads for attacking large language models☆104 · Updated 5 months ago
- A curated list of MLSecOps tools, articles, and other resources on security applied to machine learning and MLOps systems.☆397 · Updated 3 months ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.☆27 · Updated last year
- AI agent for autonomous cyber operations☆367 · Updated this week
- A curated list of awesome LLM red-teaming training, resources, and tools.☆51 · Updated 2 months ago
- Writeups of challenges and CTFs I participated in☆82 · Updated 2 months ago
- Delving into the realm of LLM security: an exploration of offensive and defensive tools, unveiling their present capabilities.☆165 · Updated 2 years ago
- ☆168 · Updated 5 months ago
- ☆340 · Updated 4 months ago
- Welcome to the ultimate list of resources for AI in cybersecurity. This repository aims to provide an organized collection of high-qualit…☆92 · Updated 2 months ago
- A collection of awesome resources related to AI security☆351 · Updated 2 months ago
- Your gateway to OWASP. Discover, engage, and help shape the future!☆228 · Updated this week
- A list of curated resources for people interested in AI red teaming, jailbreaking, and prompt injection☆381 · Updated 6 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆112 · Updated last year
- ☆152 · Updated 2 months ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees; run everything locally on your system.☆308 · Updated last year
- Red-teaming language models with DSPy☆235 · Updated 9 months ago
- A very simple open-source implementation of Google's Project Naptime☆173 · Updated 7 months ago
- ☆107 · Updated this week
- Code for the paper "Defeating Prompt Injections by Design"☆150 · Updated 5 months ago
- An LLM explicitly designed for getting hacked☆163 · Updated 2 years ago