xsankar / AI-Red-Teaming
All things specific to red teaming LLMs and generative AI
☆29 · Updated 11 months ago
Alternatives and similar repositories for AI-Red-Teaming
Users who are interested in AI-Red-Teaming are comparing it to the repositories listed below
- Autonomous Assumed Breach Penetration-Testing of Active Directory Networks ☆23 · Updated last month
- Using ML models for red teaming ☆44 · Updated 2 years ago
- Payloads for Attacking Large Language Models ☆102 · Updated 4 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆140 · Updated 9 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆114 · Updated last year
- ☆58 · Updated last week
- https://arxiv.org/abs/2412.02776 ☆62 · Updated 10 months ago
- Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆81 · Updated this week
- Source code for the offsecml framework ☆42 · Updated last year
- Data Scientists Go To Jupyter ☆66 · Updated 7 months ago
- Prototype of Full Agentic Application Security Testing, FAAST = SAST + DAST + LLM agents ☆63 · Updated 5 months ago
- ☆76 · Updated this week
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆58 · Updated 9 months ago
- An interactive CLI application for interacting with authenticated Jupyter instances. ☆55 · Updated 5 months ago
- Curated resources, research, and tools for securing AI systems ☆140 · Updated this week
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆165 · Updated last year
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆26 · Updated last year
- An experimental project exploring the use of Large Language Models (LLMs) to solve HackTheBox machines autonomously. ☆86 · Updated this week
- A very simple open source implementation of Google's Project Naptime ☆169 · Updated 6 months ago
- Reference notes for the Attacking and Defending Generative AI presentation ☆66 · Updated last year
- ☆68 · Updated 2 months ago
- Verizon Burp Extensions: AI Suite ☆138 · Updated 5 months ago
- Awesome products for securing AI systems; includes open source and commercial options and an infographic licensed CC-BY-SA-4.0. ☆72 · Updated last year
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine-tuned LLM for penetration testing guidance based on wri… ☆31 · Updated 9 months ago
- [IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the vict… ☆42 · Updated 7 months ago
- ☆19 · Updated 9 months ago
- A YAML-based format for describing tools to LLMs, like man pages but for robots! ☆78 · Updated 5 months ago
- AI-Powered, Local Pythonic Coding Agent 🐞💻 ☆24 · Updated 7 months ago
- Stage 1: Sensitive Email/Chat Classification for Adversary Agent Emulation (espionage). This project is meant to extend Red Reaper v1 whi… ☆42 · Updated last year
- An LLM explicitly designed for getting hacked ☆162 · Updated 2 years ago