xsankar / AI-Red-TeamingLinks
All things specific to LLM Red Teaming Generative AI
☆28Updated 10 months ago
Alternatives and similar repositories for AI-Red-Teaming
Users that are interested in AI-Red-Teaming are comparing it to the libraries listed below
Sorting:
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs)☆134Updated 9 months ago
- Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models☆77Updated this week
- Payloads for Attacking Large Language Models☆99Updated 3 months ago
- Reference notes for Attacking and Defending Generative AI presentation☆65Updated last year
- ☆68Updated this week
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.☆164Updated last year
- using ML models for red teaming☆44Updated 2 years ago
- source code for the offsecml framework☆41Updated last year
- Tree of Attacks (TAP) Jailbreaking Implementation☆115Updated last year
- Prototype of Full Agentic Application Security Testing, FAAST = SAST + DAST + LLM agents☆63Updated 4 months ago
- ☆64Updated last month
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.☆25Updated last year
- Awesome products for securing AI systems includes open source and commercial options and an infographic licensed CC-BY-SA-4.0.☆70Updated last year
- ☆19Updated 9 months ago
- LLM | Security | Operations in one github repo with good links and pictures.☆55Updated 8 months ago
- Code snippets to reproduce MCP tool poisoning attacks.☆181Updated 5 months ago
- CTF challenges designed and implemented in machine learning applications☆166Updated last year
- A benchmark for prompt injection detection systems.☆136Updated 3 weeks ago
- Dropbox LLM Security research code and results☆235Updated last year
- We present MAPTA, a multi-agent system for autonomous web application security assessment that combines large language model orchestratio…☆58Updated 3 weeks ago
- https://arxiv.org/abs/2412.02776☆62Updated 9 months ago
- [IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the vict…☆41Updated 7 months ago
- ☆54Updated this week
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆315Updated last year
- Curated resources, research, and tools for securing AI systems☆80Updated last week
- A collection of agents that use Large Language Models (LLMs) to perform tasks common on our day to day jobs in cyber security.☆171Updated last year
- A LLM explicitly designed for getting hacked☆160Updated 2 years ago
- CALDERA plugin for adversary emulation of AI-enabled systems☆99Updated 2 years ago
- An example vulnerable app that integrates an LLM☆24Updated last year
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.☆299Updated last year