xsankar / AI-Red-Teaming
All things specific to LLM Red Teaming Generative AI
☆28 · Updated 9 months ago
Alternatives and similar repositories for AI-Red-Teaming
Users interested in AI-Red-Teaming are comparing it to the repositories listed below.
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆123 · Updated 7 months ago
- Code repository for AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆68 · Updated last week
- ☆62 · Updated this week
- Reference notes for the Attacking and Defending Generative AI presentation ☆64 · Updated last year
- Source code for the offsecml framework ☆41 · Updated last year
- Payloads for attacking large language models ☆93 · Updated 2 months ago
- ☆46 · Updated last week
- Awesome products for securing AI systems; includes open-source and commercial options and an infographic licensed CC-BY-SA-4.0 ☆65 · Updated last year
- Tree of Attacks (TAP) jailbreaking implementation ☆114 · Updated last year
- Data Scientists Go To Jupyter ☆65 · Updated 5 months ago
- Delving into the realm of LLM security: an exploration of offensive and defensive tools and their present capabilities ☆164 · Updated last year
- https://arxiv.org/abs/2412.02776 ☆59 · Updated 8 months ago
- ATLAS tactics, techniques, and case studies data ☆78 · Updated 3 months ago
- ☆51 · Updated 2 weeks ago
- A collection of agents that use large language models (LLMs) to perform tasks common in day-to-day cybersecurity work ☆149 · Updated last year
- Using ML models for red teaming ☆44 · Updated 2 years ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions ☆24 · Updated last year
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees; run everything locally on your system ☆293 · Updated 11 months ago
- ☆15 · Updated 7 months ago
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine-tuned LLM for penetration-testing guidance based on wri… ☆27 · Updated 7 months ago
- A Caldera plugin for the emulation of complete, realistic cyberattack chains ☆56 · Updated 5 months ago
- CALDERA plugin for adversary emulation of AI-enabled systems ☆100 · Updated 2 years ago
- An interactive CLI application for interacting with authenticated Jupyter instances ☆53 · Updated 3 months ago
- Autonomous assumed-breach penetration testing of Active Directory networks ☆20 · Updated last month
- Secure Jupyter notebooks and experimentation environment ☆78 · Updated 6 months ago
- AIGoat: a deliberately vulnerable AI infrastructure. Learn AI security by solving our challenges ☆244 · Updated 3 months ago
- ☆42 · Updated 7 months ago
- An LLM explicitly designed for getting hacked ☆157 · Updated 2 years ago
- Prototype of Full Agentic Application Security Testing: FAAST = SAST + DAST + LLM agents ☆63 · Updated 3 months ago
- A collection of prompt injection mitigation techniques ☆23 · Updated last year