JosephTLucas / HackThisAI
Adversarial Machine Learning (AML) Capture the Flag (CTF)
☆112 · Updated last year
Alternatives and similar repositories for HackThisAI
Users interested in HackThisAI are comparing it to the repositories listed below.
- CTF challenges designed and implemented in machine learning applications ☆197 · Updated 2 months ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees, run everything locally on your system. ☆313 · Updated last year
- An LLM explicitly designed for getting hacked ☆165 · Updated 2 years ago
- ☆348 · Updated 6 months ago
- Payloads for attacking Large Language Models ☆114 · Updated 7 months ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆167 · Updated 2 years ago
- ☆126 · Updated 2 weeks ago
- An experimental project exploring the use of Large Language Models (LLMs) to solve HackTheBox machines autonomously. ☆187 · Updated last week
- AIGoat: a deliberately vulnerable AI infrastructure. Learn AI security by solving our challenges. ☆263 · Updated 3 months ago
- ☆154 · Updated 4 months ago
- The IoT Security Testing Guide (ISTG) provides a comprehensive methodology for penetration tests in the IoT field, offering flexibility t… ☆112 · Updated 5 months ago
- Official writeups for Business CTF 2024: The Vault Of Hope ☆155 · Updated last year
- LLM testing findings templates ☆75 · Updated last year
- Code repository for "Machine Learning for Red Team Hackers". ☆41 · Updated 5 years ago
- Tree of Attacks (TAP) jailbreaking implementation ☆116 · Updated last year
- Data Scientists Go To Jupyter ☆68 · Updated 10 months ago
- A curated list of MLSecOps tools, articles, and other resources on security applied to Machine Learning and MLOps systems. ☆414 · Updated 5 months ago
- Scripts and examples for "From Day Zero to Zero Day" by Eugene Lim. ☆192 · Updated last month
- CALDERA plugin for adversary emulation of AI-enabled systems ☆107 · Updated 2 years ago
- Dropbox LLM security research code and results ☆251 · Updated last year
- A research project to add some brrrrrr to Burp ☆196 · Updated 10 months ago
- A collection of awesome resources related to AI security ☆389 · Updated last week
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆333 · Updated last year
- ChainReactor is a research project that leverages AI planning to discover exploitation chains for privilege escalation on Unix systems. T… ☆56 · Updated last year
- Official writeups for Hack The Boo CTF 2023 ☆45 · Updated last year
- Collection of writeups on ICS/SCADA security. ☆192 · Updated 2 months ago
- All things specific to LLM red teaming and generative AI ☆29 · Updated last year
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆152 · Updated last year
- Collection of all previous 1337UP CTF challenges. ☆78 · Updated last year
- Search engine for CTF writeups with instant results. ☆152 · Updated 10 months ago