xsankar / AI-Red-Teaming
All things specific to LLM Red Teaming Generative AI
☆23 · Updated 5 months ago

Alternatives and similar repositories for AI-Red-Teaming:
Users interested in AI-Red-Teaming are comparing it to the libraries listed below.
- Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆23 · Updated 10 months ago
- The automated prompt injection framework for LLM-integrated applications. ☆187 · Updated 6 months ago
- Payloads for Attacking Large Language Models. ☆77 · Updated 8 months ago
- Integrate PyRIT in existing tools. ☆15 · Updated 3 weeks ago
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Bench. ☆59 · Updated last month
- LLMFuzzer - Fuzzing Framework for Large Language Models. LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆269 · Updated last year
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine-tuned LLM for penetration testing guidance based on wri… ☆19 · Updated 3 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs). ☆91 · Updated 3 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation. ☆105 · Updated last year
- Data Scientists Go To Jupyter. ☆62 · Updated 3 weeks ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees; run everything locally on your system. ☆275 · Updated 7 months ago
- ☆42 · Updated last month
- A collection of agents that use Large Language Models (LLMs) to perform tasks common in day-to-day cyber security work. ☆94 · Updated 10 months ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆160 · Updated last year
- Source code for the offsecml framework. ☆38 · Updated 9 months ago
- A collection of awesome resources related to AI security. ☆192 · Updated last month
- A benchmark for prompt injection detection systems. ☆98 · Updated last month
- Awesome products for securing AI systems; includes open-source and commercial options and an infographic licensed CC-BY-SA-4.0. ☆60 · Updated 9 months ago
- Every practical and proposed defense against prompt injection. ☆412 · Updated last month
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems. ☆301 · Updated 3 months ago
- An LLM explicitly designed for getting hacked. ☆139 · Updated last year
- ☆203 · Updated last year
- LLM | Security | Operations in one GitHub repo, with good links and pictures. ☆24 · Updated 2 months ago
- A curated list of awesome resources about LLM supply chain security (including papers, security reports and CVEs). ☆57 · Updated 2 months ago
- Dropbox LLM Security research code and results. ☆221 · Updated 10 months ago
- A collection of prompt injection mitigation techniques. ☆20 · Updated last year
- ☆27 · Updated 2 months ago
- CALDERA plugin for adversary emulation of AI-enabled systems. ☆93 · Updated last year
- CTF challenges designed and implemented in machine learning applications. ☆139 · Updated 7 months ago
- ATLAS tactics, techniques, and case studies data. ☆58 · Updated 2 weeks ago