hupe1980 / aisploit
π€π‘οΈπππ Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.
β23Updated 10 months ago
Alternatives and similar repositories for aisploit:
Users that are interested in aisploit are comparing it to the libraries listed below
- All things specific to LLM Red Teaming Generative AIβ23Updated 5 months ago
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Benchβ59Updated last month
- A collection of prompt injection mitigation techniques.β20Updated last year
- An Execution Isolation Architecture for LLM-Based Agentic Systemsβ66Updated last month
- This is The most comprehensive prompt hacking course available, which record our progress on a prompt engineering and prompt hacking courβ¦β49Updated 2 months ago
- A collection of agents that use Large Language Models (LLMs) to perform tasks common on our day to day jobs in cyber security.β94Updated 10 months ago
- Chat4GPT Experiments for Securityβ11Updated 2 years ago
- A library to produce cybersecurity exploitation routes (exploit flows). Inspired by TensorFlow.β33Updated last year
- This is a dataset intended to train a LLM model for a completely CVE focused input and output.β56Updated 4 months ago
- https://arxiv.org/abs/2412.02776β49Updated 3 months ago
- LLM security and privacyβ48Updated 5 months ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.β160Updated last year
- β23Updated last year
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine tuned LLM for penetration testing guidance based on wriβ¦β19Updated 3 months ago
- LLM | Security | Operations in one github repo with good links and pictures.β24Updated 2 months ago
- using ML models for red teamingβ43Updated last year
- This repository provides implementation to formalize and benchmark Prompt Injection attacks and defensesβ180Updated 2 months ago
- β64Updated 2 months ago
- β30Updated 5 months ago
- π§ LLMFuzzer - Fuzzing Framework for Large Language Models π§ LLMFuzzer is the first open-source fuzzing framework specifically designed β¦β269Updated last year
- Whispers in the Machine: Confidentiality in LLM-integrated Systemsβ34Updated 3 weeks ago
- Payloads for Attacking Large Language Modelsβ77Updated 8 months ago
- β40Updated last month
- This project investigates the security of large language models by performing binary classification of a set of input prompts to discoverβ¦β38Updated last year
- YuraScannerβ27Updated last month
- This tool helps new security professionals actively learn how to address security concerns associated with open ports on a network deviceβ¦β22Updated 2 weeks ago
- SourceGPT - prompt manager and source code analyzer built on top of ChatGPT as the oracleβ110Updated last year
- future-proof vulnerability detection benchmark, based on CVEs in open-source reposβ51Updated last week
- This repo contains the codes of the penetration test benchmark for Generative Agents presented in the paper "AutoPenBench: Benchmarking Gβ¦β23Updated 5 months ago
- A framework for identifying vulnerabilities in VS Code extensionsβ17Updated 8 months ago