hupe1980 / aisploit
Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.
☆23 · Updated 11 months ago
Alternatives and similar repositories for aisploit:
Users interested in aisploit are comparing it to the libraries listed below.
- A collection of prompt injection mitigation techniques. ☆22 · Updated last year
- All things specific to LLM Red Teaming Generative AI ☆24 · Updated 6 months ago
- Chat4GPT Experiments for Security ☆11 · Updated 2 years ago
- https://arxiv.org/abs/2412.02776 ☆52 · Updated 5 months ago
- This repository is dedicated to providing comprehensive mappings of the OWASP Top 10 vulnerabilities for Large Language Models (LLMs) to … ☆13 · Updated last year
- A library to produce cybersecurity exploitation routes (exploit flows). Inspired by TensorFlow. ☆35 · Updated last year
- An Execution Isolation Architecture for LLM-Based Agentic Systems ☆78 · Updated 3 months ago
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Bench ☆71 · Updated last month
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine-tuned LLM for penetration testing guidance based on wri… ☆21 · Updated 4 months ago
- CVE-Bench: A Benchmark for AI Agents' Ability to Exploit Real-World Web Application Vulnerabilities ☆42 · Updated last week
- A dataset intended to train an LLM for completely CVE-focused input and output. ☆59 · Updated 5 months ago
- Payloads for Attacking Large Language Models ☆82 · Updated 10 months ago
- ☆64 · Updated 3 months ago
- LLMFuzzer - Fuzzing Framework for Large Language Models. LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆274 · Updated last year
- This tool helps new security professionals actively learn how to address security concerns associated with open ports on a network device… ☆22 · Updated last month
- ☆34 · Updated 7 months ago
- A collection of agents that use Large Language Models (LLMs) to perform tasks common in our day-to-day jobs in cyber security. ☆110 · Updated last year
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆161 · Updated last year
- Using ML models for red teaming ☆43 · Updated last year
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆28 · Updated 4 months ago
- [IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the vict… ☆41 · Updated 2 months ago
- ☆24 · Updated 2 years ago
- The Privacy Adversarial Framework (PAF) is a knowledge base of privacy-focused adversarial tactics and techniques. PAF is heavily inspire… ☆55 · Updated last year
- Can Large Language Models Solve Security Challenges? We test LLMs' ability to interact and break out of shell environments using the Over… ☆12 · Updated last year
- SourceGPT - prompt manager and source code analyzer built on top of ChatGPT as the oracle ☆110 · Updated 2 years ago
- Penetration Testing AI Assistant based on open-source LLMs. ☆74 · Updated last month
- ☆13 · Updated 4 months ago
- Prompt Injections Everywhere ☆119 · Updated 9 months ago
- Secure Jupyter Notebooks and Experimentation Environment ☆74 · Updated 3 months ago
- The source code (including datasets) of V1SCAN (USENIX Security 2023; will be uploaded). ☆40 · Updated last year