evrenyal / langtsunami
Multi-Lingual GenAI Red Teaming Tool
⭐29 · Updated last year
Alternatives and similar repositories for langtsunami
Users who are interested in langtsunami are comparing it to the libraries listed below.
- LLMBUS AI red team tool ⭐73 · Updated last month
- ⭐72 · Updated 7 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ⭐134 · Updated 9 months ago
- ⭐12 · Updated 2 years ago
- ⭐68 · Updated this week
- Payloads for AI Red Teaming and beyond ⭐280 · Updated 3 weeks ago
- A research project to add some brrrrrr to Burp ⭐189 · Updated 7 months ago
- Chista | Open Threat Intelligence Framework ⭐59 · Updated last year
- Payloads for Attacking Large Language Models ⭐99 · Updated 3 months ago
- LLM Testing Findings Templates ⭐72 · Updated last year
- A YAML-based format for describing tools to LLMs, like man pages but for robots! ⭐78 · Updated 4 months ago
- An LLM explicitly designed for getting hacked ⭐160 · Updated 2 years ago
- "Sucosh" is an automated source code vulnerability scanner and assessment framework for Python (Flask/Django) & NodeJS capable of performi… ⭐38 · Updated last year
- This repository was developed using .NET 7.0 API technology based on findings listed in the OWASP 2019 API Security Top 10. ⭐53 · Updated last month
- Source code for the offsecml framework ⭐41 · Updated last year
- Blackdagger is a DAG-based automation tool specifically used in DevOps, DevSecOps, MLOps, MLSecOps, and Continuous Red Teaming (CART). ⭐111 · Updated 4 months ago
- Verizon Burp Extensions: AI Suite ⭐138 · Updated 4 months ago
- ⭐11 · Updated 3 years ago
- A Caldera plugin for the emulation of complete, realistic cyberattack chains. ⭐56 · Updated 3 weeks ago
- Tree of Attacks (TAP) Jailbreaking Implementation ⭐115 · Updated last year
- Code repository for AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ⭐77 · Updated this week
- An experimental project exploring the use of Large Language Models (LLMs) to solve HackTheBox machines autonomously. ⭐71 · Updated this week
- ⭐54 · Updated this week
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees; run everything locally on your system. ⭐299 · Updated last year
- All the principles of the AI modular structure that generates malicious code fragments sold on the dark web ⭐69 · Updated last year
- Prototype of Full Agentic Application Security Testing, FAAST = SAST + DAST + LLM agents ⭐63 · Updated 4 months ago
- ⭐10 · Updated last year
- Collection of all previous 1337UP CTF challenges. ⭐74 · Updated 8 months ago
- A comprehensive PowerShell-based threat hunting and incident response framework for Windows environments, built around Sysmon event analy… ⭐35 · Updated 2 months ago
- LLM Supported Attack Scenario Creator from Code Review ⭐13 · Updated 10 months ago