evrenyal / langtsunami
Multi-Lingual GenAI Red Teaming Tool
⭐27 · Updated 11 months ago
Alternatives and similar repositories for langtsunami
Users interested in langtsunami are comparing it to the libraries listed below.
- LLMBUS AI red team tool ⭐48 · Updated last week
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ⭐122 · Updated 6 months ago
- Tree of Attacks (TAP) jailbreaking implementation ⭐111 · Updated last year
- ⭐66 · Updated 5 months ago
- A research project to add some brrrrrr to Burp ⭐181 · Updated 5 months ago
- Verizon Burp Extensions: AI Suite ⭐131 · Updated 2 months ago
- A YAML-based format for describing tools to LLMs, like man pages but for robots! ⭐75 · Updated 2 months ago
- Source code for the offsecml framework ⭐41 · Updated last year
- ⭐54 · Updated last week
- ⭐41 · Updated this week
- ⭐12 · Updated 2 years ago
- A modular external attack surface mapping tool integrating tools for automated reconnaissance and bug bounty workflows ⭐41 · Updated 3 months ago
- Integrate PyRIT in existing tools ⭐28 · Updated 4 months ago
- 🛡️ VIPER: Stay ahead of threats with AI-driven vulnerability intelligence. Prioritize CVEs effectively using NVD, EPSS, CISA KEV, and Go… ⭐63 · Updated this week
- Code repository for AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ⭐61 · Updated this week
- ⭐11 · Updated 2 years ago
- NOVA: The Prompt Pattern Matching ⭐128 · Updated 2 months ago
- LLM Testing Findings Templates ⭐72 · Updated last year
- This repository was developed using .NET 7.0 API technology based on findings listed in the OWASP 2019 API Security Top 10. ⭐53 · Updated last year
- "Sucosh" is an automated source code vulnerability scanner and assessment framework for Python (Flask/Django) and Node.js, capable of performi… ⭐37 · Updated last year
- Payloads for AI Red Teaming and beyond ⭐115 · Updated this week
- An LLM explicitly designed for getting hacked ⭐153 · Updated last year
- ⭐16 · Updated last year
- ⭐43 · Updated 5 months ago
- Reference notes for the Attacking and Defending Generative AI presentation ⭐64 · Updated 11 months ago
- An experimental project exploring the use of Large Language Models (LLMs) to solve HackTheBox machines autonomously ⭐57 · Updated 2 months ago
- Payloads for attacking large language models ⭐91 · Updated last month
- Delving into the realm of LLM security: an exploration of offensive and defensive tools, unveiling their present capabilities ⭐163 · Updated last year
- AI-powered, local Pythonic coding agent ⭐24 · Updated 4 months ago
- Ironsharp is a tool written in C# that detects CVEs caused by missing updates and privilege escalation vulnerabilities caused by misconfi… ⭐34 · Updated 3 years ago
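One entry above mentions a greedy coordinate gradient (GCG) attack tool. A minimal toy sketch of the GCG idea, assuming a synthetic differentiable objective in place of a real LLM loss (every name and the objective here are illustrative, not that repository's code): the gradient of the loss with respect to the one-hot token encoding ranks candidate substitutions at each position, and the best-scoring swap that actually lowers the loss is kept greedily.

```python
# Toy sketch of greedy coordinate gradient (GCG) token optimization.
# Illustrative only: a real GCG attack uses an LLM's cross-entropy loss
# on a target continuation and samples a batch of random swaps; this
# sketch uses a synthetic quadratic loss and exhaustive per-position search.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, SEQ = 50, 8

# Synthetic "model": loss measures distance between the prompt's one-hot
# rows and a fixed target matrix (stand-in for the real LLM objective).
target = rng.standard_normal((SEQ, VOCAB))

def loss(tokens):
    one_hot = np.eye(VOCAB)[tokens]
    return float(np.sum((one_hot - target) ** 2))

def gcg_step(tokens, top_k=5):
    # Gradient of the loss w.r.t. the one-hot encoding, shape (SEQ, VOCAB).
    one_hot = np.eye(VOCAB)[tokens]
    grad = 2.0 * (one_hot - target)
    best, best_loss = tokens, loss(tokens)
    for pos in range(SEQ):
        # Most-negative gradient entries = most promising substitutions.
        for cand in np.argsort(grad[pos])[:top_k]:
            trial = tokens.copy()
            trial[pos] = cand
            trial_loss = loss(trial)
            if trial_loss < best_loss:           # greedy: keep only improvements
                best, best_loss = trial, trial_loss
    return best, best_loss

tokens = rng.integers(0, VOCAB, SEQ)
start = loss(tokens)
for _ in range(10):
    tokens, current = gcg_step(tokens)
print(start > current)  # prints True: greedy swaps lower the loss
```

The gradient only linearizes the effect of a swap, so each candidate is re-scored with a true forward pass before being accepted; that exact-evaluation step is what keeps the greedy loop from following misleading gradient estimates.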