SlowLow999 / UltraBr3aksLinks
Sharing new, strong AI jailbreaks for LLMs from multiple vendors
⭐152 · Updated this week
Alternatives and similar repositories for UltraBr3aks
Users interested in UltraBr3aks are comparing it to the libraries listed below.
- ZetaLib - The only AI Library you need ⭐239 · Updated 2 weeks ago
- A list of curated resources for people interested in AI Red Teaming, Jailbreaking, and Prompt Injection ⭐409 · Updated 7 months ago
- A steganography tool for automatically encoding images that act as prompt injections/jailbreaks for AIs with code interpreter and vision. ⭐215 · Updated last year
- ⭐380 · Updated last week
- ⭐131 · Updated last month
- MCP server for maigret, a powerful OSINT tool that collects user account information from various public sources. ⭐210 · Updated 9 months ago
- Exploit prompts and roleplay techniques for bypassing AI model restrictions. ⭐559 · Updated 2 months ago
- A trial-and-error approach to temperature optimization for LLMs. Runs the same prompt at many temperatures and selects the best output aut… ⭐141 · Updated 3 months ago
- Prompt leak of Google Gemini Pro (Bard version) system prompts, instructions, and guidelines ⭐224 · Updated 2 years ago
- A curated list of OSINT MCP servers. Pull requests are welcome! ⭐64 · Updated 8 months ago
- A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jai… ⭐1,054 · Updated 2 weeks ago
- Latest AI Jailbreak Payloads & Exploit Techniques for GPT, QWEN, and all LLM Models ⭐43 · Updated 3 months ago
- Vibe Coding free starter kit: https://vibe-codingschool.com/ ⭐604 · Updated last month
- Penetration Testing AI Assistant based on open source LLMs. ⭐111 · Updated 8 months ago
- Writeups of challenges and CTFs I participated in ⭐84 · Updated 3 months ago
- HacxGPT Jailbreak: Unlock the full potential of top AI models like ChatGPT, LLaMA, and more with the world's most advanced Jailbreak p… ⭐136 · Updated last year
- ⭐241 · Updated 3 weeks ago
- A repo for all the jailbreaks ⭐31 · Updated 2 months ago
- Bypass restricted and censored content on AI chat prompts ⭐200 · Updated 3 months ago
- ⭐23 · Updated last year
- ghostcrew is an AI agent framework for bug bounty hunting, red-team operations, and penetration testing. It integrates LLM autonomy, mult… ⭐497 · Updated last week
- Resources for reverse engineering “unofficial APIs”. ⭐65 · Updated 8 months ago
- NOT for educational purposes: An MCP server for professional penetration testers including STDIO/HTTP/SSE support, nmap, go/dirbuster, ni… ⭐105 · Updated 5 months ago
- This repository documents a series of experiments focused on adversarial prompting and jailbreaks against large language models. It is pa… ⭐74 · Updated 4 months ago
- Pentest Copilot is an AI-powered, browser-based ethical hacking assistant designed to streamline pentesting workflows. ⭐233 · Updated 2 weeks ago
- JAILBREAK PROMPTS FOR ALL MAJOR AI MODELS ⭐119 · Updated last year
- MCP server exposing multiple OSINT tools for AI assistants like Claude ⭐129 · Updated 4 months ago
- A list of articles, videos, and tools related to the use of AI for OSINT. ⭐164 · Updated 3 weeks ago
- Open-source LLM Prompt-Injection and Jailbreaking Playground ⭐25 · Updated 4 months ago
- [SPOILER ALERT] Solutions to Gandalf, the prompt hacking/red teaming game from Lakera AI ⭐46 · Updated last year