Mobius-Dev / mobius-llm-adversity
This repository documents a series of experiments focused on adversarial prompting and jailbreaks against large language models. It is part of my personal red teaming portfolio, intended to showcase prompt engineering techniques, jailbreak persistence, and alignment failure analysis.
78 · Aug 15, 2025 · Updated 6 months ago
