AnthenaMatrix / Many-Shot-Jailbreaking

Research on "Many-Shot Jailbreaking" in Large Language Models (LLMs): a technique that can bypass the safety mechanisms of LLMs, including models developed by Anthropic and other leading AI organizations.
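As background, many-shot jailbreaking (described in Anthropic's 2024 research) exploits long context windows by prepending a large number of faux user/assistant dialogues in which the assistant complies with harmful requests, so that in-context learning steers the model toward answering a final target request. The following is a minimal sketch of how such a prompt could be assembled; it is not the repository's code, and the function name, shot count, and all dialogue contents are hypothetical placeholders.

```python
# Illustrative sketch only: assembling a many-shot prompt from faux dialogues.
# All content below is a benign placeholder standing in for the faux
# question/answer pairs the technique relies on.

def build_many_shot_prompt(faux_dialogues, target_question):
    """Concatenate many faux Q/A exchanges, then leave the final turn open.

    faux_dialogues: list of (question, answer) tuples demonstrating the
    compliant behaviour; target_question: the actual request appended last.
    """
    shots = []
    for question, answer in faux_dialogues:
        shots.append(f"User: {question}\nAssistant: {answer}")
    # The final turn is left unanswered so the model completes it,
    # conditioned on the many preceding compliant examples.
    shots.append(f"User: {target_question}\nAssistant:")
    return "\n\n".join(shots)


if __name__ == "__main__":
    # Placeholder dialogues; in the research, effectiveness grows with shot count.
    dialogues = [
        (f"Placeholder question {i}", f"Placeholder compliant answer {i}")
        for i in range(256)
    ]
    prompt = build_many_shot_prompt(dialogues, "Placeholder target question")
    print(prompt[:500])
```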
