TrustAI-laboratory / Many-Shot-Jailbreaking-Demo

Research on "Many-Shot Jailbreaking" in Large Language Models (LLMs). It demonstrates a technique capable of bypassing the safety mechanisms of LLMs, including those developed by Anthropic and other leading AI organizations.

Alternatives and similar repositories for Many-Shot-Jailbreaking-Demo

Users interested in Many-Shot-Jailbreaking-Demo are comparing it to the libraries listed below.
