TrustAI-laboratory / Many-Shot-Jailbreaking-Demo

Research on "Many-Shot Jailbreaking" in Large Language Models (LLMs). It unveils a novel technique capable of bypassing the safety mechanisms of LLMs, including those developed by Anthropic and other leading AI organizations. Resources

Alternatives and similar repositories for Many-Shot-Jailbreaking-Demo

Users interested in Many-Shot-Jailbreaking-Demo are comparing it with the repositories listed below.
