haizelabs / BEAST-implementation
☆16 · Updated last year
Alternatives and similar repositories for BEAST-implementation
Users interested in BEAST-implementation are comparing it to the repositories listed below
- Tree of Attacks (TAP) Jailbreaking Implementation · ☆117 · Updated last year
- ☆29 · Updated 2 years ago
- ☆66 · Updated 3 months ago
- General research for Dreadnode · ☆27 · Updated last year
- A utility to inspect, validate, sign and verify machine learning model files. · ☆62 · Updated 11 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). · ☆112 · Updated last year
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. · ☆100 · Updated 8 months ago
- Red-Teaming Language Models with DSPy · ☆249 · Updated 10 months ago
- Example agents for the Dreadnode platform · ☆22 · Updated 3 weeks ago
- Code for the paper "Defeating Prompt Injections by Design" · ☆205 · Updated 6 months ago
- Sphynx Hallucination Induction · ☆53 · Updated 11 months ago
- CLI and API server for https://github.com/dreadnode/robopages · ☆38 · Updated last week
- A YAML-based format for describing tools to LLMs, like man pages but for robots! · ☆82 · Updated 8 months ago
- Arxiv + Notion Sync · ☆20 · Updated 7 months ago
- Improve prompts for e.g. GPT-3 and GPT-J using templates and hyperparameter optimization. · ☆42 · Updated 3 years ago
- https://arxiv.org/abs/2412.02776 · ☆67 · Updated last year
- Data Scientists Go To Jupyter · ☆68 · Updated 10 months ago
- Multi-agent system (MAS) hijacking demos · ☆39 · Updated this week
- Thorn in a HaizeStack test for evaluating long-context adversarial robustness. · ☆26 · Updated last year
- Using ML models for red teaming · ☆45 · Updated 2 years ago
- A prompt injection game to collect data for robust ML research · ☆65 · Updated 11 months ago
- An interactive CLI application for interacting with authenticated Jupyter instances. · ☆55 · Updated 8 months ago
- [IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the vict… · ☆43 · Updated 10 months ago
- Small tools to assist with using Large Language Models · ☆11 · Updated 2 years ago
- ☆71 · Updated last month
- A collection of prompt injection mitigation techniques. · ☆26 · Updated 2 years ago
- Manual Prompt Injection / Red Teaming Tool · ☆51 · Updated last year
- CompChomper is a framework for measuring how LLMs perform at code completion. · ☆19 · Updated 8 months ago
- ☆38 · Updated 7 months ago
- Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts Within GenAI Ecosystems · ☆221 · Updated 4 months ago