haizelabs / BEAST-implementation
☆16 · Updated last year
Alternatives and similar repositories for BEAST-implementation
Users interested in BEAST-implementation are comparing it to the libraries listed below.
- Tree of Attacks (TAP) Jailbreaking Implementation ☆117 · Updated last year
- ☆66 · Updated 4 months ago
- General research for Dreadnode ☆27 · Updated last year
- A utility to inspect, validate, sign and verify machine learning model files. ☆65 · Updated 11 months ago
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆100 · Updated 9 months ago
- Implementation of BEAST adversarial attack for language models (ICML 2024) ☆92 · Updated last year
- Red-Teaming Language Models with DSPy ☆250 · Updated 11 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- ☆29 · Updated 2 years ago
- Multi-agent system (MAS) hijacking demos ☆39 · Updated this week
- Example agents for the Dreadnode platform ☆22 · Updated last month
- Sphynx Hallucination Induction ☆52 · Updated last year
- https://arxiv.org/abs/2412.02776 ☆67 · Updated last year
- Data Scientists Go To Jupyter ☆68 · Updated 10 months ago
- CLI and API server for https://github.com/dreadnode/robopages ☆38 · Updated last week
- LobotoMl is a set of scripts and tools to assess production deployments of ML services ☆10 · Updated 3 years ago
- A YAML-based format for describing tools to LLMs, like man pages but for robots! ☆83 · Updated 8 months ago
- A library for red-teaming LLM applications with LLMs. ☆29 · Updated last year
- ☆14 · Updated last year
- A prompt injection game to collect data for robust ML research ☆68 · Updated last year
- Thorn in a HaizeStack test for evaluating long-context adversarial robustness. ☆26 · Updated last year
- Code for the paper "Defeating Prompt Injections by Design" ☆220 · Updated 7 months ago
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆132 · Updated 3 weeks ago
- ☆38 · Updated 7 months ago
- Code for the paper "Fishing for Magikarp" ☆179 · Updated 8 months ago
- Using ML models for red teaming ☆45 · Updated 2 years ago
- [IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the vict… ☆44 · Updated 11 months ago
- ☆188 · Updated last month
- Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts Within GenAI Ecosystems ☆222 · Updated 4 months ago
- ☆11 · Updated last year