gdalmau / lakera-gandalf-solutions
My inputs for the LLM Gandalf made by Lakera
☆36 · Updated last year
Related projects
Alternatives and complementary repositories for lakera-gandalf-solutions
- Payloads for Attacking Large Language Models ☆64 · Updated 4 months ago
- A writeup for the Gandalf prompt injection game. ☆36 · Updated last year
- Stage 1: Sensitive Email/Chat Classification for Adversary Agent Emulation (espionage). This project is meant to extend Red Reaper v1 whi… ☆23 · Updated 3 months ago
- Repo with random useful scripts, utilities, prompts and stuff ☆19 · Updated last month
- Offensive security use cases of ChatGPT ☆76 · Updated last year
- Using ML models for red teaming ☆39 · Updated last year
- Keeps watch on new bug bounty (vulnerability) postings. ☆12 · Updated 7 months ago
- AI-powered bug hunter (VS Code plugin). ☆34 · Updated 2 months ago
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆19 · Updated last month
- Prompt Injections Everywhere ☆86 · Updated 3 months ago
- A helpful GPT-based triage tool for BugCrowd bug bounty programs. ☆41 · Updated last year
- Source code for the offsecml framework ☆35 · Updated 5 months ago
- Prompt engineering tool for AI models, with CLI prompt or API usage ☆1 · Updated last year
- The system consists of multiple AI agents that collaborate to strategize, generate commands, and execute scans based on the client's desc… ☆30 · Updated 7 months ago
- An LLM explicitly designed for getting hacked ☆131 · Updated last year
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆43 · Updated this week
- LLM Testing Findings Templates ☆65 · Updated 9 months ago
- ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications ☆193 · Updated 8 months ago
- General research for Dreadnode ☆17 · Updated 5 months ago
- De-redacting Elon's Email with Character-count Constrained Llama2 Decoding ☆10 · Updated 8 months ago
- [Corca / ML] Automatically solving Gandalf AI with an LLM ☆46 · Updated last year
- An example vulnerable app that integrates an LLM ☆13 · Updated 7 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆95 · Updated 9 months ago
- Awesome products for securing AI systems; includes open-source and commercial options and an infographic licensed CC-BY-SA-4.0. ☆48 · Updated 5 months ago