corca-ai / LLMFuzzAgent
[Corca / ML] Automatically solved Gandalf AI with LLM
☆47 · Updated last year
Alternatives and similar repositories for LLMFuzzAgent:
Users interested in LLMFuzzAgent are comparing it to the libraries listed below.
- A benchmark for prompt injection detection systems. ☆95 · Updated 4 months ago
- Red-Teaming Language Models with DSPy ☆154 · Updated 9 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆108 · Updated 10 months ago
- Fiddler Auditor is a tool to evaluate language models. ☆174 · Updated 10 months ago
- Payloads for Attacking Large Language Models ☆72 · Updated 6 months ago
- Turning Gandalf against itself. Use LLMs to automate playing the Lakera Gandalf challenge without needing to set up an account with a platform. ☆27 · Updated last year
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆93 · Updated last year
- Dropbox LLM Security research code and results ☆219 · Updated 8 months ago
- A framework-less approach to robust agent development. ☆149 · Updated last week
- Curation of prompts that are known to be adversarial to large language models ☆177 · Updated last year
- ☆70 · Updated 2 months ago
- ☆49 · Updated 4 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆326 · Updated 11 months ago
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆88 · Updated 7 months ago
- Guard your LangChain applications against prompt injection with Lakera ChainGuard. ☆18 · Updated 2 weeks ago
- A text embedding viewer for the Jupyter environment ☆19 · Updated last year
- Approximation of the Claude 3 tokenizer by inspecting generation stream ☆120 · Updated 6 months ago
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). ☆129 · Updated last year
- Evaluate your LLM apps, RAG pipeline, any generated text, and more! · Updated 8 months ago
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems. ☆282 · Updated last month
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ☆133 · Updated last year
- ☆26 · Updated 10 months ago
- ☆26 · Updated 2 months ago
- GPT2 fine-tuning pipeline with KerasNLP, TensorFlow, and TensorFlow Extended ☆32 · Updated last year
- Make your GenAI Apps Safe & Secure: test & harden your system prompt ☆430 · Updated 3 months ago
- ☆470 · Updated last month
- Learn about a type of vulnerability that specifically targets machine learning models ☆215 · Updated 7 months ago
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. … ☆39 · Updated last year
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆251 · Updated last week
- source for llmsec.net ☆13 · Updated 6 months ago