Reapor-Yurnero / imprompter
Codebase for "Imprompter: Tricking LLM Agents into Improper Tool Use" (https://arxiv.org/abs/2410.14923)
☆27 · Updated 3 weeks ago
Related projects
Alternatives and complementary repositories for imprompter
- Dropbox LLM Security research code and results (☆216, updated 5 months ago)
- Red-Teaming Language Models with DSPy (☆142, updated 7 months ago)
- ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications (☆193, updated 8 months ago)
- Do you want to learn AI Security but don't know where to start? Take a look at this map. (☆19, updated 6 months ago)
- A benchmark for prompt injection detection systems. (☆86, updated 2 months ago)
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. (☆46, updated 6 months ago)
- Machine Learning Attack Series (☆56, updated 5 months ago)
- AI agent with RAG+ReAct on Indian Constitution & BNS (☆43, updated 3 weeks ago)
- A trace analysis tool for AI agents. (☆119, updated last month)
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs (☆309, updated 9 months ago)
- Implementation of the BEAST adversarial attack for language models (ICML 2024) (☆72, updated 5 months ago)
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". (☆84, updated 8 months ago)
- A repository of Language Model Vulnerabilities and Exposures (LVEs). (☆107, updated 8 months ago)
- DevOps AI Assistant CLI. Ask questions about your AWS services, CloudWatch metrics, and billing. (☆63, updated 3 months ago)
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. (☆85, updated 5 months ago)
- Masked Python SDK wrapper for the OpenAI API. Use public LLM APIs securely. (☆112, updated last year)
- My inputs for the LLM Gandalf made by Lakera (☆36, updated last year)
- PII Masker is an open-source tool for protecting sensitive data by automatically detecting and masking PII using advanced AI, powered by … (☆40, updated 2 weeks ago)
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. … (☆38, updated 10 months ago)
- Every practical and proposed defense against prompt injection. (☆339, updated 5 months ago)
- Payloads for Attacking Large Language Models (☆63, updated 4 months ago)
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). (☆121, updated 10 months ago)
- A writeup for the Gandalf prompt injection game. (☆36, updated last year)
- LLM | Security | Operations in one GitHub repo with good links and pictures. (☆17, updated 3 weeks ago)
- Code for the website www.jailbreakchat.com (☆74, updated last year)
- Secure Jupyter Notebooks and Experimentation Environment (☆55, updated 3 weeks ago)
- ATLAS tactics, techniques, and case studies data (☆49, updated last month)