terjanq / hack-a-prompt
Tools and our test data developed for the HackAPrompt 2023 competition
★40 · Updated last year
Alternatives and similar repositories for hack-a-prompt
Users interested in hack-a-prompt are comparing it to the repositories listed below.
- ★65 · Updated 5 months ago
- LLMFuzzer - Fuzzing Framework for Large Language Models. LLMFuzzer is the first open-source fuzzing framework specifically designed … ★288 · Updated last year
- A collection of prompt injection mitigation techniques. ★23 · Updated last year
- ★121 · Updated last month
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Bench ★86 · Updated this week
- Code snippets to reproduce MCP tool poisoning attacks. ★145 · Updated 3 months ago
- future-proof vulnerability detection benchmark, based on CVEs in open-source repos ★58 · Updated last week
- CVE-Bench: A Benchmark for AI Agents' Ability to Exploit Real-World Web Application Vulnerabilities ★62 · Updated last month
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents on… ★44 · Updated 3 weeks ago
- Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ★23 · Updated last year
- XBOW Validation Benchmarks ★168 · Updated last month
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ★504 · Updated 9 months ago
- ★48 · Updated 9 months ago
- An Execution Isolation Architecture for LLM-Based Agentic Systems ★83 · Updated 5 months ago
- [CCS'24] An LLM-based, fully automated fuzzing tool for option combination testing. ★84 · Updated 3 months ago
- A benchmark for prompt injection detection systems. ★122 · Updated 2 months ago
- LLM security and privacy ★48 · Updated 9 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ★393 · Updated last year
- ★60 · Updated 2 months ago
- https://arxiv.org/abs/2412.02776 ★59 · Updated 7 months ago
- VulZoo: A Comprehensive Vulnerability Intelligence Dataset (ASE 2024 Demo) ★54 · Updated 3 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ★112 · Updated last year
- ★20 · Updated last year
- Payloads for Attacking Large Language Models ★92 · Updated last month
- ★25 · Updated last year
- The automated prompt injection framework for LLM-integrated applications. ★217 · Updated 10 months ago
- A library to produce cybersecurity exploitation routes (exploit flows). Inspired by TensorFlow. ★35 · Updated last year
- The fastest Trust Layer for AI Agents ★138 · Updated last month
- CS-Eval is a comprehensive evaluation suite for fundamental cybersecurity models or large language models' cybersecurity ability. ★43 · Updated 7 months ago
- Challenge Problem #1 - Linux Kernel (NOTE: This code does not reflect the active state of what will be used at competition time, please r… ★53 · Updated last year