terjanq / hack-a-prompt
Tools and our test data developed for the HackAPrompt 2023 competition
☆44 · Updated 2 years ago
Alternatives and similar repositories for hack-a-prompt
Users interested in hack-a-prompt are comparing it to the libraries listed below.
- ☆65 · Updated 2 months ago
- ☆173 · Updated 5 months ago
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents on… ☆94 · Updated last month
- future-proof vulnerability detection benchmark, based on CVEs in open-source repos ☆60 · Updated this week
- ☆96 · Updated 2 months ago
- ☆25 · Updated 2 years ago
- https://arxiv.org/abs/2412.02776 ☆66 · Updated 11 months ago
- The jailbreak-evaluation is an easy-to-use Python package for language model jailbreak evaluation. ☆27 · Updated last year
- LLM security and privacy ☆51 · Updated last year
- ☆121 · Updated 2 months ago
- A collection of prompt injection mitigation techniques. ☆24 · Updated 2 years ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆27 · Updated last year
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed… ☆327 · Updated last year
- Payloads for Attacking Large Language Models ☆104 · Updated 5 months ago
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Bench ☆102 · Updated 3 weeks ago
- Curation of prompts that are known to be adversarial to large language models ☆186 · Updated 2 years ago
- SourceGPT - prompt manager and source code analyzer built on top of ChatGPT as the oracle ☆109 · Updated 2 years ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆435 · Updated last year
- Code used to run the platform for the LLM CTF co-located with SaTML 2024 ☆27 · Updated last year
- ☆84 · Updated 3 months ago
- ☆21 · Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- CodeQL workshops for GitHub Universe ☆96 · Updated 3 years ago
- A benchmark for prompt injection detection systems. ☆150 · Updated 2 months ago
- Chat4GPT Experiments for Security ☆11 · Updated 2 years ago
- CVE-Bench: A Benchmark for AI Agents’ Ability to Exploit Real-World Web Application Vulnerabilities ☆119 · Updated last week
- ☆17 · Updated 2 years ago
- 🥇 Amazon Nova AI Challenge Winner - ASTRA emerged victorious as the top attacking team in Amazon's global AI safety competition, defeati… ☆62 · Updated 3 months ago
- Security Harness Engineering for Robust Program Analysis ☆103 · Updated 3 months ago
- Cyber-Zero: Training Cybersecurity Agents Without Runtime ☆39 · Updated this week