sherdencooper / prompt-injection
Official repo for Customized but Compromised: Assessing Prompt Injection Risks in User-Designed GPTs
☆26Updated last year
Alternatives and similar repositories for prompt-injection
Users that are interested in prompt-injection are comparing it to the libraries listed below
Sorting:
- A better way of testing, inspecting, and analyzing AI Agent traces.☆35Updated last week
- An Execution Isolation Architecture for LLM-Based Agentic Systems☆79Updated 3 months ago
- AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks☆45Updated 11 months ago
- This repository provides a benchmark for prompt Injection attacks and defenses☆196Updated 2 weeks ago
- Sphynx Hallucination Induction☆54Updated 3 months ago
- ☆62Updated 10 months ago
- A framework for hosting and scaling AI agents.☆34Updated 5 months ago
- Private ChatGPT/Perplexity. Securely unlocks knowledge from confidential business information.☆64Updated 7 months ago
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a…☆55Updated 2 months ago
- ☆72Updated 6 months ago
- ElasticSearch agent based on ElasticSearch, LangChain and ChatGPT 4☆47Updated last year
- Data and evaluation scripts for "CodePlan: Repository-level Coding using LLMs and Planning", FSE 2024☆68Updated 8 months ago
- Code interpreter support for o1☆32Updated 8 months ago
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts☆488Updated 7 months ago
- ☆62Updated 5 months ago
- Prompt Builder is a small Python application that implements the principles outlined in the paper "Principled Instructions Are All You Ne…☆31Updated last year
- ☆100Updated 2 months ago
- ToolFuzz is a fuzzing framework designed to test your LLM Agent tools.☆17Updated 2 months ago
- Guard your LangChain applications against prompt injection with Lakera ChainGuard.☆22Updated 2 months ago
- Agent computer interface for AI software engineer.☆73Updated this week
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.☆154Updated last week
- ☆28Updated last year
- ☆93Updated 8 months ago
- ☆50Updated 5 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆365Updated last year
- DeveloperGPT is a LLM-powered command line tool that enables natural language to terminal commands and in-terminal chat.☆43Updated 5 months ago
- 𝙏𝙪𝙧𝙣𝙞𝙣𝙜 𝙨𝙢𝙖𝙡𝙡 𝙩𝙖𝙨𝙠 𝙙𝙚𝙨𝙘𝙧𝙞𝙥𝙩𝙞𝙤𝙣𝙨 𝙞𝙣𝙩𝙤 𝙢𝙚𝙜𝙖 𝙥𝙧𝙤𝙢𝙥𝙩𝙨 𝙖𝙪𝙩𝙤𝙢𝙖𝙜𝙞𝙘𝙖𝙡𝙡𝙮.☆79Updated 9 months ago
- Lightweight and Flexible Library for Creating Agents and Multi-Agent Conversations 🤖☆24Updated last year
- The jailbreak-evaluation is an easy-to-use Python package for language model jailbreak evaluation.☆23Updated 6 months ago
- Accompanying code and SEP dataset for the "Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?" paper.☆52Updated 2 months ago