sherdencooper / prompt-injection
Official repo for Customized but Compromised: Assessing Prompt Injection Risks in User-Designed GPTs
☆27 Updated last year
Alternatives and similar repositories for prompt-injection
Users interested in prompt-injection are comparing it to the libraries listed below.
- The fastest Trust Layer for AI Agents ☆136 Updated last week
- ☆72 Updated 7 months ago
- Private ChatGPT/Perplexity. Securely unlocks knowledge from confidential business information. ☆64 Updated 7 months ago
- A collection of prompt injection mitigation techniques. ☆23 Updated last year
- An Execution Isolation Architecture for LLM-Based Agentic Systems ☆80 Updated 4 months ago
- Data and evaluation scripts for "CodePlan: Repository-level Coding using LLMs and Planning", FSE 2024 ☆69 Updated 9 months ago
- Guard your LangChain applications against prompt injection with Lakera ChainGuard. ☆22 Updated 3 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆30 Updated last year
- jailbreak-evaluation: an easy-to-use Python package for language model jailbreak evaluation. ☆23 Updated 7 months ago
- LLM security and privacy ☆49 Updated 7 months ago
- ☆50 Updated last week
- ☆20 Updated last year
- Harness used to benchmark aider against SWE Bench benchmarks ☆72 Updated 11 months ago
- Examples and guides for using Swarms Framework ☆38 Updated 2 months ago
- ☆28 Updated last year
- ☆94 Updated 8 months ago
- Can Large Language Models Solve Security Challenges? We test LLMs' ability to interact with and break out of shell environments using the Over… ☆13 Updated last year
- A better way of testing, inspecting, and analyzing AI Agent traces. ☆37 Updated last week
- Red-Teaming Language Models with DSPy ☆195 Updated 3 months ago
- ⚡ Simplify and optimize the use of LLMs ☆25 Updated last year
- Simple example of autonomous research run in parallel from my Aetherius AI Assistant project. Uses OpenAI's GPT-3.5, GPT-4, and Microsof… ☆17 Updated 2 years ago
- AI Agent capable of automating various tasks using MCP ☆37 Updated 2 months ago
- Prompt attack-defense, prompt injection, and reverse-engineering notes and examples ☆183 Updated 3 months ago
- Prompt Builder is a small Python application that implements the principles outlined in the paper "Principled Instructions Are All You Ne… ☆32 Updated last year
- AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks ☆47 Updated 2 weeks ago
- This project investigates the security of large language models by performing binary classification of a set of input prompts to discover… ☆39 Updated last year
- ☆109 Updated 2 weeks ago
- EvoEval: Evolving Coding Benchmarks via LLM ☆72 Updated last year
- 👩🏻‍🔬🧪 SciTonic is a highly adaptive technical operator of agents that can produce complex analyses on technical data with high perfor… ☆51 Updated last year
- Code interpreter support for o1 ☆32 Updated 8 months ago