GPTSafe / PromptGuard
Build production-ready apps for GPT using Node.js & TypeScript
☆46 · Updated 2 years ago
Alternatives and similar repositories for PromptGuard
Users interested in PromptGuard are comparing it to the repositories listed below.
- Security and compliance proxy for LLM APIs ☆50 · Updated 2 years ago
- Repo with random useful scripts, utilities, prompts and stuff ☆194 · Updated 3 weeks ago
- The fastest Trust Layer for AI Agents ☆146 · Updated 6 months ago
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. … ☆58 · Updated last year
- Masked Python SDK wrapper for the OpenAI API. Use public LLM APIs securely. ☆120 · Updated 2 years ago
- AgentFence is an open-source platform for automatically testing AI agent security. It identifies vulnerabilities such as prompt injection… ☆45 · Updated 9 months ago
- My attempt at making a GPT agent for pentesting ☆40 · Updated 2 years ago
- Dropbox LLM Security research code and results ☆250 · Updated last year
- 🤖 A GitHub Action that leverages fabric patterns through an agent-based approach ☆32 · Updated 11 months ago
- Top 10 for Agentic AI (AI Agent Security), serving as the core of OWASP and CSA red-teaming work ☆157 · Updated 2 months ago
- Programmatic, CLI, and MCP access to Granola.ai data. ☆25 · Updated 5 months ago
- ⚡ Simplify and optimize the use of LLMs ☆52 · Updated 3 months ago
- MCP security wrapper ☆205 · Updated 2 weeks ago
- R.A.Y.D.E.R revolutionizes security testing for generative AI by letting you test chatbots directly through their web interfaces. No API … ☆14 · Updated 4 months ago
- ☆44 · Updated 3 years ago
- Autospec is an open-source AI agent that takes a web app URL, autonomously QAs it, and saves its passing specs as E2E test code ☆56 · Updated 10 months ago
- My inputs for the LLM Gandalf made by Lakera ☆48 · Updated 2 years ago
- Payloads for Attacking Large Language Models ☆114 · Updated 6 months ago
- OpenShield is a new-generation security layer for AI models ☆83 · Updated last week
- Practical Jupyter notebooks from Andrew Ng and the Giskard team's "Red Teaming LLM Applications" course on DeepLearning.AI. ☆22 · Updated last year
- A powerful MCP (Model Context Protocol) server that audits npm package dependencies for security vulnerabilities. Built with remote npm r… ☆49 · Updated 5 months ago
- AI assistant that can get stock prices ☆46 · Updated 2 years ago
- GPT-Analyst: a GPT for GPT analysis and reverse engineering ☆203 · Updated last year
- A minimal TypeScript library with research-informed prompt injection attacks. ☆51 · Updated 3 months ago
- A flexible framework for security teams to build and deploy AI-powered workflows that complement their existing security operations. ☆145 · Updated last week
- Framework for LLM evaluation, guardrails and security ☆114 · Updated last year
- A simple worker for extracting page content for a given URL ☆126 · Updated last year
- ☆50 · Updated last week
- A prompt defence is a multi-layer defence that protects your applications against prompt injection attacks. ☆20 · Updated last week
- Search the Common Crawl using Lambda functions ☆94 · Updated 6 years ago