arekusandr / last_layer
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
☆124 · Updated last year
Alternatives and similar repositories for last_layer
Users interested in last_layer are comparing it to the libraries listed below.
- This Python package simplifies generating documentation for functions and methods in designated modules or libraries. It enables effortle… ☆60 · Updated last year
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆452 · Updated 2 years ago
- OpenShield is a new generation security layer for AI models ☆84 · Updated last week
- AgentFence is an open-source platform for automatically testing AI agent security. It identifies vulnerabilities such as prompt injection… ☆50 · Updated 11 months ago
- Zero-trust AI APIs for easy and private consumption of open-source LLMs ☆41 · Updated last year
- Chat strategies for LLMs ☆129 · Updated 3 weeks ago
- Data Encoding and Representation Analysis ☆40 · Updated 2 years ago
- A multi-layer defence for protecting your applications against prompt injection attacks. ☆21 · Updated last month
- Guardrails for secure and robust agent development ☆385 · Updated 3 weeks ago
- Generalist Software Agents to Solve Software Engineering Tasks ☆234 · Updated last year
- Masked Python SDK wrapper for OpenAI API. Use public LLM APIs securely. ☆120 · Updated 2 years ago
- Deepmark AI enables a unique testing environment for language models (LLM) assessment on task-specific metrics and on your own data so yo… ☆104 · Updated 2 years ago
- Make your GenAI Apps Safe & Secure: Test & harden your system prompt ☆622 · Updated 2 weeks ago
- Red-Teaming Language Models with DSPy ☆250 · Updated 11 months ago
- AIShield Watchtower: Dive Deep into AI's Secrets! 🔍 Open-source tool by AIShield for AI model insights & vulnerability scans. Secure you… ☆200 · Updated last week
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆100 · Updated 9 months ago
- Structured Output Is All You Need! ☆59 · Updated last year
- The fastest Trust Layer for AI Agents ☆152 · Updated this week
- A Python implementation of priompt - a neat way of managing context from diverse sources for LLM applications. ☆115 · Updated 6 months ago
- AI-to-AI Testing | Simulation framework for LLM-based applications ☆136 · Updated 2 years ago
- VerifAI initiative to build open-source easy-to-deploy generative question-answering engine that can reference and verify answers for cor… ☆76 · Updated 4 months ago
- Code scanner to check for issues in prompts and LLM calls ☆76 · Updated 10 months ago
- Gateway and load balancer for your LLM inference endpoints ☆25 · Updated last year
- GuardRail: Advanced tool for data analysis and AI content generation using OpenAI GPT models. Features sentiment analysis, content classi… ☆140 · Updated 2 years ago
- Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to pote… ☆201 · Updated 4 months ago
- ☆297 · Updated 10 months ago
- Curation of prompts that are known to be adversarial to large language models ☆188 · Updated 2 years ago
- A Python package for zero-shot text anonymization using Transformer-based NER models. ☆81 · Updated last month
- Uses the ChatGPT model to determine whether a user-supplied question is safe and filters out dangerous questions ☆49 · Updated 2 years ago
- Every practical and proposed defense against prompt injection. ☆630 · Updated 11 months ago