arekusandr / last_layerLinks
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
☆116Updated 10 months ago
Alternatives and similar repositories for last_layer
Users that are interested in last_layer are comparing it to the libraries listed below
Sorting:
- AI-driven Threat modeling-as-a-Code (TaaC-AI)☆134Updated 11 months ago
- This Python package simplifies generating documentation for functions and methods in designated modules or libraries. It enables effortle…☆60Updated last year
- A benchmark for prompt injection detection systems.☆115Updated 3 weeks ago
- The fastest Trust Layer for AI Agents☆136Updated last week
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025]☆313Updated 4 months ago
- A better way of testing, inspecting, and analyzing AI Agent traces.☆37Updated last week
- Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to pote…☆168Updated 2 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆374Updated last year
- Enriched Python function call graphs for agents and coding assistants☆96Updated last week
- Chat strategies for LLMs☆95Updated 9 months ago
- Data Encoding and Representation Analysis☆40Updated last year
- Every practical and proposed defense against prompt injection.☆472Updated 3 months ago
- Modular, open source LLMOps stack that separates concerns: LiteLLM unifies LLM APIs, manages routing and cost controls, and ensures high-…☆99Updated 3 months ago
- Dropbox LLM Security research code and results☆228Updated last year
- Leveraging DSPy for AI-driven task understanding and solution generation, the Self-Discover Framework automates problem-solving through r…☆60Updated 10 months ago
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a…☆56Updated 2 months ago
- Red-Teaming Language Models with DSPy☆193Updated 3 months ago
- Framework for LLM evaluation, guardrails and security☆112Updated 8 months ago
- Masked Python SDK wrapper for OpenAI API. Use public LLM APIs securely.☆117Updated 2 years ago
- Logging and caching superpowers for the openai sdk☆105Updated last year
- Project LLM Verification Standard☆44Updated 3 weeks ago
- Guardrails for secure and robust agent development☆285Updated 3 weeks ago
- Tools for LLM agents.☆63Updated 5 months ago
- Self-hardening firewall for large language models☆265Updated last year
- A Ruby on Rails style framework for the DSPy (Demonstrate, Search, Predict) project for Language Models like GPT, BERT, and LLama.☆125Updated 7 months ago
- Security and compliance proxy for LLM APIs☆47Updated last year
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆389Updated last year
- The jailbreak-evaluation is an easy-to-use Python package for language model jailbreak evaluation.☆23Updated 7 months ago
- This repo is for handling Question Answering, especially for Multi-hop Question Answering☆67Updated last year
- AgentFence is an open-source platform for automatically testing AI agent security. It identifies vulnerabilities such as prompt injection…☆12Updated 3 months ago