opaque-systems / opaquegateway-python
A privacy layer around LLMs
☆33 · Updated last year
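For context, a privacy layer like this typically sits between the application and the model, masking sensitive values before a prompt leaves the client and restoring them in the response. The sketch below illustrates that general pattern only; it is not the OpaqueGateway or OpaquePrompts API, and every name in it (`sanitize`, `desanitize`, `call_llm`) is hypothetical.

```python
# Minimal sketch of the "privacy layer around an LLM" pattern (hypothetical names,
# not the OpaqueGateway API): mask PII placeholders before the prompt leaves the
# client, then restore them in the model's response.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def sanitize(prompt: str):
    """Replace e-mail addresses with placeholder tokens; return text + mapping."""
    mapping = {}
    def _swap(match):
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    return EMAIL_RE.sub(_swap, prompt), mapping

def desanitize(text: str, mapping: dict) -> str:
    """Put the original values back into the model output."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; echoes the prompt for demonstration."""
    return f"Reply based on: {prompt}"

if __name__ == "__main__":
    raw = "Write a reply to alice@example.com about the invoice."
    safe_prompt, mapping = sanitize(raw)   # PII never leaves in clear text
    answer = desanitize(call_llm(safe_prompt), mapping)
    print(safe_prompt)
    print(answer)
```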
Alternatives and similar repositories for opaquegateway-python
Users interested in opaquegateway-python are comparing it to the libraries listed below.
- ☆16 · Updated last year
- Flow Integrity Deterministic Enforcement System. Mechanisms for securing AI agents with information-flow control. ☆74 · Updated 8 months ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆103 · Updated last year
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆33 · Updated last year
- Supply chain security for ML ☆218 · Updated last week
- The repository contains the code for analysing the leakage of personally identifiable information (PII) from the output of next word pred… ☆103 · Updated last year
- ☆33 · Updated 4 months ago
- A demo chatbot that uses the OpaquePrompts API ☆17 · Updated 2 years ago
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆79 · Updated 5 months ago
- Example scripts that showcase how to use Private AI Text to de-identify, redact, hash, tokenize, mask, and synthesize PII in text. ☆85 · Updated last month
- An LLM leaderboard for stateful agents ☆20 · Updated 3 months ago
- BlindBox is a tool to isolate and deploy applications inside Trusted Execution Environments for privacy-by-design apps ☆64 · Updated 2 years ago
- ☆54 · Updated 10 months ago
- LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins ☆29 · Updated last year
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state. ☆71 · Updated 8 months ago
- Code for the paper "Defeating Prompt Injections by Design" ☆232 · Updated 7 months ago
- An Execution Isolation Architecture for LLM-Based Agentic Systems ☆103 · Updated last year
- A community wiki for all things AI/ML bill of materials (MLBOM, AIBOM) and transparency into AI/ML models. ☆44 · Updated last year
- This is the official code for the paper "Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing Guardrail Moderation" ☆53 · Updated last year
- ☆34 · Updated last year
- (ICLR 2023 Spotlight) MPCFormer: fast, performant, and private transformer inference with MPC ☆102 · Updated 2 years ago
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆83 · Updated 6 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- Code for our paper "AgentMonitor: A Plug-and-Play Framework for Predictive and Secure Multi-Agent Systems" ☆13 · Updated last year
- The code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System ☆152 · Updated last year
- Universal Robustness Evaluation Toolkit (for Evasion) ☆32 · Updated 4 months ago
- Do you want to learn AI Security but don't know where to start? Take a look at this map. ☆29 · Updated last year
- Run SWE-bench evaluations remotely ☆51 · Updated 5 months ago
- Security Attacks on LLM-based Code Completion Tools (AAAI 2025) ☆21 · Updated last month
- ☆120 · Updated 2 years ago