opaque-systems / opaquegateway-python
A privacy layer around LLMs
☆31 · Updated last year
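The gateway pattern behind projects like this (and the OpaquePrompts API mentioned below) is to sit between the application and the LLM: replace sensitive spans in the prompt with placeholder tokens before the model sees them, then restore the originals in the model's response. The sketch below illustrates only that general pattern with a simple email regex; the function names, placeholder format, and detection rule are illustrative assumptions, not the actual opaquegateway-python API.

```python
import re

# Hypothetical privacy-gateway sketch (NOT the real opaquegateway-python
# API): mask PII-like spans before the prompt reaches the LLM, then
# restore them in the model's output. Real gateways use NER models and
# cover many PII types; a single email regex stands in for detection here.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def sanitize(prompt: str):
    """Replace each detected email with a numbered placeholder token."""
    mapping = {}

    def repl(match):
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)  # remember the original value
        return token

    return EMAIL_RE.sub(repl, prompt), mapping

def desanitize(text: str, mapping: dict) -> str:
    """Restore the original values in the LLM's response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

clean, mapping = sanitize("Email alice@example.com about the invoice.")
# clean == "Email <PII_0> about the invoice."
restored = desanitize(clean, mapping)
# restored == "Email alice@example.com about the invoice."
```

The key design point is that the mapping from placeholders back to real values never leaves the gateway, so the upstream LLM provider only ever sees the sanitized prompt.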
Alternatives and similar repositories for opaquegateway-python
Users interested in opaquegateway-python are comparing it to the libraries listed below.
- ☆16 · Updated last year
- A demo chatbot that uses the OpaquePrompts API ☆17 · Updated last year
- Code for analysing the leakage of personally identifiable information (PII) from the output of next word pred… ☆97 · Updated 10 months ago
- ☆101 · Updated last year
- ☆44 · Updated 2 years ago
- LLM security and privacy ☆48 · Updated 8 months ago
- Official implementation of the WASP web agent security benchmark ☆23 · Updated last month
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆51 · Updated 2 months ago
- ☆26 · Updated last year
- A toolkit to assess data privacy in LLMs (under development) ☆58 · Updated 5 months ago
- Code for the paper "Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models" (NAACL-… ☆41 · Updated 3 years ago
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆30 · Updated last year
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks ☆69 · Updated last year
- TabLeak: Tabular Data Leakage in Federated Learning ☆15 · Updated 11 months ago
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆80 · Updated 9 months ago
- ☆57 · Updated last year
- LAMP: Extracting Text from Gradients with Language Model Priors (NeurIPS '22) ☆24 · Updated last month
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" ☆36 · Updated 11 months ago
- ☆25 · Updated 8 months ago
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆51 · Updated 10 months ago
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer ☆43 · Updated last year
- Package to optimize adversarial attacks against (large) language models with varied objectives ☆69 · Updated last year
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆134 · Updated 11 months ago
- Security Attacks on LLM-based Code Completion Tools (AAAI 2025) ☆19 · Updated last month
- Code for the ACL 2021 Findings paper "Differential Privacy for Text Analytics via Natural Text Sanitization" ☆28 · Updated 3 years ago
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state ☆61 · Updated last month
- Supply chain security for ML ☆167 · Updated last week
- Code release for MPCViT, accepted by ICCV 2023 ☆16 · Updated 5 months ago
- (ICLR 2023 Spotlight) MPCFormer: fast, performant, and private transformer inference with MPC ☆97 · Updated 2 years ago
- ☆44 · Updated 4 months ago