opaque-systems / opaquegateway-python
A privacy layer around LLMs
☆31 · Updated last year
Alternatives and similar repositories for opaquegateway-python
Users interested in opaquegateway-python are comparing it to the libraries listed below.
- ☆16 · Updated last year
- A demo chatbot that uses the OpaquePrompts API ☆17 · Updated last year
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆30 · Updated last year
- ☆97 · Updated last year
- Code for analysing the leakage of personally identifiable information (PII) from the output of next word pred… ☆96 · Updated 9 months ago
- ☆26 · Updated last year
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆133 · Updated 10 months ago
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆48 · Updated 2 months ago
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" in Findings of NAACL 2022 ☆29 · Updated 2 years ago
- Official implementation of [USENIX Sec'25] StruQ: Defending Against Prompt Injection with Structured Queries ☆38 · Updated this week
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer ☆43 · Updated last year
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents ☆175 · Updated this week
- Modular framework for property inference attacks on deep neural networks ☆15 · Updated last year
- (ICLR 2023 Spotlight) MPCFormer: fast, performant, and private transformer inference with MPC ☆97 · Updated last year
- Fault-aware neural code rankers ☆28 · Updated 2 years ago
- A re-implementation of the "Extracting Training Data from Large Language Models" paper by Carlini et al., 2020 ☆35 · Updated 2 years ago
- DP-FTRL from "Practical and Private (Deep) Learning without Sampling or Shuffling" for centralized training ☆29 · Updated last week
- A repository of Language Model Vulnerabilities and Exposures (LVEs) ☆110 · Updated last year
- Flow Integrity Deterministic Enforcement System: mechanisms for securing AI agents with information-flow control ☆21 · Updated last week
- Dataset for the Tensor Trust project ☆40 · Updated last year
- LLM security and privacy ☆49 · Updated 7 months ago
- ☆34 · Updated 6 months ago
- Differentially-private transformers using HuggingFace and Opacus ☆140 · Updated 9 months ago
- ☆40 · Updated 2 months ago
- ☆44 · Updated 2 years ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks ☆69 · Updated last year
- Private Evolution: Generating DP Synthetic Data without Training [ICLR 2024, ICML 2024 Spotlight] ☆97 · Updated last week
- [ICML 2024 Spotlight] Differentially Private Synthetic Data via Foundation Model APIs 2: Text ☆40 · Updated 4 months ago
- Privacy Testing for Deep Learning ☆205 · Updated last year
- TrustAgent: Towards Safe and Trustworthy LLM-based Agents ☆41 · Updated 4 months ago