llm-platform-security / chatgpt-plugin-evalLinks
LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins
β29Updated last year
Alternatives and similar repositories for chatgpt-plugin-eval
Users that are interested in chatgpt-plugin-eval are comparing it to the libraries listed below
Sorting:
- π₯π₯π₯ Detecting hidden backdoors in Large Language Models with only black-box accessβ50Updated 6 months ago
- β73Updated last year
- An Execution Isolation Architecture for LLM-Based Agentic Systemsβ100Updated 10 months ago
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts.β176Updated 8 months ago
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024)β156Updated last year
- β25Updated 4 years ago
- Code for Findings-ACL 2023 paper: Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Recβ¦β48Updated last year
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization"β76Updated 4 months ago
- Whispers in the Machine: Confidentiality in Agentic Systemsβ41Updated last week
- A curated list of trustworthy Generative AI papers. Daily updating...β75Updated last year
- official implementation of [USENIX Sec'25] StruQ: Defending Against Prompt Injection with Structured Queriesβ55Updated last month
- LLM security and privacyβ52Updated last year
- PAL: Proxy-Guided Black-Box Attack on Large Language Modelsβ55Updated last year
- The repository contains the code for analysing the leakage of personally identifiable (PII) information from the output of next word predβ¦β101Updated last year
- A re-implementation of the "Extracting Training Data from Large Language Models" paper by Carlini et al., 2020β37Updated 3 years ago
- β19Updated last year
- Official implementation of paper: DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakersβ66Updated last year
- Code&Data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024]β102Updated last year
- β99Updated last year
- β114Updated 2 years ago
- [NeurIPS 2024] Official implementation for "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning"β181Updated 8 months ago
- This repository provides a benchmark for prompt injection attacks and defenses in LLMsβ361Updated last month
- β124Updated last year
- β68Updated 5 years ago
- Machine Learning & Security Seminar @Purdue Universityβ25Updated 2 years ago
- Agent Security Bench (ASB)β155Updated last month
- Code release for DeepJudge (S&P'22)β52Updated 2 years ago
- A survey of privacy problems in Large Language Models (LLMs). Contains summary of the corresponding paper along with relevant codeβ68Updated last year
- β150Updated last year
- Learning Security Classifiers with Verified Global Robustness Properties (CCS'21) https://arxiv.org/pdf/2105.11363.pdfβ28Updated 4 years ago