llm-platform-security / chatgpt-plugin-evalLinks
LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins
☆28Updated last year
Alternatives and similar repositories for chatgpt-plugin-eval
Users that are interested in chatgpt-plugin-eval are comparing it to the libraries listed below
Sorting:
- An Execution Isolation Architecture for LLM-Based Agentic Systems☆97Updated 9 months ago
- ☆67Updated 10 months ago
- ☆25Updated 4 years ago
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization"☆74Updated 3 months ago
- official implementation of [USENIX Sec'25] StruQ: Defending Against Prompt Injection with Structured Queries☆50Updated 3 months ago
- PAL: Proxy-Guided Black-Box Attack on Large Language Models☆55Updated last year
- Repository for "SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques" publis…☆80Updated 2 years ago
- ☆123Updated last year
- This repository provides a benchmark for prompt injection attacks and defenses☆330Updated last week
- Machine Learning & Security Seminar @Purdue University☆25Updated 2 years ago
- LLM security and privacy☆51Updated last year
- CodexLeaks: Privacy Leaks from Code Generation Language Models in GitHub Copilot☆11Updated 2 years ago
- Code&Data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024]☆96Updated last year
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access☆45Updated 5 months ago
- A curated list of trustworthy Generative AI papers. Daily updating...☆75Updated last year
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai…☆51Updated 7 months ago
- Learning Security Classifiers with Verified Global Robustness Properties (CCS'21) https://arxiv.org/pdf/2105.11363.pdf☆28Updated 3 years ago
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a…☆74Updated 2 months ago
- ☆89Updated last year
- [NeurIPS 2024] Official implementation for "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning"☆163Updated 6 months ago
- Code used to run the platform for the LLM CTF colocated with SaTML 2024☆27Updated last year
- ☆52Updated last year
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024)☆153Updated 11 months ago
- Whispers in the Machine: Confidentiality in Agentic Systems☆41Updated this week
- ☆111Updated last year
- Code for the AAAI 2023 paper "CodeAttack: Code-based Adversarial Attacks for Pre-Trained Programming Language Models☆33Updated 2 years ago
- The repository contains the code for analysing the leakage of personally identifiable (PII) information from the output of next word pred…☆101Updated last year
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.☆340Updated last week
- ☆17Updated last year
- TAP: An automated jailbreaking method for black-box LLMs☆194Updated 10 months ago