☆78 · Updated Dec 19, 2024
Alternatives and similar repositories for PLeak
Users interested in PLeak are comparing it to the libraries listed below.
- ☆25 · Updated Jan 17, 2025
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆573 · Updated Feb 27, 2026
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆112 · Updated Sep 27, 2024
- AI fun ☆27 · Updated Feb 27, 2025
- Effective Prompt Extraction from Language Models ☆34 · Updated Sep 10, 2024
- LobotoMl is a set of scripts and tools to assess production deployments of ML services ☆10 · Updated May 16, 2022
- A package that achieves a 95%+ transfer attack success rate against GPT-4 ☆26 · Updated Oct 24, 2024
- Consuming Resource via Auto-generation for LLM-DoS Attack under Black-box Settings ☆19 · Updated Sep 1, 2025
- ☆78 · Updated May 28, 2022
- [ICCV 2025] Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Gene… ☆36 · Updated Jul 10, 2025
- Code for Voice Jailbreak Attacks Against GPT-4o ☆37 · Updated May 31, 2024
- ☆14 · Updated Mar 9, 2025
- Unofficial recreation of the Iranian hacker group's disk-wiper malware, aka "Shamoon", in .NET 2.0 ☆13 · Updated Dec 23, 2018
- [USENIX Security 2025] SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks ☆20 · Updated Sep 18, 2025
- ☆12 · Updated May 6, 2022
- ☆13 · Updated Jun 15, 2024
- Fine-tuning base models to build robust task-specific models ☆34 · Updated Apr 11, 2024
- ☆76 · Updated Jan 21, 2026
- Benchmarking MIAs against LLMs ☆28 · Updated Oct 8, 2024
- Code for the experiments of the paper "AutoPenBench: Benchmarking Generative Agents for Penetration Testing" ☆14 · Updated Oct 28, 2025
- ☆21 · Updated Jan 15, 2026
- Code repo for the paper "Attacking Vision-Language Computer Agents via Pop-ups" ☆51 · Updated Dec 23, 2024
- Code for the paper "SafeAgentBench: A Benchmark for Safe Task Planning of Embodied LLM Agents" ☆65 · Updated Feb 25, 2025
- ☆14 · Updated Feb 26, 2025
- [KDD Explore '24] Time Series Forecasting with LLMs: Understanding and Enhancing Model Capabilities ☆17 · Updated May 7, 2025
- Code for the Findings-ACL 2023 paper "Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Rec…" ☆48 · Updated Jun 3, 2024
- ☆39 · Updated May 17, 2025
- ☆18 · Updated Oct 12, 2022
- Official repository for the ICLR 2025 paper "BadRobot: Manipulating Embodied LLMs in the Physical World" ☆42 · Updated Jun 26, 2025
- Papers and resources related to the security and privacy of LLMs 🤖 ☆567 · Updated Jun 8, 2025
- ☆21 · Updated May 23, 2025
- ☆48 · Updated Jul 14, 2024
- ☆13 · Updated Dec 8, 2022
- [CCS-LAMPS '24] LLM IP Protection Against Model Merging ☆16 · Updated Oct 14, 2024
- Repo for the paper "Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models" ☆14 · Updated Dec 16, 2024
- A benchmark for prompt injection attacks and defenses in LLMs ☆413 · Updated Oct 29, 2025
- [ICLR 2025] Dissecting Adversarial Robustness of Multimodal Language Model Agents ☆134 · Updated Feb 19, 2025
- Universal and Transferable Attacks on Aligned Language Models ☆4,568 · Updated Aug 2, 2024
- [EMNLP 2025] Reasoning-to-Defend: Safety-Aware Reasoning Can Defend Large Language Models from Jailbreaking ☆12 · Updated Aug 22, 2025