☆79 · Updated Dec 19, 2024
Alternatives and similar repositories for PLeak
Users that are interested in PLeak are comparing it to the libraries listed below.
- ☆25 · Updated Jan 17, 2025
- ☆135 · Updated Jul 2, 2024
- Official implementation of "Data Mixture Inference: What do BPE tokenizers reveal about their training data?" ☆18 · Updated May 15, 2025
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆579 · Updated Feb 27, 2026
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆112 · Updated Sep 27, 2024
- Effective Prompt Extraction from Language Models ☆40 · Updated Sep 10, 2024
- A package that achieves a 95%+ transfer attack success rate against GPT-4 ☆26 · Updated Oct 24, 2024
- LobotoMl is a set of scripts and tools to assess production deployments of ML services ☆10 · Updated May 16, 2022
- Consuming Resource via Auto-generation for LLM-DoS Attack under Black-box Settings ☆20 · Updated Sep 1, 2025
- Official codebase for the ICLR 2025 paper "Multimodal Situational Safety" ☆33 · Updated Jun 23, 2025
- Source code for the ACL 2025 paper "Unveiling Privacy Risks in LLM Agent Memory" ☆30 · Updated Dec 2, 2025
- ☆79 · Updated May 28, 2022
- A fast, lightweight implementation of the GCG algorithm in PyTorch ☆331 · Updated May 13, 2025
- [ICCV 2025] Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Gene… ☆36 · Updated Jul 10, 2025
- Code for Voice Jailbreak Attacks Against GPT-4o ☆38 · Updated May 31, 2024
- [USENIX Security 2025] SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks ☆20 · Updated Sep 18, 2025
- ☆12 · Updated May 6, 2022
- ☆13 · Updated Jun 15, 2024
- Benchmarking MIAs against LLMs ☆28 · Updated Oct 8, 2024
- Code for the experiments in the paper "AutoPenBench: Benchmarking Generative Agents for Penetration Testing" ☆16 · Updated Oct 28, 2025
- ☆15 · Updated Mar 3, 2025
- ☆31 · Updated Jan 15, 2026
- ☆14 · Updated Jun 6, 2023
- Code repo for the paper "Attacking Vision-Language Computer Agents via Pop-ups" ☆51 · Updated Dec 23, 2024
- Code for the paper "SafeAgentBench: A Benchmark for Safe Task Planning of Embodied LLM Agents" ☆69 · Updated Feb 25, 2025
- ☆14 · Updated Feb 26, 2025
- [KDD Explore '24] Time Series Forecasting with LLMs: Understanding and Enhancing Model Capabilities ☆17 · Updated May 7, 2025
- ☆18 · Updated Oct 12, 2022
- Official repository for the ICLR 2025 paper "BadRobot: Manipulating Embodied LLMs in the Physical World" ☆43 · Updated Jun 26, 2025
- ☆22 · Updated May 23, 2025
- ☆40 · Updated May 17, 2025
- Papers and resources related to the security and privacy of LLMs 🤖 ☆576 · Updated Jun 8, 2025
- ☆48 · Updated Jul 14, 2024
- ☆13 · Updated Dec 8, 2022
- Repo for the paper "Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models" ☆14 · Updated Dec 16, 2024
- [EMNLP 2025] Reasoning-to-Defend: Safety-Aware Reasoning Can Defend Large Language Models from Jailbreaking ☆12 · Updated Aug 22, 2025
- Universal and Transferable Attacks on Aligned Language Models ☆4,638 · Updated Aug 2, 2024
- ☆11 · Updated Jan 2, 2020
- [ACM MM 2024] ReToMe-VA: Recursive Token Merging for Video Diffusion-based Unrestricted Adversarial Attack ☆14 · Updated Dec 20, 2024