☆79 · Dec 19, 2024 · Updated last year
Alternatives and similar repositories for PLeak
Users interested in PLeak are comparing it to the repositories listed below.
- ☆25 · Jan 17, 2025 · Updated last year
- ☆129 · Jul 2, 2024 · Updated last year
- Official implementation of "Data Mixture Inference: What do BPE tokenizers reveal about their training data?" ☆18 · May 15, 2025 · Updated 10 months ago
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆576 · Feb 27, 2026 · Updated last month
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆112 · Sep 27, 2024 · Updated last year
- AI fun ☆28 · Feb 27, 2025 · Updated last year
- LobotoMl is a set of scripts and tools to assess production deployments of ML services ☆10 · May 16, 2022 · Updated 3 years ago
- A package that achieves a 95%+ transfer attack success rate against GPT-4 ☆26 · Oct 24, 2024 · Updated last year
- [ICLR 2025] Official codebase for the paper "Multimodal Situational Safety" ☆32 · Jun 23, 2025 · Updated 9 months ago
- Source code for the ACL 2025 paper "Unveiling Privacy Risks in LLM Agent Memory" ☆29 · Dec 2, 2025 · Updated 4 months ago
- A fast + lightweight implementation of the GCG algorithm in PyTorch ☆324 · May 13, 2025 · Updated 11 months ago
- ☆78 · May 28, 2022 · Updated 3 years ago
- [ICCV 2025] Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Gene… ☆37 · Jul 10, 2025 · Updated 9 months ago
- Code for Voice Jailbreak Attacks Against GPT-4o ☆38 · May 31, 2024 · Updated last year
- ☆14 · Mar 9, 2025 · Updated last year
- ☆12 · May 6, 2022 · Updated 3 years ago
- ☆13 · Jun 15, 2024 · Updated last year
- AI Security Research ☆16 · Jun 21, 2023 · Updated 2 years ago
- Benchmarking MIAs against LLMs ☆28 · Oct 8, 2024 · Updated last year
- Code for the experiments in the paper "AutoPenBench: Benchmarking Generative Agents for Penetration Testing" ☆14 · Oct 28, 2025 · Updated 5 months ago
- ☆14 · Mar 3, 2025 · Updated last year
- ☆27 · Jan 15, 2026 · Updated 2 months ago
- ☆14 · Jun 6, 2023 · Updated 2 years ago
- Code repo for the paper "Attacking Vision-Language Computer Agents via Pop-ups" ☆51 · Dec 23, 2024 · Updated last year
- Code for the paper "SafeAgentBench: A Benchmark for Safe Task Planning of Embodied LLM Agents" ☆68 · Feb 25, 2025 · Updated last year
- [KDD Explore '24] Time Series Forecasting with LLMs: Understanding and Enhancing Model Capabilities ☆17 · May 7, 2025 · Updated 11 months ago
- Code for the Findings of ACL 2023 paper "Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Rec…" ☆46 · Jun 3, 2024 · Updated last year
- ☆39 · May 17, 2025 · Updated 10 months ago
- ☆18 · Oct 12, 2022 · Updated 3 years ago
- ☆21 · May 23, 2025 · Updated 10 months ago
- Papers and resources related to the security and privacy of LLMs 🤖 ☆571 · Jun 8, 2025 · Updated 10 months ago
- ☆48 · Jul 14, 2024 · Updated last year
- ☆13 · Dec 8, 2022 · Updated 3 years ago
- [CCS-LAMPS '24] LLM IP Protection Against Model Merging ☆16 · Oct 14, 2024 · Updated last year
- Repo for the paper "Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models" ☆14 · Dec 16, 2024 · Updated last year
- A benchmark for prompt injection attacks and defenses in LLMs ☆426 · Oct 29, 2025 · Updated 5 months ago
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents ☆136 · Feb 19, 2025 · Updated last year
- [EMNLP 2025] Reasoning-to-Defend: Safety-Aware Reasoning Can Defend Large Language Models from Jailbreaking ☆12 · Aug 22, 2025 · Updated 7 months ago
- Universal and Transferable Attacks on Aligned Language Models ☆4,601 · Aug 2, 2024 · Updated last year