[NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models
★287 · Updated Mar 13, 2026
Alternatives and similar repositories for BackdoorLLM
Users interested in BackdoorLLM are comparing it to the libraries listed below.
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access — ★55 · Updated Jun 2, 2025
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] — ★112 · Updated Sep 27, 2024
- ★26 · Updated Aug 21, 2024
- [ICLR 2024] Official repo of BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models — ★50 · Updated Jul 24, 2024
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models" (IEEE ICASSP 2024). Demo: //124.220.228.133:11107 — ★21 · Updated Aug 10, 2024
- Safety at Scale: A Comprehensive Survey of Large Model and Agent Safety — ★255 · Updated Mar 18, 2026
- [ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks — ★31 · Updated Nov 2, 2025
- ★14 · Updated Feb 26, 2025
- ICL backdoor attack — ★17 · Updated Nov 4, 2024
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models — ★61 · Updated Apr 8, 2024
- A survey on harmful fine-tuning attacks for large language models (ACM CSUR) — ★238 · Updated Feb 25, 2026
- [ICML 2025] X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP — ★43 · Updated Feb 3, 2026
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" — ★61 · Updated Jan 15, 2025
- ★37 · Updated Oct 17, 2024
- [ICLR 2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining — ★19 · Updated Feb 26, 2025
- Composite Backdoor Attacks Against Large Language Models — ★25 · Updated Apr 12, 2024
- A repository introducing research topics on protecting the intellectual property (IP) of AI from a data-centric perspec… — ★23 · Updated Oct 30, 2023
- Experimental tools to backdoor large language models by rewriting their system prompts at the raw parameter level. This allows you to pote… — ★204 · Updated Oct 5, 2025
- Code for the paper "Membership Inference Attacks Against Vision-Language Models" — ★28 · Updated Jan 25, 2025
- [NeurIPS 2024] Official implementation of "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning" — ★211 · Updated Apr 12, 2025
- Official implementation of the ICLR 2023 paper "Towards Robustness Certification Against Universal Perturbations." We calc… — ★12 · Updated Feb 14, 2023
- ★594 · Updated Jul 4, 2025
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" — ★20 · Updated Aug 9, 2023
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) — ★27 · Updated Nov 18, 2024
- Code for the ICCV 2025 paper "IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves" — ★17 · Updated Jul 11, 2025
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) — ★161 · Updated Nov 30, 2024
- ★27 · Updated Feb 19, 2025
- ★75 · Updated Mar 30, 2025
- Code for Transferable Unlearnable Examples — ★22 · Updated Mar 11, 2023
- A compact toolbox for backdoor attacks and defenses — ★191 · Updated Jul 16, 2024
- A collection of publications on code models that look beyond accuracy — ★13 · Updated Jun 30, 2023
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image — ★36 · Updated Oct 29, 2025
- A reading list for large-model safety, security, and privacy (including Awesome LLM Security, Safety, etc.) — ★1,926 · Updated Apr 2, 2026
- Implementation of the Implicit Knowledge Extraction Attack — ★21 · Updated May 28, 2025
- Open-source red-teaming framework for MLLMs with 42+ attack methods — ★241 · Updated Mar 25, 2026
- Code for the paper "AICrypto: A Comprehensive Benchmark for Evaluating Cryptography Capabilities of Large Language Models" — ★30 · Updated Sep 27, 2025
- Identification of the Adversary from a Single Adversarial Example (ICML 2023) — ★10 · Updated Jul 15, 2024
- The open-source Python toolbox for backdoor attacks and defenses — ★657 · Updated Sep 27, 2025
- Evaluation code for "A Safety Report on GPT-5.2, Gemini 3 Pro, Qwen3-VL, Grok 4.1 Fast, Nano Banana Pro, and Seedream 4.5" — ★53 · Updated Jan 18, 2026