☆37 · Updated Oct 17, 2024
Alternatives and similar repositories for BadEdit
Users interested in BadEdit are comparing it to the libraries listed below.
- ☆14 · Updated Dec 12, 2023
- Unofficial implementation of "Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection" ☆27 · Updated Jul 6, 2024
- This is the official GitHub repo for our paper: "BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Lang… ☆22 · Updated Jul 3, 2024
- Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing ☆14 · Updated Feb 18, 2021
- [USENIX Security 2025] SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks ☆20 · Updated Sep 18, 2025
- ☆11 · Updated Feb 21, 2022
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆53 · Updated Jun 2, 2025
- Code and dataset for the paper: "Can Editing LLMs Inject Harm?" ☆21 · Updated Dec 26, 2025
- ☆24 · Updated Nov 19, 2024
- Code for paper: PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models, IEEE ICASSP 2024. Demo//124.220.228.133:11107 ☆20 · Updated Aug 10, 2024
- Backdoor attacks against large language models ☆12 · Updated Jun 30, 2024
- A toolbox for backdoor attacks. ☆23 · Updated Jan 13, 2023
- ☆25 · Updated Jun 16, 2024
- Composite Backdoor Attacks Against Large Language Models ☆23 · Updated Apr 12, 2024
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers ☆16 · Updated Oct 24, 2024
- ☆18 · Updated Aug 15, 2022
- ☆21 · Updated May 23, 2025
- [NeurIPS 2025] Mask Image Watermarking (Official Implementation) ☆45 · Updated Nov 9, 2025
- ☆72 · Updated Feb 16, 2025
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆29 · Updated Feb 8, 2021
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆60 · Updated Jan 15, 2025
- A list of recent adversarial attack and defense papers (including those on large language models) ☆45 · Updated Jan 25, 2026
- ☆19 · Updated Mar 9, 2024
- [CIKM 2024] Trojan Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment. ☆29 · Updated Jul 29, 2024
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" ☆42 · Updated Jul 8, 2024
- LobotoMl is a set of scripts and tools to assess production deployments of ML services ☆10 · Updated May 16, 2022
- Huazhong University of Science and Technology network security course project: a stateful-inspection firewall on Linux ☆11 · Updated Oct 17, 2022
- Distribution Preserving Backdoor Attack in Self-supervised Learning ☆20 · Updated Jan 27, 2024
- ☆14 · Updated May 28, 2024
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt… ☆43 · Updated Sep 9, 2025
- ☆14 · Updated May 8, 2024
- Example TrojAI Submission ☆27 · Updated Dec 6, 2024
- No description yet ☆11 · Updated May 26, 2023
- ☆13 · Updated Oct 20, 2022
- ☆20 · Updated Jan 6, 2025
- [ICLR24] Official Repo of BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models ☆50 · Updated Jul 24, 2024
- ☆12 · Updated Mar 5, 2024
- Welcome to the official repository for Siren, a project aimed at understanding and mitigating harmful behaviors in large language models … ☆15 · Updated Sep 12, 2025
- VioHawk: Detecting Traffic Violations of Autonomous Driving Systems through Criticality-guided Simulation Testing ☆15 · Updated Aug 5, 2024