Code and dataset for the paper: "Can Editing LLMs Inject Harm?"
☆21 · Updated Dec 26, 2025
Alternatives and similar repositories for editing-attack
Users interested in editing-attack are comparing it to the repositories listed below.
- Can Knowledge Editing Really Correct Hallucinations? (ICLR 2025) (☆27, updated Aug 10, 2025)
- Can Large Language Models Identify Authorship? (EMNLP 2024 Findings) (☆12, updated Feb 4, 2025)
- Paper list for the paper "Authorship Attribution in the Era of Large Language Models: Problems, Methodologies, and Challenges (SIGKDD Exp… (☆18, updated Dec 23, 2024)
- Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue (EMNLP 2024) (☆38, updated May 26, 2025)
- ☆23, updated Oct 25, 2024
- ☆24, updated Dec 8, 2024
- [CIKM 2024] Trojan Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment (☆29, updated Jul 29, 2024)
- [NDSS'25] The official implementation of safety misalignment (☆17, updated Jan 8, 2025)
- Code for the paper "Safety Layers in Aligned Large Language Models: The Key to LLM Security" (ICLR 2025) (☆21, updated Apr 26, 2025)
- Transformer-based model for learning authorship representations (☆47, updated Aug 12, 2024)
- Edit Away and My Face Will not Stay: Personal Biometric Defense against Malicious Generative Editing (☆53, updated Dec 17, 2024)
- [CVPR 2025] Official repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment (☆27, updated Jun 11, 2025)
- [FCS'24] LVLM safety paper (☆19, updated Jan 4, 2025)
- Dataset and code for the ICLR 2024 paper "Can LLM-Generated Misinformation Be Detected?" (☆80, updated Nov 9, 2024)
- GitHub repo for the NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" (☆26, updated Dec 21, 2025)
- Traffic accident prediction using graph neural networks: "TAP: A Comprehensive Data Repository for Traffic Accident Prediction in Road Net… (☆55, updated Oct 24, 2024)
- [ICML 2025] X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP (☆37, updated Feb 3, 2026)
- Data for the paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" (☆20, updated Oct 26, 2023)
- The loss landscape of Large Language Models resembles a basin (☆36, updated Jul 8, 2025)
- Code for the paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" (NMI) (☆56, updated Nov 13, 2023)
- Repository introducing research topics on protecting the intellectual property (IP) of AI from a data-centric perspec… (☆23, updated Oct 30, 2023)
- Implementation of the paper "A Little Is Enough: Circumventing Defenses For Distributed Learning" (NeurIPS 2019) (☆28, updated Jun 29, 2023)
- Unofficial implementation of "Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection" (☆26, updated Jul 6, 2024)
- Machine Learning & Security Seminar @ Purdue University (☆25, updated May 9, 2023)
- ☆11, updated Dec 23, 2024
- Re-thinking Federated Active Learning based on Inter-class Diversity (CVPR 2023) (☆32, updated May 31, 2023)
- ☆35, updated Feb 5, 2024
- Awesome Large Reasoning Model (LRM) Safety: a collection of security-related research on large reasoning models such as … (☆82, updated this week)
- DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing (ICLR 2025) (☆43, updated May 18, 2025)
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) (☆33, updated Dec 16, 2022)
- Collection of reverse-engineering work on large models (☆36, updated Jan 8, 2025)
- ☆12, updated May 6, 2022
- ☆32, updated Mar 4, 2022
- [ECCV 2024] Transferable targeted adversarial attack; CLIP models, generative adversarial networks, multi-target attacks (☆38, updated Apr 23, 2025)
- Repository on safety topics, including attacks, defenses, and studies related to reasoning and RL (☆61, updated Sep 5, 2025)
- Repository for the paper (AAAI 2024, Oral) "Visual Adversarial Examples Jailbreak Large Language Models" (☆266, updated May 13, 2024)
- On the Robustness of GUI Grounding Models Against Image Attacks (☆12, updated Apr 8, 2025)
- A calculator for a mechanical design course project; computes geometric, force, and kinematic parameters for motors, transmission systems, V-belt pulleys, gears, shafts, and bearings (☆18, updated Jan 5, 2023)
- [NDSS 2025] Official code for the paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate… (☆45, updated Nov 5, 2024)