Code and dataset for the paper: "Can Editing LLMs Inject Harm?"
☆21 · Updated Dec 26, 2025
Alternatives and similar repositories for editing-attack
Users interested in editing-attack are comparing it to the repositories listed below.
- Can Knowledge Editing Really Correct Hallucinations? (ICLR 2025) ☆27 · Updated Aug 10, 2025
- Can Large Language Models Identify Authorship? (EMNLP 2024 Findings) ☆12 · Updated Feb 4, 2025
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆38 · Updated May 26, 2025
- ☆24 · Updated Dec 8, 2024
- [CIKM 2024] Trojan Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment ☆29 · Updated Jul 29, 2024
- ☆14 · Updated Feb 26, 2025
- Paper list for the survey "Combating Misinformation in the Age of LLMs: Opportunities and Challenges" and the initiative "LLMs Meet Misin… ☆106 · Updated Nov 9, 2024
- ☆23 · Updated Oct 25, 2024
- GitHub repo for the NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆27 · Updated Dec 21, 2025
- Code for the paper "Safety Layers in Aligned Large Language Models: The Key to LLM Security" (ICLR 2025) ☆22 · Updated Apr 26, 2025
- [NDSS'25] The official implementation of safety misalignment ☆17 · Updated Jan 8, 2025
- The dataset and code for the ICLR 2024 paper "Can LLM-Generated Misinformation Be Detected?" ☆81 · Updated Nov 9, 2024
- ☆37 · Updated Oct 17, 2024
- Precision Knowledge Editing (PKE): A novel method to reduce toxicity in LLMs while preserving performance, with robust evaluations and ha… ☆11 · Updated Nov 26, 2024
- Transformer-based model for learning authorship representations ☆47 · Updated Aug 12, 2024
- [APSIPA ASC 2023] The official code of the paper "FactLLaMA: Optimizing Instruction-Following Language Models with External Knowledge for Au… ☆17 · Updated Mar 7, 2024
- Repository for the paper "Refusing Safe Prompts for Multi-modal Large Language Models" ☆18 · Updated Oct 16, 2024
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆70 · Updated Nov 7, 2025
- [CVPR 2025] Official repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment ☆27 · Updated Jun 11, 2025
- Unofficial implementation of "Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection" ☆27 · Updated Jul 6, 2024
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆19 · Updated Feb 18, 2025
- Code for the paper "Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM" ☆14 · Updated Nov 17, 2023
- Data for the paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" ☆20 · Updated Oct 26, 2023
- Code for the NAACL 2024 HCI+NLP Workshop paper "LLMCheckup: Conversational Examination of Large Language Models via Interpretability Tool… ☆13 · Updated Mar 24, 2024
- ☆22 · Updated Sep 5, 2025
- Re-thinking Federated Active Learning based on Inter-class Diversity (CVPR 2023) ☆32 · Updated May 31, 2023
- Repository introducing research topics related to protecting intellectual property (IP) of AI from a data-centric perspec… ☆23 · Updated Oct 30, 2023
- [FCS'24] LVLM safety paper ☆19 · Updated Jan 4, 2025
- [ICME 2019] Source code and datasets for "Semi-supervised Compatibility Learning Across Categories for Clothing Matching" ☆10 · Updated Apr 26, 2024
- Code and datasets for the Salesforce AI Research paper on prompt leakage and multi-turn threats against LLMs ☆21 · Updated Nov 10, 2025
- Official code for the FAccT'21 paper "Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning" https://arxiv.org/abs… ☆13 · Updated Mar 9, 2021
- MEDPlat brings the simplicity and power of low-code technologies to healthcare implementations. MEDPlat is a point-of-care application th… ☆15 · Updated Feb 19, 2026
- [ICML 2025] X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP ☆40 · Updated Feb 3, 2026
- ☆13 · Updated Dec 28, 2024
- [NeurIPS 2024 / ICML 2025] LLM Quantization Attacks ☆49 · Updated Jan 15, 2026
- PyTorch implementation of NPAttack ☆12 · Updated Jul 7, 2020
- Repository for the AAAI 2024 (Oral) paper "Visual Adversarial Examples Jailbreak Large Language Models" ☆269 · Updated May 13, 2024
- ☆16 · Updated Nov 8, 2024
- This project uses the Firefly training framework to fine-tune a LLaMA-2 model on multiple-choice machine reading comprehension (Multiple Choice MRC) tasks, achieving significant improvements. ☆11 · Updated Sep 16, 2023