Code and dataset for the paper: "Can Editing LLMs Inject Harm?"
☆21 · Updated Dec 26, 2025
Alternatives and similar repositories for editing-attack
Users that are interested in editing-attack are comparing it to the libraries listed below.
- Can Knowledge Editing Really Correct Hallucinations? (ICLR 2025) ☆27 · Updated Aug 10, 2025
- Can Large Language Models Identify Authorship? (EMNLP 2024 Findings) ☆13 · Updated Feb 4, 2025
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆38 · Updated May 26, 2025
- ☆24 · Updated Dec 8, 2024
- [CIKM 2024] Trojan Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment ☆30 · Updated Jul 29, 2024
- ☆14 · Updated Feb 26, 2025
- Traffic accident prediction using graph neural networks: "TAP: A Comprehensive Data Repository for Traffic Accident Prediction in Road Net…" ☆55 · Updated Oct 24, 2024
- Paper list for the survey "Combating Misinformation in the Age of LLMs: Opportunities and Challenges" and the initiative "LLMs Meet Misin…" ☆106 · Updated Nov 9, 2024
- ☆23 · Updated Oct 25, 2024
- Code for the paper "Safety Layers in Aligned Large Language Models: The Key to LLM Security" (ICLR 2025) ☆23 · Updated Apr 26, 2025
- [NDSS'25] The official implementation of safety misalignment ☆18 · Updated Jan 8, 2025
- ☆16 · Updated Jul 21, 2022
- ☆37 · Updated Oct 17, 2024
- ☆21 · Updated Mar 18, 2026
- Code for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" in NMI ☆58 · Updated Nov 13, 2023
- [APSIPA ASC 2023] Official code for the paper "FactLLaMA: Optimizing Instruction-Following Language Models with External Knowledge for Au…" ☆17 · Updated Mar 7, 2024
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆70 · Updated Nov 7, 2025
- GitHub repo for the ICML 2022 paper: Communication-Efficient Adaptive Federated Learning ☆10 · Updated Nov 18, 2022
- [CVPR 2025] Official repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment ☆27 · Updated Jun 11, 2025
- Unofficial implementation of "Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection" ☆27 · Updated Jul 6, 2024
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆19 · Updated Feb 18, 2025
- Code for the paper "Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM" ☆14 · Updated Nov 17, 2023
- Data for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" ☆20 · Updated Oct 26, 2023
- Code for the NAACL 2024 HCI+NLP Workshop paper "LLMCheckup: Conversational Examination of Large Language Models via Interpretability Tool…" ☆13 · Updated Mar 24, 2024
- ☆22 · Updated Sep 5, 2025
- Re-thinking Federated Active Learning based on Inter-class Diversity (CVPR 2023) ☆32 · Updated May 31, 2023
- [AAAI 2024] MELO: Enhancing Model Editing with Neuron-indexed Dynamic LoRA ☆28 · Updated Apr 9, 2024
- Repository introducing research topics related to protecting the intellectual property (IP) of AI from a data-centric perspec… ☆23 · Updated Oct 30, 2023
- A calculator for a mechanical design course project; computes geometric, force, and kinematic parameters for motors, transmission systems, V-belt pulleys, gears, shafts, and bearings ☆18 · Updated Jan 5, 2023
- ☆32 · Updated Mar 4, 2022
- ☆10 · Updated Jul 13, 2024
- [FCS'24] LVLM safety paper ☆19 · Updated Jan 4, 2025
- ☆14 · Updated Jul 24, 2024
- [ICME 2019] Source code and datasets for "Semi-supervised Compatibility Learning Across Categories for Clothing Matching" ☆10 · Updated Apr 26, 2024
- Code and datasets for the Salesforce AI Research paper on prompt leakage and multi-turn threats against LLMs ☆22 · Updated Nov 10, 2025
- [ECCV 2024] Transferable targeted adversarial attacks, CLIP models, generative adversarial networks, multi-target attacks ☆38 · Updated Apr 23, 2025
- Official code for the FAccT'21 paper "Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning" https://arxiv.org/abs… ☆13 · Updated Mar 9, 2021
- [ICML 2025] X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP ☆43 · Updated Feb 3, 2026
- [NeurIPS 2024 / ICML 2025] LLM Quantization Attacks ☆48 · Updated Jan 15, 2026