☆58 · Updated Jun 13, 2024
Alternatives and similar repositories for LLM-IHS-Explanation
Users interested in LLM-IHS-Explanation are comparing it to the libraries listed below.
- ☆23 · Updated Jun 13, 2024
- ☆65 · Updated Jun 1, 2025
- A resource repository for representation engineering in large language models · ☆148 · Updated Nov 14, 2024
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" · ☆20 · Updated Oct 2, 2024
- ☆24 · Updated Apr 20, 2024
- Official code for ICML 2024 paper on Persona In-Context Learning (PICLe) · ☆26 · Updated Jun 27, 2024
- Official Code for ACL 2024 paper "GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis" · ☆66 · Updated Oct 27, 2024
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.… · ☆28 · Updated Sep 25, 2024
- The official GitHub page for paper "NegativePrompt: Leveraging Psychology for Large Language Models Enhancement via Negative Emotional St… · ☆25 · Updated May 10, 2024
- DSN jailbreak Attack & Evaluation Ensemble · ☆16 · Updated Feb 7, 2026
- Your finetuned model's back to its original safety standards faster than you can say "SafetyLock"! · ☆11 · Updated Oct 16, 2024
- [CIKM 2024] Trojan Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment · ☆29 · Updated Jul 29, 2024
- ☆14 · Updated Feb 24, 2025
- A repo for LLM jailbreak · ☆14 · Updated Sep 5, 2023
- Tools for optimizing steering vectors in LLMs · ☆20 · Updated Apr 10, 2025
- Official implementation of AdvPrompter (https://arxiv.org/abs/2404.16873) · ☆179 · Updated May 6, 2024
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization · ☆29 · Updated Jul 9, 2024
- ECSO (make MLLMs safe without any training or external models!) (https://arxiv.org/abs/2403.09572) · ☆35 · Updated Nov 2, 2024
- Steering vectors for transformer language models in PyTorch / Hugging Face · ☆140 · Updated Feb 21, 2025
- Steering Llama 2 with Contrastive Activation Addition · ☆213 · Updated May 23, 2024
- [CVPR 2025] Official implementation for "Steering Away from Harm: An Adaptive Approach to Defending Vision Language Model Against Jailbre… · ☆53 · Updated Jul 5, 2025
- Repository for the Paper (AAAI 2024, Oral) --- Visual Adversarial Examples Jailbreak Large Language Models · ☆266 · Updated May 13, 2024
- Code for NeurIPS 2024 Paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" · ☆22 · Updated May 6, 2025
- ☆20 · Updated Jun 16, 2025
- Code and data repository for "The Mirage of Model Editing: Revisiting Evaluation in the Wild" · ☆16 · Updated Aug 27, 2025
- ☆15 · Updated this week
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" · ☆107 · Updated May 20, 2025
- A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.) · ☆1,879 · Updated this week
- Repository for the Paper: Refusing Safe Prompts for Multi-modal Large Language Models · ☆18 · Updated Oct 16, 2024
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" · ☆17 · Updated Mar 31, 2025
- [ICML 2025] Official code for "Reinforced Lifelong Editing for Language Models" · ☆21 · Updated Feb 23, 2025
- The official repository for paper "MLLM-Protector: Ensuring MLLM’s Safety without Hurting Performance" · ☆44 · Updated Apr 21, 2024
- Official Repository for ACL 2024 Paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" · ☆151 · Updated Jul 19, 2024
- ☆25 · Updated Mar 16, 2025
- An implementation for MLLM oversensitivity evaluation · ☆17 · Updated Nov 16, 2024
- ☆44 · Updated Oct 1, 2024
- [CVPR 2025] Official Repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment · ☆27 · Updated Jun 11, 2025
- Towards safe LLMs with our simple yet highly effective Intention Analysis Prompting · ☆20 · Updated Mar 25, 2024
- ☆16 · Updated May 23, 2023