ydyjya / LLM-IHS-Explanation
☆57 · Jun 13, 2024 · Updated last year
Alternatives and similar repositories for LLM-IHS-Explanation
Users interested in LLM-IHS-Explanation are comparing it to the libraries listed below.
- ☆23 · Jun 13, 2024 · Updated last year
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ☆1,769 · Feb 1, 2026 · Updated 2 weeks ago
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆20 · Oct 2, 2024 · Updated last year
- Official code for ICML 2024 paper on Persona In-Context Learning (PICLe) ☆26 · Jun 27, 2024 · Updated last year
- ☆24 · Apr 20, 2024 · Updated last year
- Official Code for ACL 2024 paper "GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis" ☆65 · Oct 27, 2024 · Updated last year
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.… ☆28 · Sep 25, 2024 · Updated last year
- The official GitHub page for the paper "NegativePrompt: Leveraging Psychology for Large Language Models Enhancement via Negative Emotional St… ☆25 · May 10, 2024 · Updated last year
- DSN jailbreak Attack & Evaluation Ensemble ☆16 · Feb 7, 2026 · Updated last week
- Your finetuned model's back to its original safety standards faster than you can say "SafetyLock"! ☆11 · Oct 16, 2024 · Updated last year
- [CIKM 2024] Trojan Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment. ☆29 · Jul 29, 2024 · Updated last year
- A repo for LLM jailbreak ☆14 · Sep 5, 2023 · Updated 2 years ago
- Tools for optimizing steering vectors in LLMs. ☆19 · Apr 10, 2025 · Updated 10 months ago
- Official implementation of AdvPrompter (https://arxiv.org/abs/2404.16873) ☆176 · May 6, 2024 · Updated last year
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆29 · Jul 9, 2024 · Updated last year
- ECSO (Make MLLMs safe without any training or external models!) (https://arxiv.org/abs/2403.09572) ☆36 · Nov 2, 2024 · Updated last year
- Steering vectors for transformer language models in Pytorch / Huggingface ☆140 · Feb 21, 2025 · Updated 11 months ago
- [CVPR 2025] Official implementation for "Steering Away from Harm: An Adaptive Approach to Defending Vision Language Model Against Jailbre… ☆52 · Jul 5, 2025 · Updated 7 months ago
- Repository for the Paper (AAAI 2024, Oral) --- Visual Adversarial Examples Jailbreak Large Language Models ☆266 · May 13, 2024 · Updated last year
- Code and data repository for "The Mirage of Model Editing: Revisiting Evaluation in the Wild" ☆16 · Aug 27, 2025 · Updated 5 months ago
- ☆15 · Jun 11, 2025 · Updated 8 months ago
- ☆20 · Jun 16, 2025 · Updated 7 months ago
- A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.). ☆1,856 · Jan 24, 2026 · Updated 3 weeks ago
- Repository for the Paper: Refusing Safe Prompts for Multi-modal Large Language Models ☆18 · Oct 16, 2024 · Updated last year
- [ICML 2025] Official code for "Reinforced Lifelong Editing for Language Models" ☆21 · Feb 23, 2025 · Updated 11 months ago
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" ☆17 · Mar 31, 2025 · Updated 10 months ago
- The official repository for paper "MLLM-Protector: Ensuring MLLM’s Safety without Hurting Performance" ☆44 · Apr 21, 2024 · Updated last year
- An implementation for MLLM oversensitivity evaluation ☆17 · Nov 16, 2024 · Updated last year
- Papers and resources related to the security and privacy of LLMs 🤖 ☆561 · Jun 8, 2025 · Updated 8 months ago
- ☆44 · Oct 1, 2024 · Updated last year
- [CVPR 2025] Official Repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment ☆27 · Jun 11, 2025 · Updated 8 months ago
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting ☆20 · Mar 25, 2024 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆127 · Feb 24, 2025 · Updated 11 months ago
- ☆16 · May 23, 2023 · Updated 2 years ago
- ☆28 · Jul 16, 2024 · Updated last year
- ☆58 · Aug 11, 2024 · Updated last year
- Code repo of our paper Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis (https://arxiv.org/abs/2406.10794… ☆23 · Jul 26, 2024 · Updated last year
- ☆24 · Jun 17, 2025 · Updated 7 months ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆342 · Jun 13, 2025 · Updated 8 months ago