☆58 · Jun 13, 2024 · Updated last year
Alternatives and similar repositories for LLM-IHS-Explanation
Users that are interested in LLM-IHS-Explanation are comparing it to the libraries listed below.
- ☆23 · Jun 13, 2024 · Updated last year
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆20 · Oct 2, 2024 · Updated last year
- Official code for ACL 2024 paper "GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis" ☆66 · Oct 27, 2024 · Updated last year
- A resource repository for representation engineering in large language models ☆149 · Nov 14, 2024 · Updated last year
- Official code for ICML 2024 paper on Persona In-Context Learning (PICLe) ☆26 · Jun 27, 2024 · Updated last year
- The official GitHub page for the paper "NegativePrompt: Leveraging Psychology for Large Language Models Enhancement via Negative Emotional St…" ☆25 · May 10, 2024 · Updated last year
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆29 · Jul 9, 2024 · Updated last year
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.…) ☆28 · Sep 25, 2024 · Updated last year
- Code for NeurIPS 2024 paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" ☆22 · May 6, 2025 · Updated 10 months ago
- Your finetuned model is back to its original safety standards faster than you can say "SafetyLock"! ☆11 · Oct 16, 2024 · Updated last year
- DSN jailbreak Attack & Evaluation Ensemble ☆17 · Feb 7, 2026 · Updated last month
- Official implementation of AdvPrompter (https://arxiv.org/abs/2404.16873) ☆181 · May 6, 2024 · Updated last year
- [ICML 2025] Official code for "Reinforced Lifelong Editing for Language Models" ☆21 · Feb 23, 2025 · Updated last year
- [ACL 2025] SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities ☆29 · Apr 2, 2025 · Updated 11 months ago
- ECSO (Make MLLMs safe without either training or any external models!) (https://arxiv.org/abs/2403.09572) ☆35 · Nov 2, 2024 · Updated last year
- [CVPR 2025] Official implementation for "Steering Away from Harm: An Adaptive Approach to Defending Vision Language Model Against Jailbre…" ☆56 · Jul 5, 2025 · Updated 8 months ago
- [CIKM 2024] Trojan Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment ☆29 · Jul 29, 2024 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆220 · May 23, 2024 · Updated last year
- [CVPR 2025] Official repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment ☆27 · Jun 11, 2025 · Updated 9 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆141 · Feb 21, 2025 · Updated last year
- ☆18 · Mar 30, 2025 · Updated 11 months ago
- ☆24 · Apr 20, 2024 · Updated last year
- ☆14 · Feb 24, 2025 · Updated last year
- A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.) ☆1,911 · Mar 16, 2026 · Updated last week
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆108 · May 20, 2025 · Updated 10 months ago
- Code repo of our paper Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis (https://arxiv.org/abs/2406.10794…) ☆24 · Jul 26, 2024 · Updated last year
- Official repository for ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆152 · Jul 19, 2024 · Updated last year
- An implementation for MLLM oversensitivity evaluation ☆18 · Nov 16, 2024 · Updated last year
- Papers and resources related to the security and privacy of LLMs 🤖 ☆567 · Jun 8, 2025 · Updated 9 months ago
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" ☆17 · Mar 31, 2025 · Updated 11 months ago
- ☆18 · Dec 12, 2025 · Updated 3 months ago
- [ICLR 2025] Official codebase for the ICLR 2025 paper "Multimodal Situational Safety" ☆32 · Jun 23, 2025 · Updated 9 months ago
- [MICCAI 2023 Early Accept & MedIA] EyeMost: "Reliable Multimodality Eye Disease Screening via Mixture of Student's t Distributions" ☆27 · Dec 11, 2024 · Updated last year
- ☆33 · Jun 24, 2024 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆131 · Feb 24, 2025 · Updated last year
- The official GitHub repo for "Training Optimal Large Diffusion Language Models", the first-ever large-scale diffusion language models sca… ☆45 · Nov 6, 2025 · Updated 4 months ago
- Tools for optimizing steering vectors in LLMs ☆20 · Apr 10, 2025 · Updated 11 months ago
- ☆28 · Jul 16, 2024 · Updated last year
- Repository for the paper: Refusing Safe Prompts for Multi-modal Large Language Models ☆18 · Oct 16, 2024 · Updated last year