Official Code for ACL 2024 paper "GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis"
☆66 · Updated Oct 27, 2024
Alternatives and similar repositories for GradSafe
Users interested in GradSafe are comparing it to the repositories listed below.
- ☆13 · Updated Jun 4, 2025
- Official code for ICCV 2023 paper "Learning Unified Decompositional and Compositional NeRF for Editable Novel View Synthesis" · ☆34 · Updated Dec 27, 2023
- Code for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" in NMI · ☆56 · Updated Nov 13, 2023
- Official repository for ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" · ☆151 · Updated Jul 19, 2024
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" · ☆62 · Updated Aug 23, 2024
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks · ☆32 · Updated Jul 9, 2024
- Official repository for the paper "MLLM-Protector: Ensuring MLLM’s Safety without Hurting Performance" · ☆45 · Updated Apr 21, 2024
- Official repo of the paper "Latent Guard: a Safety Framework for Text-to-image Generation" · ☆53 · Updated Oct 24, 2024
- Official code for ICML 2024 paper "FedREDefense: Defending against Model Poisoning Attacks for Federated Learning using Model Update Recons… · ☆29 · Updated Jun 6, 2024
- Official repository of Personalized Visual Instruct Tuning · ☆34 · Updated Mar 6, 2025
- ☆20 · Updated May 26, 2020
- ☆27 · Updated Apr 11, 2023
- ☆58 · Updated Jun 13, 2024
- ☆124 · Updated Feb 3, 2025
- ☆10 · Updated Oct 31, 2022
- Implementation of "RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content" · ☆23 · Updated Jul 28, 2024
- Implementation of the paper "Defending Large Language Models against Jailbreak Attacks via Semantic Smoothing" · ☆23 · Updated Jun 9, 2024
- ☆25 · Updated Mar 16, 2025
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" · ☆107 · Updated May 20, 2025
- Code for NeurIPS 2024 paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" · ☆22 · Updated May 6, 2025
- ☆40 · Updated May 17, 2025
- Official code for CVPR 2026 paper "VGGT-Det: Mining VGGT Internal Priors for Sensor-Geometry-Free Multi-View Indoor 3D Object Detection" · ☆84 · Updated Mar 3, 2026
- Benchmark evaluation code for "SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal" (ICLR 2025) · ☆77 · Updated Mar 1, 2025
- [ACL 2024] Official repo of the paper "ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs" · ☆94 · Updated Aug 15, 2025
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal · ☆879 · Updated Aug 16, 2024
- Official implementation repository for the paper "Towards General Conceptual Model Editing via Adversarial Representation Engineering" · ☆19 · Updated Dec 6, 2024
- Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability · ☆18 · Updated Jan 14, 2025
- Code for our CVPR 2024 paper "DiffusionMTL: Learning Multi-Task Denoising Diffusion Model from Partially Annotated Data" · ☆59 · Updated Mar 25, 2024
- Official code for ECCV 2024 paper "GScream: Learning 3D Geometry and Feature Consistent Gaussian Splatting for Object Removal" · ☆104 · Updated Nov 25, 2025
- A collection of LLM safety resources and study materials · ☆25 · Updated Jun 28, 2024
- Code for Findings of EMNLP 2023 paper "Multi-step Jailbreaking Privacy Attacks on ChatGPT" · ☆36 · Updated Oct 15, 2023
- Official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" · ☆17 · Updated Feb 22, 2024
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state · ☆72 · Updated May 22, 2025
- Official GitHub repo of G-LLaVA · ☆148 · Updated Feb 20, 2025
- Code implementation of MuScleLoRA (accepted at ACL 2024) · ☆10 · Updated Dec 1, 2024
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… · ☆345 · Updated Feb 23, 2024
- ☆48 · Updated Feb 8, 2025
- ☆44 · Updated Oct 1, 2024
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" · ☆62 · Updated Aug 8, 2024