Alternatives and similar repositories for JailGuard
Users interested in JailGuard are comparing it to the repositories listed below.
- Implementation of the Implicit Knowledge Extraction Attack (☆20, updated May 28, 2025)
- Röttger et al. (2025): "MSTS: A Multimodal Safety Test Suite for Vision-Language Models" (☆16, updated Mar 31, 2025)
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning (☆32, updated Oct 10, 2022)
- Code for the paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" in NMI (☆57, updated Nov 13, 2023)
- [AISTATS 2025] Official implementation of "Adversarial Vulnerabilities in Large Language Models for Time Series Forecasting" (☆15, updated Apr 30, 2025)
- Automatically updates LLM papers daily using GitHub Actions. Ref: https://github.com/Vincentqyw/cv-arxiv-daily (☆10, updated this week)
- Code for the ICLR 2025 paper "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" (☆37, updated Jun 1, 2025)
- Data for the paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" (☆20, updated Oct 26, 2023)
- On the Robustness of GUI Grounding Models Against Image Attacks (☆12, updated Apr 8, 2025)
- [ICLR 2025] FLAT: LLM Unlearning via Loss Adjustment with Only Forget Data (☆14, updated Feb 26, 2025)
- [ECCV 2024] Official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi…" (☆72, updated Feb 9, 2026)
- Implementation of "FoldMark: Safeguarding Protein Structure Generative Models with Distributional and Evolutionary Watermarking" (☆24, updated Jul 3, 2025)
- Source code of "Leaky Thoughts: Large Reasoning Models Are Not Private Thinkers", EMNLP 2025 (☆17, updated Jan 12, 2026)
- Effective Prompt Extraction from Language Models (☆34, updated Sep 10, 2024)
- Official code for the ACL 2024 paper "GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis" (☆66, updated Oct 27, 2024)
- 😎 An up-to-date, curated list of papers, methods, and resources on attacks against large vision-language models (☆517, updated last week)
- Implementation of the paper "Defending Large Language Models against Jailbreak Attacks via Semantic Smoothing" (☆23, updated Jun 9, 2024)
- Implementation for the paper "AquaLoRA: Toward White-box Protection for Customized Stable Diffusion Models v…" (☆58, updated Sep 2, 2024)
- [AAAI 2025] Official code of the paper "InverseCoder: Unleashing the Power of Instruction-Tuned Code LLMs with Inverse-Instruct" (http… (☆14, updated Jul 10, 2024)
- Code for identifying natural backdoors in existing image datasets (☆15, updated Aug 24, 2022)
- Focused Papers, Delivered Simply :) (☆52, updated Dec 25, 2025)
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks (☆32, updated Jul 9, 2024)
- BackTime: Backdoor Attacks on Multivariate Time Series Forecasting (☆31, updated Apr 14, 2025)
- Official implementation of the paper "Black-box Dataset Ownership Verification via Backdoor Watermarking" (☆26, updated Jul 22, 2023)
- A web app that learns to repair your command-line mistakes (☆15, updated Jan 13, 2017)
- Code for the NeurIPS 2024 paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" (☆22, updated May 6, 2025)