☆19 · May 14, 2025 · Updated 9 months ago
Alternatives and similar repositories for Foot-in-the-door-Jailbreak
Users interested in Foot-in-the-door-Jailbreak are comparing it to the repositories listed below.
Sorting:
- ☆121 · Feb 3, 2025 · Updated last year
- Welcome to the official repository for Siren, a project aimed at understanding and mitigating harmful behaviors in large language models … ☆15 · Sep 12, 2025 · Updated 5 months ago
- Code space of paper "Safety Layers in Aligned Large Language Models: The Key to LLM Security" (ICLR 2025) ☆21 · Apr 26, 2025 · Updated 10 months ago
- [EMNLP 2025] The code repo of paper "X-Boundary: Establishing Exact Safety Boundary to Shield LLMs from Multi-Turn Jailbreaks without Com… ☆39 · Nov 24, 2025 · Updated 3 months ago
- Code for the safety test in "Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates" ☆22 · Sep 21, 2025 · Updated 5 months ago
- ☆21 · Jul 26, 2025 · Updated 7 months ago
- Code and data for our paper "Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark" … ☆51 · Jul 11, 2023 · Updated 2 years ago
- 🥇 Amazon Nova AI Challenge Winner - ASTRA emerged victorious as the top attacking team in Amazon's global AI safety competition, defeati… ☆70 · Aug 14, 2025 · Updated 6 months ago
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" ☆61 · Aug 8, 2024 · Updated last year
- Official implementation of paper: DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers ☆66 · Aug 25, 2024 · Updated last year
- This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba… ☆36 · Mar 22, 2025 · Updated 11 months ago
- ☆118 · Dec 3, 2025 · Updated 2 months ago
- Code for the AAAI 2023 paper "CodeAttack: Code-based Adversarial Attacks for Pre-Trained Programming Language Models" ☆35 · Apr 18, 2023 · Updated 2 years ago
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM ☆39 · Jan 17, 2025 · Updated last year
- ☆56 · May 21, 2025 · Updated 9 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆89 · Mar 30, 2025 · Updated 10 months ago
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆90 · May 2, 2025 · Updated 9 months ago
- A curated list of 150+ papers and resources on Agentic Security. Based on the survey covering the transition from passive LLMs to autonom… ☆28 · Dec 6, 2025 · Updated 2 months ago
- Identification of the Adversary from a Single Adversarial Example (ICML 2023) ☆10 · Jul 15, 2024 · Updated last year
- ☆12 · Oct 29, 2023 · Updated 2 years ago
- [EMNLP 2024 Findings] Wrong-of-Thought: An Integrated Reasoning Framework with Multi-Perspective Verification and Wrong Information ☆13 · Oct 1, 2024 · Updated last year
- ICCV 2021. We find most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… ☆48 · Apr 27, 2022 · Updated 3 years ago
- The repo for using the model https://huggingface.co/thu-coai/Attacker-v0.1 ☆13 · Apr 23, 2025 · Updated 10 months ago
- A collection of quick-start guides for crypto newcomers, including a comprehensive set of crypto and blockchain resources, navigation for various tools, a quick introduction to common crypto terms and jargon, and a detailed anti-scam guide to help you avoid various risks ☆19 · Feb 10, 2026 · Updated 2 weeks ago
- Code for paper "Concrete Subspace Learning based Interference Elimination for Multi-task Model Fusion" ☆14 · Mar 28, 2024 · Updated last year
- [NeurIPS 2025@FoRLM] R1-Compress: Long Chain-of-Thought Compression via Chunk Compression and Search ☆17 · Jan 24, 2026 · Updated last month
- 102.32.01 LuaDev integration for VS Code. ☆12 · Sep 1, 2016 · Updated 9 years ago
- [NeurIPS 2024 D&B] DetectRL: Benchmarking LLM-Generated Text Detection in Real-World Scenarios ☆14 · Nov 19, 2024 · Updated last year
- Adversarial Attack for Pre-trained Code Models ☆10 · Jul 19, 2022 · Updated 3 years ago
- [ICML 2025] Speak Easy: Eliciting Harmful Jailbreaks from LLMs with Simple Interactions ☆14 · Sep 27, 2025 · Updated 5 months ago
- ☆14 · Feb 26, 2025 · Updated last year
- Pong for the Nintendo Switch written using libnx ☆11 · May 12, 2018 · Updated 7 years ago
- The repo for paper: Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models ☆13 · Dec 16, 2024 · Updated last year
- ☆12 · Jun 22, 2022 · Updated 3 years ago
- Converts a 3DS program's EXEFS to an (IDA-loadable) ELF ☆12 · Apr 13, 2017 · Updated 8 years ago
- Official implementation of implicit reference attack ☆11 · Oct 16, 2024 · Updated last year
- [WSDM 2026] LookAhead Tuning: Safer Language Models via Partial Answer Previews ☆17 · Dec 14, 2025 · Updated 2 months ago
- SourcePawn support in Atom ☆11 · Aug 23, 2016 · Updated 9 years ago
- ☆11 · Nov 12, 2024 · Updated last year