DAMO-NLP-SG / multilingual-safety-for-LLMs
[ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models"
☆97 · Mar 7, 2024 · Updated last year
Alternatives and similar repositories for multilingual-safety-for-LLMs
Users interested in multilingual-safety-for-LLMs are comparing it to the repositories listed below.
- ☆28 · Mar 20, 2024 · Updated last year
- Official Implementation of "Learning to Refuse: Towards Mitigating Privacy Risks in LLMs" ☆10 · Dec 13, 2024 · Updated last year
- ☆19 · Jun 21, 2025 · Updated 7 months ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆34 · Oct 23, 2024 · Updated last year
- The official implementation of our NAACL 2024 paper "A Wolf in Sheep’s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Lang… ☆152 · Sep 2, 2025 · Updated 5 months ago
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆338 · Feb 23, 2024 · Updated last year
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety. ☆93 · May 9, 2024 · Updated last year
- ☆39 · May 17, 2025 · Updated 8 months ago
- ☆193 · Nov 26, 2023 · Updated 2 years ago
- Code for our NeurIPS 2024 paper "Improved Generation of Adversarial Examples Against Safety-aligned LLMs" ☆12 · Nov 7, 2024 · Updated last year
- ☆31 · Aug 9, 2024 · Updated last year
- Implementations of the online merging optimizers proposed in "Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment" ☆81 · Jun 19, 2024 · Updated last year
- Easy-to-Hard Learning for Information Extraction (ACL 2023 Findings) ☆14 · Jul 11, 2023 · Updated 2 years ago
- ☆164 · Sep 2, 2024 · Updated last year
- ☆696 · Jul 2, 2025 · Updated 7 months ago
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆106 · May 20, 2025 · Updated 8 months ago
- ☆27 · Oct 6, 2024 · Updated last year
- ☆52 · Dec 7, 2025 · Updated 2 months ago
- Butler is a tool project for automating service management and task scheduling. ☆15 · Feb 9, 2026 · Updated last week
- ☆44 · Oct 1, 2024 · Updated last year
- ☆47 · Jul 14, 2024 · Updated last year
- ☆121 · Feb 3, 2025 · Updated last year
- ☆109 · Feb 16, 2024 · Updated 2 years ago
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" ☆65 · Aug 25, 2024 · Updated last year
- Code for NeurIPS 2024 paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" ☆22 · May 6, 2025 · Updated 9 months ago
- Code for Voice Jailbreak Attacks Against GPT-4o. ☆36 · May 31, 2024 · Updated last year
- Contrastive Chain-of-Thought Prompting ☆68 · Nov 18, 2023 · Updated 2 years ago
- Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging (arXiv 2024) ☆16 · Oct 28, 2024 · Updated last year
- Official Repository for ACL 2024 Paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆151 · Jul 19, 2024 · Updated last year
- Official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆49 · Jan 15, 2026 · Updated last month
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024) ☆162 · Nov 30, 2024 · Updated last year
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆314 · Jun 7, 2024 · Updated last year
- Jailbreak artifacts for JailbreakBench ☆78 · Nov 6, 2024 · Updated last year
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts. ☆184 · Apr 1, 2025 · Updated 10 months ago
- The most comprehensive and accurate LLM jailbreak attack benchmark by far ☆22 · Mar 22, 2025 · Updated 10 months ago
- Official repo for NeurIPS'24 paper "WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models" ☆18 · Dec 16, 2024 · Updated last year
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting ☆20 · Mar 25, 2024 · Updated last year
- Generated geosite.dat based on Antifilter Community List ☆24 · Feb 8, 2026 · Updated last week
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models" ☆15 · Aug 7, 2025 · Updated 6 months ago