Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting
☆20 · Mar 25, 2024 · Updated last year
Alternatives and similar repositories for SafeLLM_with_IntentionAnalysis
Users that are interested in SafeLLM_with_IntentionAnalysis are comparing it to the libraries listed below
- The repo for the paper: Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models. ☆13 · Dec 16, 2024 · Updated last year
- Your finetuned model's back to its original safety standards faster than you can say "SafetyLock"! ☆11 · Oct 16, 2024 · Updated last year
- Fine-tuning base models to build robust task-specific models ☆34 · Apr 11, 2024 · Updated last year
- The implementation for our paper, "Improving Simultaneous Machine Translation with Monolingual Data," accepted to AAAI 2023. 🎉 ☆12 · Jul 19, 2023 · Updated 2 years ago
- Source code of the ACL 2023 Main Conference paper "PAD-Net: An Efficient Framework for Dynamic Networks". ☆11 · Updated this week
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · May 23, 2024 · Updated last year
- ☆28 · Mar 20, 2024 · Updated last year
- The most comprehensive and accurate LLM jailbreak attack benchmark by far ☆22 · Mar 22, 2025 · Updated 11 months ago
- [CVPR2025] Official Repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment ☆27 · Jun 11, 2025 · Updated 8 months ago
- An easy-to-use Python framework to defend against jailbreak prompts. ☆21 · Mar 22, 2025 · Updated 11 months ago
- Code for the CVPR 2020 article "Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization" ☆13 · Jul 13, 2020 · Updated 5 years ago
- ☆196 · Nov 26, 2023 · Updated 2 years ago
- ☆24 · Jun 17, 2025 · Updated 8 months ago
- Code for Retrieval-Augmented Perception (ICML 2025) ☆68 · Aug 10, 2025 · Updated 6 months ago
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) ☆163 · Nov 30, 2024 · Updated last year
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" ☆61 · Aug 8, 2024 · Updated last year
- ☆121 · Feb 3, 2025 · Updated last year
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆65 · Jan 11, 2025 · Updated last year
- PyTorch implementation of "Divide, Conquer and Combine: A Training-Free Framework for High-Resolution Image Perception in Multimodal Larg…" ☆40 · Dec 5, 2025 · Updated 2 months ago
- Official Code for "Baseline Defenses for Adversarial Attacks Against Aligned Language Models" ☆31 · Oct 26, 2023 · Updated 2 years ago
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state. ☆72 · May 22, 2025 · Updated 9 months ago
- Code for the paper "Defending against LLM Jailbreaking via Backtranslation" ☆34 · Aug 16, 2024 · Updated last year
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … ☆59 · Sep 11, 2025 · Updated 5 months ago
- ☆12 · May 6, 2022 · Updated 3 years ago
- A Framework for Evaluating AI Agent Safety in Realistic Environments ☆30 · Oct 2, 2025 · Updated 5 months ago
- Math24o: High School Olympiad Mathematics Chinese Benchmark ☆11 · Mar 27, 2025 · Updated 11 months ago
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆99 · Mar 7, 2024 · Updated last year
- [MQM-APE] Toward High-Quality Error Annotation Predictors with Automatic Post-Editing in LLM Translation Evaluators. ☆11 · Sep 24, 2024 · Updated last year
- Code for FAB-attack ☆34 · Jul 10, 2020 · Updated 5 years ago
- Code/Models for "Defending Against Universal Attacks Through Selective Feature Regeneration", CVPR 2020 ☆10 · Jul 31, 2020 · Updated 5 years ago
- ☆12 · Dec 22, 2025 · Updated 2 months ago
- Tool for testing IPv4 and IPv6 DHCP services ☆13 · Mar 27, 2020 · Updated 5 years ago
- Code & Data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆109 · Sep 27, 2024 · Updated last year
- Official implementation of AdvPrompter (https://arxiv.org/abs/2404.16873) ☆179 · May 6, 2024 · Updated last year
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Oct 3, 2023 · Updated 2 years ago
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆108 · Dec 2, 2024 · Updated last year
- ☆39 · May 21, 2024 · Updated last year
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆44 · Feb 18, 2026 · Updated last week
- ☆20 · Feb 3, 2025 · Updated last year