Towards safe LLMs with our simple yet highly effective Intention Analysis prompting
☆20, updated Mar 25, 2024
Alternatives and similar repositories for SafeLLM_with_IntentionAnalysis
Users interested in SafeLLM_with_IntentionAnalysis are comparing it to the repositories listed below.
- Red Queen dataset and data generation template (☆27, updated Dec 26, 2025)
- Your fine-tuned model is back to its original safety standards faster than you can say "SafetyLock"! (☆11, updated Oct 16, 2024)
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning (☆98, updated May 23, 2024)
- Fine-tuning base models to build robust task-specific models (☆34, updated Apr 11, 2024)
- The most comprehensive and accurate LLM jailbreak attack benchmark by far (☆22, updated Mar 22, 2025)
- [CVPR 2025] Official repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment (☆27, updated Jun 11, 2025)
- ☆197, updated Nov 26, 2023
- Source code of the ACL 2023 main-conference paper "PAD-Net: An Efficient Framework for Dynamic Networks" (☆11, updated Feb 28, 2026)
- Implementation for our AAAI 2023 paper "Improving Simultaneous Machine Translation with Monolingual Data" 🎉 (☆12, updated Jul 19, 2023)
- ☆28, updated Mar 20, 2024
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state (☆72, updated May 22, 2025)
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) (☆65, updated Jan 11, 2025)
- ☆25, updated Jun 17, 2025
- Implementation for the EACL 2024 paper "Corpus-Steered Query Expansion with Large Language Models" (☆12, updated Mar 19, 2024)
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) (☆160, updated Nov 30, 2024)
- ☆60, updated Mar 9, 2023
- ☆124, updated Feb 3, 2025
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" (☆101, updated Mar 7, 2024)
- Official code for "Baseline Defenses for Adversarial Attacks Against Aligned Language Models" (☆31, updated Oct 26, 2023)
- Implementation for the ACL 2024 paper "Meta-Task Prompting Elicits Embeddings from Large Language Models" (☆12, updated Jul 25, 2024)
- ☆39, updated May 21, 2024
- MoE-Visualizer: a tool for visualizing expert selection in Mixture-of-Experts (MoE) models (☆16, updated Apr 8, 2025)
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey (☆110, updated Aug 7, 2024)
- PyTorch code for FedHyper (☆11, updated Aug 28, 2024)
- Code for the CVPR 2020 paper "Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization" (☆13, updated Jul 13, 2020)
- [ISSTA'24] A Large-Scale Dataset Capable of Enhancing the Prowess of Large Language Models for Program Testing (☆12, updated Jan 7, 2025)
- Andersen points-to analysis implementation based on LLVM (☆12, updated Jan 3, 2021)
- Correlate NVD datasets with CWE/CAPEC/CVSS labels for customised usage, plus static analysis and data visualisation (☆13, updated Nov 17, 2023)
- A Framework for Evaluating AI Agent Safety in Realistic Environments (☆30, updated Oct 2, 2025)
- Math24o: a Chinese benchmark of high-school mathematical-olympiad competition problems (☆11, updated Mar 27, 2025)
- Implementation of the paper "Defending Large Language Models against Jailbreak Attacks via Semantic Smoothing" (☆23, updated Jun 9, 2024)
- [ICSE 2023] Differentiable interpretation and failure-inducing input generation for neural-network numerical bugs (☆13, updated Jan 5, 2024)
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn Attacker for LLMs (☆39, updated Jan 17, 2025)
- Official implementation of the paper "Rethinking Pruning for Vision-Language Models: Strategies for Effective Sparsity" (☆15, updated Jul 2, 2024)
- [ICML 2024] Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning (☆21, updated May 2, 2024)
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a…" (☆113, updated Oct 11, 2024)
- [Tool] AutoRec (2015) PyTorch implementation (☆10, updated Mar 1, 2020)
- Implementation for "RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content" (☆23, updated Jul 28, 2024)
- ☆40, updated Aug 10, 2024