All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks
☆18 · Apr 24, 2024 · Updated last year
Alternatives and similar repositories for simbaja
Users interested in simbaja are comparing it to the repositories listed below.
- Independent robustness evaluation of "Improving Alignment and Robustness with Short Circuiting" ☆17 · Apr 15, 2025 · Updated 11 months ago
- ☆31 · Jul 14, 2023 · Updated 2 years ago
- ☆196 · Nov 26, 2023 · Updated 2 years ago
- A lightweight library for large language model (LLM) jailbreaking defense. ☆61 · Sep 11, 2025 · Updated 6 months ago
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) ☆160 · Nov 30, 2024 · Updated last year
- ☆12 · Dec 22, 2023 · Updated 2 years ago
- ☆28 · Mar 20, 2024 · Updated 2 years ago
- Code of the paper "IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Gene…" ☆35 · May 23, 2024 · Updated last year
- Code for our NeurIPS 2024 paper "Improved Generation of Adversarial Examples Against Safety-aligned LLMs" ☆12 · Nov 7, 2024 · Updated last year
- Code for the NeurIPS 2024 paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" ☆22 · May 6, 2025 · Updated 10 months ago
- Code and data accompanying the Zhu et al. paper "An Objective for Nuanced LLM Jailbreaks" ☆35 · Dec 18, 2024 · Updated last year
- TAP: An automated jailbreaking method for black-box LLMs ☆226 · Dec 10, 2024 · Updated last year
- ☆52 · May 24, 2023 · Updated 2 years ago
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆59 · Oct 1, 2025 · Updated 5 months ago
- ☆33 · Jun 24, 2024 · Updated last year
- [ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" ☆67 · Jun 9, 2025 · Updated 9 months ago
- Code repo of our paper "Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis" (https://arxiv.org/abs/2406.10794…) ☆24 · Jul 26, 2024 · Updated last year
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability ☆176 · Dec 18, 2024 · Updated last year
- Attack AlphaZero Go agents (NeurIPS 2022) ☆22 · Dec 3, 2022 · Updated 3 years ago
- [ICCVW 2025 (Oral)] Robust-LLaVA: On the Effectiveness of Large-Scale Robust Image Encoders for Multi-modal Large Language Models ☆29 · Oct 20, 2025 · Updated 5 months ago
- This is the starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. ☆90 · May 19, 2024 · Updated last year
- ☆165 · Sep 2, 2024 · Updated last year
- ☆39 · May 21, 2024 · Updated last year
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆70 · Oct 23, 2024 · Updated last year
- Repo for the arXiv preprint "Gradient-based Adversarial Attacks against Text Transformers" ☆109 · Dec 28, 2022 · Updated 3 years ago
- CEIA (Contextual Embedding Inversion Attack), a novel adversarial attack method developed by the author. This method is a sophi… ☆16 · Nov 19, 2024 · Updated last year
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆108 · May 20, 2025 · Updated 10 months ago
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ☆82 · Updated this week
- Official code for "Baseline Defenses for Adversarial Attacks Against Aligned Language Models" ☆31 · Oct 26, 2023 · Updated 2 years ago
- [ICLR 2022 official code] Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? ☆29 · Mar 15, 2022 · Updated 4 years ago
- Code to conduct an embedding attack on LLMs ☆31 · Jan 10, 2025 · Updated last year
- Problems and source code from the 2021 Jinan University CTF freshman competition ☆15 · Dec 6, 2021 · Updated 4 years ago
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆47 · Apr 15, 2025 · Updated 11 months ago
- A Framework for Evaluating AI Agent Safety in Realistic Environments ☆30 · Oct 2, 2025 · Updated 5 months ago
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆88 · Feb 26, 2025 · Updated last year
- ☆27 · Jun 5, 2024 · Updated last year
- Community Eventing and Scripting examples ☆19 · Aug 11, 2025 · Updated 7 months ago
- This repository contains the code and data for the paper "SelfIE: Self-Interpretation of Large Language Model Embeddings" by Haozhe Chen,… ☆56 · Dec 9, 2024 · Updated last year
- CTF study notes, competition problems, and write-ups (WP) ☆24 · Dec 9, 2023 · Updated 2 years ago