☆18 · Updated Mar 30, 2025
Alternatives and similar repositories for what_features_jailbreak_LLMs
Users interested in what_features_jailbreak_LLMs are comparing it to the libraries listed below.
- ☆23 · Updated Jun 13, 2024
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆365 · Updated Jun 13, 2025
- ☆25 · Updated Sep 3, 2025
- Tools for optimizing steering vectors in LLMs. ☆20 · Updated Apr 10, 2025
- Kim, J., Evans, J., & Schein, A. (2025). Linear Representations of Political Perspective Emerge in Large Language Models. ICLR. ☆25 · Updated Mar 27, 2025
- A toolkit that provides a range of model diffing techniques, including a UI to visualize them interactively. ☆69 · Updated this week
- [ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks ☆31 · Updated Nov 2, 2025
- [COLING 2025] Official repo of the paper: "Not Aligned" is Not "Malicious": Being Careful about Hallucinations of Large Language Models' Jail… ☆12 · Updated Jul 26, 2024
- ☆10 · Updated Jan 20, 2025
- Mixture of LoRA Experts ☆10 · Updated Apr 7, 2024
- Fluent student-teacher redteaming ☆23 · Updated Jul 25, 2024
- Code repo of the paper "Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis" (https://arxiv.org/abs/2406.10794…) ☆24 · Updated Jul 26, 2024
- [ICML 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping". ☆170 · Updated May 2, 2025
- Red Queen Dataset and data generation template ☆27 · Updated Dec 26, 2025
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆175 · Updated Mar 12, 2026
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆65 · Updated Jan 11, 2025
- ☆27 · Updated Oct 6, 2024
- The official repository for the guided jailbreak benchmark ☆29 · Updated Jul 28, 2025
- The official implementation of the preprint "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆70 · Updated Oct 23, 2024
- Synthesizing realistic and diverse text datasets from augmented LLMs ☆16 · Updated Jan 26, 2026
- Exploring Model Kinship for Merging Large Language Models ☆28 · Updated Apr 16, 2025
- Repository of Streaming LLMs ☆39 · Updated this week
- ☆10 · Updated Nov 6, 2024
- ☆28 · Updated Oct 22, 2024
- Code for the paper "EPO: Entropy-regularized Policy Optimization for LLM Agents Reinforcement Learning" ☆37 · Updated Oct 1, 2025
- Open-source replication of Anthropic's Alignment Faking paper ☆54 · Updated Apr 4, 2025
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆92 · Updated May 2, 2025
- [AAAI'26 Oral] Official implementation of STAR-1: Safer Alignment of Reasoning LLMs with 1K Data ☆33 · Updated Apr 7, 2025
- TAP: An automated jailbreaking method for black-box LLMs ☆224 · Updated Dec 10, 2024
- Official code for "Rethinking Chain-of-Thought Reasoning for Videos" ☆20 · Updated Dec 14, 2025
- VideoGPA, a self-supervised framework that enhances 3D consistency in video diffusion models. ☆46 · Updated Mar 16, 2026
- ☆16 · Updated Oct 18, 2023
- A better way of testing, inspecting, and analyzing AI agent traces. ☆51 · Updated Jan 12, 2026
- The official repository for the paper "Time Travel in LLMs: Tracing Data Contamination in Large Language Models". ☆12 · Updated Jun 11, 2024
- Code to reproduce key results accompanying "SAEs (usually) Transfer Between Base and Chat Models" ☆13 · Updated Jul 18, 2024
- [CVPR 2026] FocusUI: Efficient UI Grounding via Position-Preserving Visual Token Selection ☆27 · Updated Feb 10, 2026
- Word template for Beijing University of Posts and Telecommunications bachelor's degree theses ☆18 · Updated Oct 8, 2024
- Auditing agents for fine-tuning safety ☆20 · Updated Oct 21, 2025
- Code for the paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder", published in NMI. ☆57 · Updated Nov 13, 2023