☆34 · Nov 12, 2024 · Updated last year
Alternatives and similar repositories for rapidresponsebench
Users interested in rapidresponsebench are comparing it to the libraries listed below.
- [CVPR2025] Official Repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment ☆27 · Jun 11, 2025 · Updated 9 months ago
- Red Queen Dataset and data generation template ☆27 · Dec 26, 2025 · Updated 2 months ago
- ☆25 · Sep 3, 2025 · Updated 6 months ago
- ☆15 · Jul 24, 2022 · Updated 3 years ago
- Example agents for the Dreadnode platform ☆24 · Dec 19, 2025 · Updated 3 months ago
- ☆36 · May 21, 2025 · Updated 10 months ago
- ☆31 · Sep 23, 2024 · Updated last year
- Code for the API, workload execution, and agents underlying the LLMail-Inject Adaptive Prompt Injection Challenge ☆21 · Mar 1, 2026 · Updated 3 weeks ago
- Accompanying codebase for neuroscope.io, a website for displaying max activating dataset examples for language model neurons ☆13 · Feb 13, 2023 · Updated 3 years ago
- ☆20 · Jun 16, 2025 · Updated 9 months ago
- ☆13 · Sep 12, 2024 · Updated last year
- Does Refusal Training in LLMs Generalize to the Past Tense? [ICLR 2025] ☆79 · Jan 23, 2025 · Updated last year
- Code Repository for Blog - How to Productionize Large Language Models (LLMs) ☆12 · Mar 27, 2024 · Updated last year
- ☆12 · Sep 29, 2024 · Updated last year
- Improving Alignment and Robustness with Circuit Breakers ☆258 · Sep 24, 2024 · Updated last year
- CVPR'19 experiments with (on-manifold) adversarial examples. ☆43 · Feb 27, 2020 · Updated 6 years ago
- ☆197 · Nov 26, 2023 · Updated 2 years ago
- Open Source Replication of Anthropic's Alignment Faking Paper ☆54 · Apr 4, 2025 · Updated 11 months ago
- Code for the paper "On the Adversarial Robustness of Visual Transformers" ☆59 · Nov 18, 2021 · Updated 4 years ago
- ☆10 · Oct 11, 2022 · Updated 3 years ago
- A tiny, easily hackable implementation of a feature dashboard. ☆16 · Oct 21, 2025 · Updated 5 months ago
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting ☆20 · Mar 25, 2024 · Updated last year
- [AAAI'26 Oral] Official Implementation of STAR-1: Safer Alignment of Reasoning LLMs with 1K Data ☆33 · Apr 7, 2025 · Updated 11 months ago
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses, NeurIPS Spotlight 2020 ☆27 · Dec 23, 2020 · Updated 5 years ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · May 23, 2024 · Updated last year
- A fast + lightweight implementation of the GCG algorithm in PyTorch ☆321 · May 13, 2025 · Updated 10 months ago
- ☆124 · Feb 3, 2025 · Updated last year
- Material for the series of seminars on Large Language Models ☆34 · Apr 21, 2024 · Updated last year
- ☆12 · Apr 25, 2025 · Updated 10 months ago
- Auditing agents for fine-tuning safety ☆20 · Oct 21, 2025 · Updated 5 months ago
- An intelligent agent utilizing Large Language Models (LLMs) for automated financial news retrieval and stock price prediction. ☆21 · Sep 9, 2024 · Updated last year
- ☆34 · Jan 25, 2024 · Updated 2 years ago
- Implementation of the paper "Defending Large Language Models against Jailbreak Attacks via Semantic Smoothing" ☆23 · Jun 9, 2024 · Updated last year
- Official implementation of the WASP web agent security benchmark ☆75 · Aug 12, 2025 · Updated 7 months ago
- Test equality between a black-box LLM API and a reference distribution ☆12 · Oct 29, 2024 · Updated last year
- Implementation for "RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content" ☆23 · Jul 28, 2024 · Updated last year
- ☆79 · Feb 18, 2026 · Updated last month
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] ☆36 · Jul 3, 2021 · Updated 4 years ago
- ☆25 · Jun 16, 2024 · Updated last year