rapidresponsebench (☆34, updated Nov 12, 2024)
Alternatives and similar repositories for rapidresponsebench
Users who are interested in rapidresponsebench are comparing it to the libraries listed below.
- [CVPR2025] Official Repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment (☆28, updated Jun 11, 2025)
- Red Queen Dataset and data generation template (☆26, updated Dec 26, 2025)
- The most comprehensive and accurate LLM jailbreak attack benchmark by far (☆21, updated Mar 22, 2025)
- ☆15, updated Jul 24, 2022
- ☆31, updated Sep 23, 2024
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] (☆21, updated Apr 15, 2024)
- ☆39, updated May 21, 2025
- ☆13, updated Sep 12, 2024
- Does Refusal Training in LLMs Generalize to the Past Tense? [ICLR 2025] (☆79, updated Jan 23, 2025)
- ☆18, updated Apr 15, 2024
- ☆12, updated Sep 29, 2024
- Code repository for the blog post "How to Productionize Large Language Models (LLMs)" (☆12, updated Mar 27, 2024)
- Improving Alignment and Robustness with Circuit Breakers (☆261, updated Sep 24, 2024)
- ☆199, updated Nov 26, 2023
- Open Source Replication of Anthropic's Alignment Faking Paper (☆56, updated Apr 4, 2025)
- Code for the paper "On the Adversarial Robustness of Visual Transformers" (☆58, updated Nov 18, 2021)
- ☆10, updated Oct 11, 2022
- [AAAI'26 Oral] Official Implementation of STAR-1: Safer Alignment of Reasoning LLMs with 1K Data (☆33, updated Apr 7, 2025)
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [NeurIPS Spotlight 2020] (☆26, updated Dec 23, 2020)
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning (☆97, updated May 23, 2024)
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting (☆21, updated Mar 25, 2024)
- A fast, lightweight implementation of the GCG algorithm in PyTorch (☆330, updated May 13, 2025)
- Material for the series of seminars on Large Language Models (☆34, updated Apr 21, 2024)
- ☆20, updated Apr 7, 2024
- ☆12, updated Apr 25, 2025
- Auditing agents for fine-tuning safety (☆21, updated Oct 21, 2025)
- An intelligent agent utilizing Large Language Models (LLMs) for automated financial news retrieval and stock price prediction (☆21, updated Sep 9, 2024)
- ☆129, updated Feb 3, 2025
- ☆34, updated Jan 25, 2024
- Implementation of the paper "Defending Large Language Models against Jailbreak Attacks via Semantic Smoothing" (☆24, updated Jun 9, 2024)
- Test equality between a black-box LLM API and a reference distribution (☆13, updated Oct 29, 2024)
- Implementation for "RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content" (☆24, updated Jul 28, 2024)
- ☆14, updated Oct 17, 2024
- LLM-based meme generator with templates (☆14, updated Dec 1, 2025)
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] (☆35, updated Jul 3, 2021)
- ☆25, updated Jun 16, 2024
- Corresponding code to "Improving Robustness of ML Classifiers against Realizable Evasion Attacks Using Conserved Features" @ USENIX Secur… (☆11, updated Aug 5, 2019)
- ☆81, updated Feb 18, 2026
- RAB: Provable Robustness Against Backdoor Attacks (☆39, updated Oct 3, 2023)