【ACL 2024】 SALAD benchmark & MD-Judge
☆175 · Updated Mar 8, 2025
Alternatives and similar repositories for SALAD-BENCH
Users who are interested in SALAD-BENCH are comparing it to the repositories listed below.
- [ACL 2025] Data and Code for Paper VLSBench: Unveiling Visual Leakage in Multimodal Safety ☆60 · Updated Jul 21, 2025
- ☆53 · Updated Feb 8, 2025
- [World-Model-Survey-2024] Paper list and projects for World Model ☆15 · Updated Oct 31, 2024
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [EMNLP 2024 Findings] ☆227 · Updated Sep 29, 2024
- ☆11 · Updated Oct 25, 2024
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆202 · Updated Jun 26, 2025
- Does Refusal Training in LLMs Generalize to the Past Tense? [ICLR 2025] ☆79 · Updated Jan 23, 2025
- ☆129 · Updated Feb 3, 2025
- ☆15 · Updated Jun 6, 2024
- LLM evaluation. ☆16 · Updated Nov 7, 2023
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆348 · Updated Feb 23, 2024
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ☆1,833 · Updated Apr 18, 2026
- ☆11 · Updated Nov 12, 2024
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula… ☆103 · Updated Jun 16, 2025
- Benchmark evaluation code for "SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal" (ICLR 2025) ☆79 · Updated Mar 1, 2025
- Official implementation of paper: DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers ☆66 · Updated Aug 25, 2024
- [NeurIPS 2025 Spotlight] Towards Safety Alignment of Vision-Language-Action Model via Constrained Learning. ☆138 · Updated Mar 31, 2026
- Accepted by ECCV 2024 ☆203 · Updated Oct 15, 2024
- S-Eval: Towards Automated and Comprehensive Safety Evaluation for Large Language Models ☆114 · Updated Feb 13, 2026
- ☆19 · Updated Mar 25, 2024
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,840 · Updated Jun 17, 2025
- Diagnostic Framework for LLMs and MLLMs ☆36 · Updated Mar 2, 2026
- JailBench: A Chinese dataset for evaluating jailbreak-attack risks in large language models [PAKDD 2025] ☆174 · Updated Mar 3, 2025
- ICLR 2024 paper. Showing properties of safety tuning and exaggerated safety. ☆93 · Updated May 9, 2024
- ☆45 · Updated Jun 19, 2025
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆579 · Updated Feb 27, 2026
- ☆30 · Updated May 22, 2024
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆582 · Updated Apr 4, 2025
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆119 · Updated Dec 2, 2024
- ☆14 · Updated Jan 6, 2025
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆20 · Updated Oct 2, 2024
- ☆48 · Updated Jul 14, 2024
- [EMNLP 2025] Reasoning-to-Defend: Safety-Aware Reasoning Can Defend Large Language Models from Jailbreaking ☆12 · Updated Aug 22, 2025
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆70 · Updated Oct 23, 2024
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆843 · Updated Mar 30, 2026
- [CVPR 2024] This is the official implementation of MP5 ☆108 · Updated Jun 30, 2024
- A simple evaluation of generative language models and safety classifiers. ☆98 · Updated Apr 15, 2026
- ☆20 · Updated Jul 24, 2024
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,599 · Updated Nov 24, 2025