Safety at Scale: A Comprehensive Survey of Large Model and Agent Safety
☆255 · Mar 18, 2026 · Updated 3 weeks ago
Alternatives and similar repositories for Awesome-Large-Model-Safety
Users that are interested in Awesome-Large-Model-Safety are comparing it to the repositories listed below.
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models ☆291 · Mar 13, 2026 · Updated last month
- 😎 Up-to-date & curated list of awesome attacks on Large-Vision-Language-Models: papers, methods & resources ☆531 · Apr 6, 2026 · Updated last week
- ☆28 · Feb 19, 2025 · Updated last year
- ☆24 · Feb 17, 2026 · Updated last month
- ☆14 · Feb 26, 2025 · Updated last year
- [ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks ☆31 · Nov 2, 2025 · Updated 5 months ago
- [NeurIPS 2023] Annual Conference on Neural Information Processing Systems ☆227 · Dec 22, 2024 · Updated last year
- [ICML 2025] X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP ☆44 · Feb 3, 2026 · Updated 2 months ago
- Code for the ICCV 2025 paper IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves ☆17 · Jul 11, 2025 · Updated 9 months ago
- Emoji Attack [ICML 2025] ☆41 · Jul 15, 2025 · Updated 9 months ago
- A curated list of resources dedicated to the safety of Large Vision-Language Models. This repository aligns with our survey titled A Surv… ☆199 · Feb 6, 2026 · Updated 2 months ago
- [arXiv 2025] Denial-of-Service Poisoning Attacks on Large Language Models ☆23 · Oct 22, 2024 · Updated last year
- A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.) ☆1,926 · Apr 2, 2026 · Updated 2 weeks ago
- Awesome Large Reasoning Model (LRM) Safety. This repository is used to collect security-related research on large reasoning models such as … ☆82 · Updated this week
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ☆1,818 · Apr 3, 2026 · Updated last week
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … ☆61 · Apr 2, 2026 · Updated 2 weeks ago
- [CVPR 2025] AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-Language Models ☆70 · Aug 7, 2025 · Updated 8 months ago
- Official repository of "Unnatural Language Are Not Bugs but Features for LLMs" ☆24 · May 20, 2025 · Updated 10 months ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆42 · Jan 25, 2024 · Updated 2 years ago
- Official repository for "On the Multi-modal Vulnerability of Diffusion Models" ☆16 · Jul 15, 2024 · Updated last year
- ☆183 · Oct 31, 2025 · Updated 5 months ago
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training ☆32 · Jan 9, 2022 · Updated 4 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆36 · Oct 29, 2025 · Updated 5 months ago
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers ☆16 · Oct 24, 2024 · Updated last year
- [ICLR 2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining ☆19 · Feb 26, 2025 · Updated last year
- ☆21 · Mar 17, 2025 · Updated last year
- [ACM MM 2023] Code release of GCMA: Generative Cross-Modal Transferable Adversarial Attacks from Images to Videos ☆12 · Mar 29, 2024 · Updated 2 years ago
- The first toolkit for MLRM safety evaluation, providing a unified interface for mainstream models, datasets, and jailbreaking methods! ☆15 · Apr 8, 2025 · Updated last year
- ☆19 · Jun 18, 2025 · Updated 9 months ago
- A survey on harmful fine-tuning attacks for large language models (ACM CSUR) ☆238 · Feb 25, 2026 · Updated last month
- Code for the ACM MM paper: Backdoor Attack on Crowd Counting ☆17 · Jul 10, 2022 · Updated 3 years ago
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" ☆62 · Aug 8, 2024 · Updated last year
- Open-source red teaming framework for MLLMs with 42+ attack methods ☆241 · Mar 25, 2026 · Updated 3 weeks ago
- Agent Security Bench (ASB) ☆214 · Oct 27, 2025 · Updated 5 months ago
- Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models ☆28 · Mar 15, 2025 · Updated last year
- Code for Transferable Unlearnable Examples ☆22 · Mar 11, 2023 · Updated 3 years ago
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆83 · Oct 23, 2024 · Updated last year
- A collection of resources on attacks and defenses targeting text-to-image diffusion models ☆96 · Dec 20, 2025 · Updated 3 months ago
- Code repo for the paper "Attacking Vision-Language Computer Agents via Pop-ups" ☆51 · Dec 23, 2024 · Updated last year