Safety at Scale: A Comprehensive Survey of Large Model Safety
☆242 · Updated Mar 18, 2026
Alternatives and similar repositories for Awesome-Large-Model-Safety
Users interested in Awesome-Large-Model-Safety are comparing it to the repositories listed below.
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models — ☆285, updated Mar 13, 2026
- 😎 An up-to-date, curated list of papers, methods, and resources on attacks against large vision-language models — ☆521, updated this week
- ☆26, updated Feb 19, 2025
- ☆24, updated Feb 17, 2026
- ☆14, updated Feb 26, 2025
- [ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks — ☆31, updated Nov 2, 2025
- [NeurIPS 2023] Annual Conference on Neural Information Processing Systems — ☆225, updated Dec 22, 2024
- [ICML 2025] X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP — ☆42, updated Feb 3, 2026
- Code for the ICCV 2025 paper "IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves" — ☆17, updated Jul 11, 2025
- [ICML 2025] Emoji Attack — ☆41, updated Jul 15, 2025
- [arXiv 2025] Denial-of-Service Poisoning Attacks on Large Language Models — ☆23, updated Oct 22, 2024
- A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.) — ☆1,899, updated Mar 16, 2026
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … — ☆82, updated Mar 20, 2026
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … — ☆59, updated Sep 11, 2025
- [CVPR 2025] AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-Language Models — ☆67, updated Aug 7, 2025
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… — ☆1,802, updated Mar 20, 2026
- Official repository of "Unnatural Language Are Not Bugs but Features for LLMs" — ☆24, updated May 20, 2025
- A curated list of resources dedicated to the safety of Large Vision-Language Models. This repository aligns with our survey titled A Surv… — ☆197, updated Feb 6, 2026
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images — ☆43, updated Jan 25, 2024
- Official repository for "On the Multi-modal Vulnerability of Diffusion Models" — ☆16, updated Jul 15, 2024
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training — ☆32, updated Jan 9, 2022
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image — ☆36, updated Oct 29, 2025
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers — ☆16, updated Oct 24, 2024
- [ICLR 2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining — ☆19, updated Feb 26, 2025
- ☆21, updated Mar 17, 2025
- [ACM MM 2023] Code release for GCMA: Generative Cross-Modal Transferable Adversarial Attacks from Images to Videos — ☆12, updated Mar 29, 2024
- The first toolkit for MLRM safety evaluation, providing a unified interface for mainstream models, datasets, and jailbreaking methods — ☆15, updated Apr 8, 2025
- ☆18, updated Jun 18, 2025
- Agent Security Bench (ASB) — ☆201, updated Oct 27, 2025
- A Unified Benchmark and Toolbox for Multimodal Jailbreak Attack–Defense Evaluation — ☆63, updated Mar 2, 2026
- Code for the ACM MM paper "Backdoor Attack on Crowd Counting" — ☆17, updated Jul 10, 2022
- Open-source red teaming framework for MLLMs with 42+ attack methods — ☆236, updated Mar 16, 2026
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" — ☆62, updated Aug 8, 2024
- Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models — ☆27, updated Mar 15, 2025
- Code for "Transferable Unlearnable Examples" — ☆22, updated Mar 11, 2023
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) — ☆83, updated Oct 23, 2024
- A collection of resources on attacks and defenses targeting text-to-image diffusion models — ☆96, updated Dec 20, 2025
- Code repository for the paper "Attacking Vision-Language Computer Agents via Pop-ups" — ☆51, updated Dec 23, 2024
- A survey on harmful fine-tuning attacks for large language models — ☆235, updated Feb 25, 2026