Safety at Scale: A Comprehensive Survey of Large Model and Agent Safety
★263 · Apr 12, 2026 · Updated 3 weeks ago
Alternatives and similar repositories for Awesome-Large-Model-Safety
Users interested in Awesome-Large-Model-Safety are comparing it to the libraries listed below.
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models · ★297 · Mar 13, 2026 · Updated last month
- An up-to-date and curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources. · ★541 · Apr 17, 2026 · Updated 2 weeks ago
- ★28 · Feb 19, 2025 · Updated last year
- ★24 · Feb 17, 2026 · Updated 2 months ago
- ★14 · Feb 26, 2025 · Updated last year
- [ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks · ★31 · Nov 2, 2025 · Updated 6 months ago
- [NeurIPS 2023] Annual Conference on Neural Information Processing Systems · ★226 · Dec 22, 2024 · Updated last year
- [ICML 2025] X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP · ★44 · Feb 3, 2026 · Updated 3 months ago
- Code for the ICCV 2025 paper "IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves" · ★17 · Jul 11, 2025 · Updated 9 months ago
- Emoji Attack [ICML 2025] · ★41 · Jul 15, 2025 · Updated 9 months ago
- A curated list of resources dedicated to the safety of Large Vision-Language Models. This repository aligns with our survey titled A Surv… · ★201 · Feb 6, 2026 · Updated 3 months ago
- [ArXiv 2025] Denial-of-Service Poisoning Attacks on Large Language Models · ★23 · Oct 22, 2024 · Updated last year
- A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.). · ★1,949 · Apr 2, 2026 · Updated last month
- Awesome Large Reasoning Model (LRM) Safety. This repository is used to collect security-related research on large reasoning models such as … · ★82 · Updated this week
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… · ★1,835 · Apr 18, 2026 · Updated 2 weeks ago
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … · ★62 · Apr 29, 2026 · Updated last week
- [CVPR 2025] AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-Language Models · ★71 · Aug 7, 2025 · Updated 8 months ago
- The official repository of "Unnatural Language Are Not Bugs but Features for LLMs" · ★24 · May 20, 2025 · Updated 11 months ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images · ★42 · Jan 25, 2024 · Updated 2 years ago
- Official repository for "On the Multi-modal Vulnerability of Diffusion Models" · ★16 · Jul 15, 2024 · Updated last year
- ★185 · Oct 31, 2025 · Updated 6 months ago
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training · ★32 · Jan 9, 2022 · Updated 4 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image · ★36 · Oct 29, 2025 · Updated 6 months ago
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers · ★16 · Oct 24, 2024 · Updated last year
- [ICLR 2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining · ★19 · Feb 26, 2025 · Updated last year
- ★21 · Mar 17, 2025 · Updated last year
- [ACM MM 2023] Code release of GCMA: Generative Cross-Modal Transferable Adversarial Attacks from Images to Videos · ★12 · Mar 29, 2024 · Updated 2 years ago
- The first toolkit for MLRM safety evaluation, providing a unified interface for mainstream models, datasets, and jailbreaking methods! · ★15 · Apr 8, 2025 · Updated last year
- ★21 · Jun 18, 2025 · Updated 10 months ago
- A survey on harmful fine-tuning attacks for large language models (ACM CSUR) · ★239 · Updated this week
- Code for the ACM MM paper "Backdoor Attack on Crowd Counting" · ★17 · Jul 10, 2022 · Updated 3 years ago
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" · ★62 · Aug 8, 2024 · Updated last year
- Open-source red teaming framework for MLLMs with 42+ attack methods · ★242 · Mar 25, 2026 · Updated last month
- Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models · ★30 · Mar 15, 2025 · Updated last year
- Code for "Transferable Unlearnable Examples" · ★22 · Mar 11, 2023 · Updated 3 years ago
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) · ★85 · Oct 23, 2024 · Updated last year
- A collection of resources on attacks and defenses targeting text-to-image diffusion models · ★98 · Dec 20, 2025 · Updated 4 months ago
- Agent Security Bench (ASB) · ★228 · Apr 16, 2026 · Updated 2 weeks ago
- Code repo for the paper "Attacking Vision-Language Computer Agents via Pop-ups" · ★51 · Dec 23, 2024 · Updated last year