xingjunm / Awesome-Large-Model-Safety
Safety at Scale: A Comprehensive Survey of Large Model Safety
☆126 · Updated last month
Alternatives and similar repositories for Awesome-Large-Model-Safety:
Users interested in Awesome-Large-Model-Safety are comparing it to the repositories listed below.
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆86 · Updated 5 months ago
- 😎 up-to-date & curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources. ☆249 · Updated last week
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆123 · Updated last month
- Accepted by ECCV 2024 ☆115 · Updated 5 months ago
- BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models ☆124 · Updated last month
- Accepted by IJCAI-24 Survey Track ☆198 · Updated 7 months ago
- A toolbox for benchmarking trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Track Datasets and Benchmarks) ☆140 · Updated last week
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ☆53 · Updated this week
- Code for the ACM MM 2024 paper: White-box Multimodal Jailbreaks Against Large Vision-Language Models ☆23 · Updated 2 months ago
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… ☆51 · Updated 8 months ago
- Awesome jailbreak and red-teaming arXiv papers (automatically updated every 12 hours) ☆22 · Updated this week
- Repository for the paper (AAAI 2024, Oral): Visual Adversarial Examples Jailbreak Large Language Models ☆210 · Updated 10 months ago
- [CCS 2024] Optimization-based Prompt Injection Attack to LLM-as-a-Judge ☆15 · Updated 4 months ago
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" ☆135 · Updated last year
- Official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2… ☆50 · Updated this week
- A list of recent adversarial attack and defense papers (including those on large language models) ☆37 · Updated this week
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models ☆144 · Updated this week
- Agent Security Bench (ASB) ☆66 · Updated last week
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆33 · Updated last year