XuankunRong / Awesome-LVLM-Safety
A curated list of resources dedicated to the safety of Large Vision-Language Models. This repository aligns with our survey titled "A Survey of Safety on Large Vision-Language Models: Attacks, Defenses, and Evaluations."
★156 · Updated 3 weeks ago
Alternatives and similar repositories for Awesome-LVLM-Safety
Users interested in Awesome-LVLM-Safety are comparing it to the repositories listed below
- [ICLR 2024 Spotlight 🔥] [Best Paper Award, SoCal NLP 2023 🏆] Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal… ★73 · Updated last year
- [NAACL 2025 Main] Official Implementation of MLLMU-Bench ★38 · Updated 7 months ago
- Accepted by ECCV 2024 ★169 · Updated last year
- An implementation for MLLM oversensitivity evaluation ★14 · Updated 11 months ago
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi…" ★66 · Updated last year
- Accepted by IJCAI-24 Survey Track ★222 · Updated last year
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models ★252 · Updated last month
- The first toolkit for MLRM safety evaluation, providing a unified interface for mainstream models, datasets, and jailbreaking methods! ★13 · Updated 6 months ago
- [ICLR 2024] Towards Eliminating Hard Label Constraints in Gradient Inversion Attacks ★13 · Updated last year
- ECSO (make MLLMs safe with neither training nor any external models!) (https://arxiv.org/abs/2403.09572) ★33 · Updated last year
- [CVPR 2025] Official implementation for "Steering Away from Harm: An Adaptive Approach to Defending Vision Language Model Against Jailbre…" ★43 · Updated 3 months ago
- An up-to-date & curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources. ★409 · Updated this week
- Improved techniques for optimization-based jailbreaking on large language models (ICLR 2025) ★132 · Updated 6 months ago
- [ICCV 2025] Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Gene… ★28 · Updated 3 months ago
- This is an official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2… ★58 · Updated 7 months ago
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models. ★78 · Updated 9 months ago
- A toolbox for benchmarking trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Datasets and Benchmarks Track) ★169 · Updated 4 months ago
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ★76 · Updated this week
- [ICML 2025] X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP ★31 · Updated 4 months ago
- ★51 · Updated 11 months ago
- [CVPR 2024] MMA-Diffusion: MultiModal Attack on Diffusion Models ★312 · Updated last month
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models. [ICCV 2023 Oral] ★65 · Updated 2 years ago
- [NeurIPS 2023] Annual Conference on Neural Information Processing Systems ★218 · Updated 10 months ago
- ★52 · Updated 10 months ago
- ★38 · Updated last year
- ★48 · Updated last year
- Official repository for "Safety in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large Reasoning … ★78 · Updated 2 months ago
- ★26 · Updated last year
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ★177 · Updated 4 months ago
- ★13 · Updated last year