☆76 · Updated Jan 21, 2026
Alternatives and similar repositories for Foundation-Model-Paper-Notes
Users interested in Foundation-Model-Paper-Notes are comparing it to the libraries listed below.
- Official PyTorch implementation of Towards Adversarial Attack on Vision-Language Pre-training Models (☆66 · Updated Mar 20, 2023)
- ☆20 · Updated Jan 15, 2024
- [CVPR2025] Official Repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment (☆27 · Updated Jun 11, 2025)
- [ECCV2024] Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajector… (☆30 · Updated Nov 15, 2025)
- ☆60 · Updated Jun 5, 2024
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models (☆60 · Updated Apr 8, 2024)
- Official repository for "On the Multi-modal Vulnerability of Diffusion Models" (☆16 · Updated Jul 15, 2024)
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] (☆111 · Updated Sep 27, 2024)
- [ICLR 2024 Spotlight 🔥] [Best Paper Award SoCal NLP 2023 🏆] Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal… (☆80 · Updated Jun 6, 2024)
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models (☆58 · Updated Dec 20, 2024)
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" (☆87 · Updated Nov 28, 2023)
- Code for the ICLR 2025 paper "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" (☆37 · Updated Jun 1, 2025)
- Source code for MEA-Defender; the paper was accepted to the IEEE Symposium on Security and Privacy (S&P) 2024 (☆29 · Updated Nov 19, 2023)
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… (☆1,802 · Updated this week)
- [ICCV-2025] Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Gene… (☆36 · Updated Jul 10, 2025)
- Text file containing NSFW words aggregated from various sources (☆10 · Updated Aug 23, 2020)
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models [ICCV 2023 Oral] (☆72 · Updated Sep 6, 2023)
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models (☆244 · Updated Jan 27, 2026)
- The official repository of "Unnatural Language Are Not Bugs but Features for LLMs" (☆24 · Updated May 20, 2025)
- [NeurIPS 2024] Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling (☆34 · Updated Nov 8, 2024)
- ☆55 · Updated Dec 7, 2024
- 😎 Up-to-date & curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources (☆517 · Updated Mar 10, 2026)
- Repository for the AAAI 2024 (Oral) paper "Visual Adversarial Examples Jailbreak Large Language Models" (☆269 · Updated May 13, 2024)
- ☆13 · Updated Jan 14, 2026
- ☆13 · Updated Jun 25, 2025
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as… (☆82 · Updated this week)
- ☆78 · Updated Dec 19, 2024
- ☆12 · Updated May 6, 2022
- Universal Adversarial Perturbations for Vision-Language Pre-trained Models (☆24 · Updated Aug 8, 2025)
- ☆181 · Updated Oct 31, 2025
- Official codebase for "Image Hijacks: Adversarial Images Can Control Generative Models at Runtime" (☆54 · Updated Sep 19, 2023)
- A collection of resources on attacks and defenses targeting text-to-image diffusion models (☆96 · Updated Dec 20, 2025)
- ☆109 · Updated Feb 16, 2024
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster… (☆59 · Updated Sep 11, 2025)
- A reading list for large-model safety, security, and privacy (including Awesome LLM Security, Safety, etc.) (☆1,899 · Updated this week)
- Repo for the arXiv preprint "Gradient-based Adversarial Attacks against Text Transformers" (☆110 · Updated Dec 28, 2022)
- (AAAI 24) Step Vulnerability Guided Mean Fluctuation Adversarial Attack against Conditional Diffusion Models (☆11 · Updated Oct 12, 2024)
- TrustAgent: Towards Safe and Trustworthy LLM-based Agents (☆56 · Updated Feb 7, 2025)
- ☆40 · Updated Nov 4, 2024