Emoji Attack [ICML 2025]
☆41 · Updated Jul 15, 2025
Alternatives and similar repositories for EmojiAttack
Users interested in EmojiAttack are comparing it to the repositories listed below.
- [ACM MM 2024] ReToMe-VA: Recursive Token Merging for Video Diffusion-based Unrestricted Adversarial Attack · ☆14 · Updated Dec 20, 2024
- ☆24 · Updated Feb 17, 2026
- A Unified Benchmark and Toolbox for Multimodal Jailbreak Attack–Defense Evaluation · ☆59 · Updated Mar 2, 2026
- ☆14 · Updated Feb 26, 2025
- Code for “SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation” (ICLR 2025) · ☆24 · Updated Oct 23, 2025
- ☆40 · Updated May 17, 2025
- ☆16 · Updated Feb 8, 2024
- [ACM MM 2023] Code Release of GCMA: Generative Cross-Modal Transferable Adversarial Attacks from Images to Videos · ☆12 · Updated Mar 29, 2024
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … · ☆59 · Updated Sep 11, 2025
- [CVPR 2025] Official implementation for JOOD "Playing the Fool: Jailbreaking LLMs and Multimodal LLMs with Out-of-Distribution Strategy" · ☆21 · Updated Jun 11, 2025
- (TMLR) Fed-SB: A Silver Bullet for Extreme Communication Efficiency and Performance in (Private) Federated LoRA Fine-Tuning · ☆26 · Updated Oct 4, 2025
- [USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models · ☆51 · Updated Jan 11, 2025
- Data and code for "Fake News in Sheep's Clothing: Robust Fake News Detection Against LLM-Empowered Style Attacks" (KDD 2024) · ☆45 · Updated Jan 23, 2025
- [CVPR 2025] R-TPT: Improving Adversarial Robustness of Vision-Language Models through Test-Time Prompt Tuning · ☆22 · Updated Aug 28, 2025
- Official PyTorch implementation of "Query-Efficient Black-Box Red Teaming via Bayesian Optimization" (ACL'23) · ☆15 · Updated Jul 9, 2023
- Revisiting Character-level Adversarial Attacks for Language Models, ICML 2024 · ☆19 · Updated Feb 12, 2025
- Improved techniques for optimization-based jailbreaking on large language models (ICLR 2025) · ☆142 · Updated Apr 7, 2025
- Official code and data for the ACL 2024 Findings paper "An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models" · ☆25 · Updated Nov 10, 2024
- Safety at Scale: A Comprehensive Survey of Large Model Safety · ☆228 · Updated Feb 3, 2026
- ☆59 · Updated Jun 5, 2024
- Code for the paper "AsFT: Anchoring Safety During LLM Fine-Tuning Within Narrow Safety Basin" · ☆36 · Updated Jul 10, 2025
- [ICML 2025] X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP · ☆39 · Updated Feb 3, 2026
- ☆26 · Updated Feb 14, 2024
- [NeurIPS 2024] Lumen: a Large multimodal model with versatile vision-centric capabilities · ☆25 · Updated Sep 27, 2024
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" · ☆61 · Updated Aug 8, 2024
- [AAAI 2022] Code Release of Attacking Video Recognition Models with Bullet-Screen Comments · ☆25 · Updated Mar 30, 2024
- ☆73 · Updated Mar 30, 2025
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment (NeurIPS 2025) · ☆50 · Updated Nov 5, 2025
- Official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba… · ☆36 · Updated Mar 22, 2025
- [NeurIPS 2024] Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling · ☆34 · Updated Nov 8, 2024
- [ICLR 2024 Spotlight 🔥] [Best Paper Award, SoCal NLP 2023 🏆] Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal… · ☆80 · Updated Jun 6, 2024
- Enterprise AI Security Platform - Real-time firewall protection for LLM applications against prompt injection, data leakage, and function… · ☆23 · Updated Sep 14, 2025
- Code and data to go with the Zhu et al. paper "An Objective for Nuanced LLM Jailbreaks" · ☆36 · Updated Dec 18, 2024
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … · ☆82 · Updated this week
- ☆12 · Updated May 6, 2022
- [ICML 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping" · ☆165 · Updated May 2, 2025
- Code for the Findings of EMNLP 2023 paper: Multi-step Jailbreaking Privacy Attacks on ChatGPT · ☆36 · Updated Oct 15, 2023
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts · ☆192 · Updated Jun 26, 2025
- Code repo for the paper: Attacking Vision-Language Computer Agents via Pop-ups · ☆51 · Updated Dec 23, 2024