ATIpiu / SafeGenInject
Second-place solution for Track 1 of the Global AI Attack & Defense Challenge: safety vaccine injection for large-model image generation
☆17 · Updated 2 weeks ago
Related projects
Alternatives and complementary repositories for SafeGenInject
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a…" ☆56 · Updated last month
- Figure it out: Analyzing-based Jailbreak Attack on Large Language Models ☆16 · Updated 2 weeks ago
- Accepted by IJCAI-24 Survey Track ☆159 · Updated 2 months ago
- ☆12 · Updated 3 months ago
- A summary of adversarial attacks against large language models ☆13 · Updated 11 months ago
- Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆87 · Updated 6 months ago
- Some baseline attack methods implemented in PyTorch ☆11 · Updated 3 years ago
- 😎 An up-to-date, curated list of papers, methods & resources on attacks against Large Vision-Language Models ☆133 · Updated last week
- AI Model Security Reading Notes ☆35 · Updated 3 months ago
- ☆74 · Updated 7 months ago
- 010Editor template for .abc (Open/HarmonyOS Ark Bytecode) files ☆38 · Updated last month
- Simultaneously Optimizing Perturbations and Positions for Black-box Adversarial Patch Attacks (TPAMI 2022) ☆28 · Updated last year
- ☆28 · Updated last year
- YiJian-Community: a full-process automated large model safety evaluation tool designed for academic research ☆72 · Updated last month
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking…" ☆15 · Updated last month
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM ☆20 · Updated 5 months ago
- ☆86 · Updated 9 months ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆31 · Updated 8 months ago
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" ☆121 · Updated 9 months ago
- A toolbox for benchmarking trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Track Datasets and Benchmarks) ☆108 · Updated 2 weeks ago
- A list of recent adversarial attack and defense papers (including those on large language models) ☆27 · Updated this week
- JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and further assess… ☆35 · Updated 4 months ago
- ☆24 · Updated 2 months ago
- ☆20 · Updated 4 months ago
- ☆33 · Updated 11 months ago
- Ghidra/IDA Pro plugins to load similarity results from binaryai.net ☆77 · Updated last year
- ☆29 · Updated 6 months ago
- Adversarial Stickers: A Stealthy Attack Method in the Physical World (TPAMI 2022) ☆33 · Updated last year
- ☆20 · Updated 2 months ago
- Repository for the paper (AAAI 2024, Oral) "Visual Adversarial Examples Jailbreak Large Language Models" ☆183 · Updated 6 months ago