ZhangHangTao / Awesome-Embodied-AI-Safety
Focused on the safety and security of Embodied AI
★42 · Updated last week
Alternatives and similar repositories for Awesome-Embodied-AI-Safety
Users who are interested in Awesome-Embodied-AI-Safety are comparing it to the repositories listed below
- [ICLR 2024 Spotlight 🔥] [Best Paper Award SoCal NLP 2023 🏆] Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal… ★56 · Updated last year
- ★18 · Updated 7 months ago
- This is an official repository of ``VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models'' (NeurIPS 2… ★53 · Updated 2 months ago
- ★43 · Updated 6 months ago
- A package that achieves 95%+ transfer attack success rate against GPT-4 ★20 · Updated 7 months ago
- ★54 · Updated 2 weeks ago
- ★22 · Updated 9 months ago
- [ECCV 2024] Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajector… ★26 · Updated 6 months ago
- ★47 · Updated 9 months ago
- [ACL 2025] Data and Code for Paper VLSBench: Unveiling Visual Leakage in Multimodal Safety ★40 · Updated 3 weeks ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ★35 · Updated last year
- [ICCV 2023 Oral] Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models ★60 · Updated last year
- [NeurIPS 2023] Annual Conference on Neural Information Processing Systems ★199 · Updated 5 months ago
- ★73 · Updated 10 months ago
- [CVPR 2025] Official implementation for "Steering Away from Harm: An Adaptive Approach to Defending Vision Language Model Against Jailbre… ★20 · Updated last week
- ★46 · Updated 2 months ago
- A toolbox for benchmarking trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Datasets and Benchmarks Track) ★145 · Updated 2 months ago
- [ECCV'24 Oral] The official GitHub page for ''Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking… ★28 · Updated 7 months ago
- Official PyTorch implementation of "Towards Adversarial Attack on Vision-Language Pre-training Models" ★58 · Updated 2 years ago
- Benchmarking Physical Risk Awareness of Foundation Model-based Embodied AI Agents ★18 · Updated 6 months ago
- [ECCV'24 Oral] The official GitHub page for ''Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking… ★19 · Updated 7 months ago
- Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Generator, Gener… ★17 · Updated 7 months ago
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ★28 · Updated 3 months ago
- ★31 · Updated 2 months ago
- Code for NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ★49 · Updated 4 months ago
- Code for ACM MM 2024 paper: White-box Multimodal Jailbreaks Against Large Vision-Language Models ★27 · Updated 5 months ago
- Official repo of Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics ★29 · Updated 2 months ago
- ★26 · Updated 2 years ago
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ★64 · Updated this week
- Accepted by ECCV 2024 ★130 · Updated 7 months ago