ZhangHangTao / Awesome-Embodied-AI-Safety
Focused on the safety and security of Embodied AI
☆57 · Updated 2 months ago
Alternatives and similar repositories for Awesome-Embodied-AI-Safety
Users interested in Awesome-Embodied-AI-Safety are comparing it to the repositories listed below
- A toolbox for benchmarking Multimodal LLM Agents' trustworthiness across truthfulness, controllability, safety and privacy dimensions thro… ☆50 · Updated 2 months ago
- 😎 up-to-date & curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources ☆364 · Updated 2 weeks ago
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models ☆216 · Updated 2 weeks ago
- ☆46 · Updated 8 months ago
- This is the official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2… ☆57 · Updated 5 months ago
- ☆59 · Updated 3 months ago
- Official repo of Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics ☆34 · Updated 2 weeks ago
- This is the official repository for the ICLR 2025 accepted paper BadRobot: Manipulating Embodied LLMs in the Physical World ☆31 · Updated 2 months ago
- [ICLR 2024 Spotlight 🔥] [Best Paper Award SoCal NLP 2023 🏆] Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal… ☆67 · Updated last year
- [NeurIPS-2023] Annual Conference on Neural Information Processing Systems ☆209 · Updated 8 months ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆27 · Updated 10 months ago
- A toolbox for benchmarking trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Datasets and Benchmarks Track) ☆162 · Updated 2 months ago
- ☆18 · Updated 9 months ago
- [ACL 2025] Data and code for the paper VLSBench: Unveiling Visual Leakage in Multimodal Safety ☆48 · Updated last month
- Accepted by ECCV 2024 ☆149 · Updated 10 months ago
- Benchmarking Physical Risk Awareness of Foundation Model-based Embodied AI Agents ☆20 · Updated 9 months ago
- ☆102 · Updated last year
- [ECCV 2024] Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajector… ☆28 · Updated 9 months ago
- Code for ACM MM 2024 paper: White-box Multimodal Jailbreaks Against Large Vision-Language Models ☆30 · Updated 8 months ago
- Official PyTorch implementation of Towards Adversarial Attack on Vision-Language Pre-training Models ☆62 · Updated 2 years ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆38 · Updated last year
- Accepted by IJCAI-24 Survey Track ☆211 · Updated last year
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆184 · Updated 6 months ago
- ☆47 · Updated last year
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models [ICCV 2023 Oral] ☆64 · Updated last year
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment ☆21 · Updated 3 months ago
- Summaries of ICML 2024 papers ☆11 · Updated last year
- A package that achieves 95%+ transfer attack success rate against GPT-4 ☆23 · Updated 10 months ago
- ☆58 · Updated 5 months ago
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆164 · Updated 2 months ago