AI45Lab / Awesome-Trustworthy-Embodied-AI
☆91 · Updated this week
Alternatives and similar repositories for Awesome-Trustworthy-Embodied-AI
Users interested in Awesome-Trustworthy-Embodied-AI are comparing it to the repositories listed below
- Official repo of "Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics" ☆63 · Updated 5 months ago
- [NeurIPS-2023] Annual Conference on Neural Information Processing Systems ☆224 · Updated last year
- ☆51 · Updated 11 months ago
- A toolbox for benchmarking Multimodal LLM Agents' trustworthiness across truthfulness, controllability, safety and privacy dimensions thro… ☆62 · Updated 3 weeks ago
- A toolbox for benchmarking the trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Datasets and Benchmarks Track) ☆174 · Updated 7 months ago
- Official GitHub repository for the paper "Adversarial Attacks on Robotic Vision Language Action Models" ☆27 · Updated 8 months ago
- A curated list of awesome papers on dataset reduction, including dataset distillation (dataset condensation) and dataset pruning (coreset… ☆59 · Updated last year
- A curated paper list on latent space (Awesome Latent Space) ☆305 · Updated last week
- Focused on the safety and security of Embodied AI ☆93 · Updated last month
- Code for the paper "SafeAgentBench: A Benchmark for Safe Task Planning of Embodied LLM Agents" ☆62 · Updated 11 months ago
- Official repository for the ICLR 2025 paper "BadRobot: Manipulating Embodied LLMs in the Physical World" ☆40 · Updated 7 months ago
- Provides .bst files for the NeurIPS LaTeX template ☆49 · Updated 9 months ago
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ☆81 · Updated last week
- Open-source red-teaming framework for MLLMs with 37+ attack methods ☆209 · Updated 2 weeks ago
- Code for "Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds" (CVPR 2024) ☆36 · Updated last year
- ☆70 · Updated last year
- ☆109 · Updated last year
- IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models ☆59 · Updated 2 years ago
- This repository contains the ViewFool and ImageNet-V proposed in the paper "ViewFool: Evaluating the Robustness of Visual Recognition to … ☆33 · Updated 2 years ago
- Official repository for the paper "MLLM-Protector: Ensuring MLLM's Safety without Hurting Performance" ☆44 · Updated last year
- Responsible Robotic Manipulation ☆15 · Updated 5 months ago
- [ICLR 2026] Official code for "Doxing via the Lens: Revealing Location-related Privacy Leakage on Multi-modal Large Reasoning Models" ☆22 · Updated this week
- ☆10 · Updated 8 months ago
- Accepted by the IJCAI-24 Survey Track ☆230 · Updated last year
- Code and data for the paper "Can Watermarked LLMs be Identified by Users via Crafted Prompts?", accepted by ICLR 2025 (Spotlight) ☆28 · Updated last year
- [EMNLP 2025] Code repo for the paper "X-Boundary: Establishing Exact Safety Boundary to Shield LLMs from Multi-Turn Jailbreaks without Com… ☆38 · Updated 2 months ago
- [NeurIPS 2025 Spotlight] Towards Safety Alignment of Vision-Language-Action Model via Constrained Learning ☆108 · Updated 2 weeks ago
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆58 · Updated last year
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models ☆301 · Updated 2 weeks ago
- ☆35 · Updated last year