WUSTL-CSPL / SlowLiDAR
☆12 · Updated last year
Alternatives and similar repositories for SlowLiDAR
Users interested in SlowLiDAR are comparing it to the libraries listed below.
- [TMLR 2024] On the Adversarial Robustness of Camera-based 3D Object Detection ☆28 · Updated last year
- MSF-ADV is a novel physical-world adversarial attack method, which can fool the Multi Sensor Fusion (MSF) based autonomous driving (AD) p… ☆79 · Updated 4 years ago
- [CVPR 2022] Shape-invariant Adversarial Point Clouds ☆49 · Updated last year
- Implementation of "Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches" ☆24 · Updated 3 years ago
- A Backdoor Attack against 3D Point Cloud Classifiers (ICCV 2021) ☆19 · Updated 3 years ago
- The code of "Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds" (CVPR 2024) ☆35 · Updated last year
- ☆25 · Updated last year
- ☆16 · Updated 9 months ago
- [CVPR 2024] "Data Poisoning based Backdoor Attacks to Contrastive Learning": official code implementation ☆16 · Updated 7 months ago
- http://arxiv.org/abs/2403.17301 ☆48 · Updated 6 months ago
- The PyTorch implementation for the paper "Fusion is Not Enough: Single Modal Attack on Fusion Models for 3D Object Detection" ☆18 · Updated last year
- The official implementation for "Towards Transferable Targeted 3D Adversarial Attack in the Physical World" (CVPR 2024) ☆35 · Updated last year
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆48 · Updated last year
- The code for AAAI 2023 "Generating Transferable 3D Adversarial Point Cloud via Random Perturbation Factorization" ☆12 · Updated 2 years ago
- ☆68 · Updated last week
- A list of papers in NeurIPS 2022 related to adversarial attack and defense / AI security ☆71 · Updated 2 years ago
- [NeurIPS 2022] GAMA: Generative Adversarial Multi-Object Scene Attacks ☆18 · Updated 2 years ago
- [EMNLP 2024] Official implementation of CLEANGEN: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models ☆16 · Updated 6 months ago
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models" (IEEE ICASSP 2024). Demo: //124.220.228.133:11107 ☆17 · Updated last year
- [USENIX'23] TPatch: A Triggered Physical Adversarial Patch ☆23 · Updated 2 years ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆39 · Updated last year
- ☆24 · Updated last year
- List of state-of-the-art papers, code, and other resources ☆107 · Updated 2 years ago
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning ☆17 · Updated last year
- https://winterwindwang.github.io/Full-coverage-camouflage-adversarial-attack/ ☆18 · Updated 3 years ago
- Official code for reproducibility of the NeurIPS 2023 paper "Adversarial Examples Are Not Real Features" ☆17 · Updated last year
- Towards Defending against Adversarial Examples via Attack-Invariant Features ☆12 · Updated last year
- The official implementation of the paper "Untargeted Backdoor Attack against Object Detection" ☆26 · Updated 2 years ago
- ☆27 · Updated 3 years ago
- A repository introducing research topics related to protecting intellectual property (IP) of AI from a data-centric perspec… ☆22 · Updated last year