WUSTL-CSPL / SlowLiDAR (☆12, updated 2 years ago)
Alternatives and similar repositories for SlowLiDAR
Users interested in SlowLiDAR are comparing it to the repositories listed below.
- [CVPR 2022] Shape-invariant Adversarial Point Clouds (☆49, updated 2 years ago)
- [TMLR 2024] On the Adversarial Robustness of Camera-based 3D Object Detection (☆30, updated last year)
- MSF-ADV is a novel physical-world adversarial attack method, which can fool the Multi Sensor Fusion (MSF) based autonomous driving (AD) p… (☆80, updated 4 years ago)
- The code of "Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds", CVPR 2024 (☆36, updated last year)
- Implementation of "Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches" (☆25, updated 3 years ago)
- The official implementation of "Towards Transferable Targeted 3D Adversarial Attack in the Physical World" (CVPR 2024) (☆38, updated last year)
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models", IEEE ICASSP 2024. Demo: 124.220.228.133:11107 (☆19, updated last year)
- http://arxiv.org/abs/2403.17301 (☆52, updated last week)
- A Backdoor Attack against 3D Point Cloud Classifiers (ICCV 2021) (☆19, updated 4 years ago)
- [CVPR 2024] "Data Poisoning based Backdoor Attacks to Contrastive Learning": official code implementation (☆16, updated 11 months ago)
- (no description) (☆17, updated last month)
- [EMNLP 24] Official implementation of CLEANGEN: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models (☆20, updated 10 months ago)
- https://winterwindwang.github.io/Full-coverage-camouflage-adversarial-attack/ (☆19, updated 3 years ago)
- Code for the AAAI 2023 paper "Generating Transferable 3D Adversarial Point Cloud via Random Perturbation Factorization" (☆13, updated 3 years ago)
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models (☆19, updated 10 months ago)
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images (☆42, updated last year)
- (no description) (☆26, updated last year)
- Towards Defending against Adversarial Examples via Attack-Invariant Features (☆12, updated 2 years ago)
- [NeurIPS 2022] GAMA: Generative Adversarial Multi-Object Scene Attacks (☆19, updated 2 years ago)
- [USENIX'23] TPatch: A Triggered Physical Adversarial Patch (☆24, updated 2 years ago)
- The PyTorch implementation for the paper "Fusion is Not Enough: Single Modal Attack on Fusion Models for 3D Object Detection" (☆20, updated last year)
- Official code for reproducibility of the NeurIPS 2023 paper "Adversarial Examples Are Not Real Features" (☆16, updated last year)
- (no description) (☆14, updated 3 years ago)
- The official implementation of the CVPR 2025 paper "Invisible Backdoor Attack against Self-supervised Learning" (☆17, updated 6 months ago)
- The official implementation of our paper "Untargeted Backdoor Attack against Object Detection" (☆26, updated 2 years ago)
- (no description) (☆30, updated last year)
- Data-Efficient Backdoor Attacks (☆20, updated 3 years ago)
- Code for the paper "PAD: Patch-Agnostic Defense against Adversarial Patch Attacks" (CVPR 2024) (☆27, updated last year)
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning (☆20, updated last year)
- (no description) (☆14, updated last year)