WUSTL-CSPL / SlowLiDAR
☆12 · Updated last year
Alternatives and similar repositories for SlowLiDAR
Users interested in SlowLiDAR are comparing it to the repositories listed below.
- [TMLR 2024] On the Adversarial Robustness of Camera-based 3D Object Detection ☆26 · Updated last year
- MSF-ADV is a novel physical-world adversarial attack method, which can fool the Multi Sensor Fusion (MSF) based autonomous driving (AD) p… ☆79 · Updated 4 years ago
- [CVPR 2022] Shape-invariant Adversarial Point Clouds ☆48 · Updated last year
- Implementation of "Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches" ☆24 · Updated 2 years ago
- A Backdoor Attack against 3D Point Cloud Classifiers (ICCV 2021) ☆19 · Updated 3 years ago
- The code of "Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds" (CVPR 2024) ☆34 · Updated last year
- The official implementation for "Towards Transferable Targeted 3D Adversarial Attack in the Physical World" (CVPR 2024) ☆32 · Updated last year
- https://winterwindwang.github.io/Full-coverage-camouflage-adversarial-attack/ ☆17 · Updated 3 years ago
- [CVPR 2024] "Data Poisoning based Backdoor Attacks to Contrastive Learning": official code implementation ☆16 · Updated 6 months ago
- The code for AAAI 2023 (Generating Transferable 3D Adversarial Point Cloud via Random Perturbation Factorization) ☆12 · Updated 2 years ago
- http://arxiv.org/abs/2403.17301 ☆45 · Updated 4 months ago
- [EMNLP 24] Official Implementation of CLEANGEN: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models ☆16 · Updated 5 months ago
- A list of papers in NeurIPS 2022 related to adversarial attack and defense / AI security ☆71 · Updated 2 years ago
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆48 · Updated last year
- Towards Defending against Adversarial Examples via Attack-Invariant Features ☆12 · Updated last year
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆37 · Updated last year
- Official code for reproducibility of the NeurIPS 2023 paper "Adversarial Examples Are Not Real Features" ☆16 · Updated last year
- [NeurIPS 2022] GAMA: Generative Adversarial Multi-Object Scene Attacks ☆17 · Updated last year
- [S&P '24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆18 · Updated 5 months ago
- ☆14 · Updated 2 years ago
- A PyTorch port of the Neural 3D Mesh Renderer ☆12 · Updated 3 years ago
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning ☆17 · Updated last year
- The official implementation of the CVPR 2025 paper "Invisible Backdoor Attack against Self-supervised Learning" ☆15 · Updated last month
- [ICLR 2022] "Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?" by Yonggan Fu, Shunyao Zhang, Shang Wu… ☆34 · Updated 3 years ago
- ☆57 · Updated 2 months ago
- ☆17 · Updated 3 years ago
- [NeurIPS 2023] Black-box Backdoor Defense via Zero-shot Image Purification ☆11 · Updated last year
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models" (IEEE ICASSP 2024). Demo: 124.220.228.133:11107 ☆17 · Updated last year
- ☆24 · Updated 11 months ago
- ☆15 · Updated 8 months ago