WUSTL-CSPL / SlowLiDAR
☆11 · Updated last year
Alternatives and similar repositories for SlowLiDAR
Users interested in SlowLiDAR are comparing it to the repositories listed below.
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆36 · Updated last year
- MSF-ADV is a novel physical-world adversarial attack method, which can fool the Multi Sensor Fusion (MSF) based autonomous driving (AD) p… ☆79 · Updated 3 years ago
- [CVPR 2024] "Data Poisoning based Backdoor Attacks to Contrastive Learning": official code implementation. ☆16 · Updated 5 months ago
- ☆47 · Updated last year
- ☆23 · Updated 10 months ago
- ☆30 · Updated 2 months ago
- Official PyTorch implementation of "Towards Adversarial Attack on Vision-Language Pre-training Models" ☆59 · Updated 2 years ago
- A list of papers in NeurIPS 2022 related to adversarial attack and defense / AI security. ☆72 · Updated 2 years ago
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning ☆16 · Updated last year
- https://winterwindwang.github.io/Full-coverage-camouflage-adversarial-attack/ ☆17 · Updated 3 years ago
- REAP: A Large-Scale Realistic Adversarial Patch Benchmark ☆27 · Updated last year
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models", IEEE ICASSP 2024. Demo//124.220.228.133:11107 ☆17 · Updated 11 months ago
- A PyTorch port of the Neural 3D Mesh Renderer ☆12 · Updated 2 years ago
- Code for "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier" ☆42 · Updated 2 years ago
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment ☆17 · Updated last month
- ☆44 · Updated 7 months ago
- [EMNLP 24] Official Implementation of CLEANGEN: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models ☆15 · Updated 4 months ago
- The official implementation of "Towards Transferable Targeted 3D Adversarial Attack in the Physical World" (CVPR 2024) ☆32 · Updated 11 months ago
- [USENIX'23] TPatch: A Triggered Physical Adversarial Patch ☆22 · Updated last year
- ☆30 · Updated last year
- [ICLR 2022] "Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?" by Yonggan Fu, Shunyao Zhang, Shang Wu… ☆33 · Updated 3 years ago
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆48 · Updated last year
- The official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2… ☆55 · Updated 3 months ago
- ☆31 · Updated 3 years ago
- ☆109 · Updated last month
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆18 · Updated 4 months ago
- Adversarial Texture for Fooling Person Detectors in the Physical World ☆59 · Updated 8 months ago
- Towards Defending against Adversarial Examples via Attack-Invariant Features ☆12 · Updated last year
- Code for "Prior-Guided Adversarial Initialization for Fast Adversarial Training" (ECCV 2022) ☆26 · Updated 2 years ago
- [CVPR 2023] The official implementation of the CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust… ☆23 · Updated 2 years ago