snu-mllab / DiscreteBlockBayesAttack
Official PyTorch implementation of "Query-Efficient and Scalable Black-Box Adversarial Attacks on Discrete Sequential Data via Bayesian Optimization" (ICML'22)
☆23 · Updated last year
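For orientation, here is a minimal, heavily simplified sketch of the kind of query-efficient black-box loop the title refers to: a surrogate model is fit to the victim model's responses on a handful of discrete perturbations (e.g., word substitutions), and an acquisition function decides which candidate to query next. Everything below (`victim_loss`, the toy vocabulary, the one-hot encoding, the GP surrogate with expected improvement, the random candidate pool) is an illustrative stand-in, not this repository's API or the paper's exact block-wise algorithm.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

VOCAB = 50       # hypothetical number of candidate substitutions per position
POSITIONS = 5    # hypothetical number of editable token positions

def victim_loss(x):
    """Stand-in black-box objective: higher = closer to flipping the prediction.
    A real attack would query the victim model here (one query per call)."""
    return float(np.sin(x).sum() + 0.01 * rng.standard_normal())

def encode(x):
    """One-hot encode a discrete substitution vector for the surrogate."""
    out = np.zeros(POSITIONS * VOCAB)
    out[np.arange(POSITIONS) * VOCAB + x] = 1.0
    return out

def expected_improvement(mu, sigma, best):
    """Standard EI acquisition for maximization."""
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

# Seed the surrogate with a few random queries.
X = [rng.integers(0, VOCAB, size=POSITIONS) for _ in range(10)]
y = [victim_loss(x) for x in X]

for _ in range(40):  # remaining query budget
    gp = GaussianProcessRegressor(normalize_y=True).fit(
        np.stack([encode(x) for x in X]), y)
    # Score a random pool of candidate perturbations with the acquisition
    # function instead of spending victim-model queries on each one.
    pool = [rng.integers(0, VOCAB, size=POSITIONS) for _ in range(256)]
    mu, sigma = gp.predict(np.stack([encode(x) for x in pool]), return_std=True)
    nxt = pool[int(np.argmax(expected_improvement(mu, sigma, max(y))))]
    X.append(nxt)
    y.append(victim_loss(nxt))

print("best objective found:", max(y))
```

The point of the surrogate is to spend the limited query budget only on promising candidates, which is the query efficiency the title refers to.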
Related projects
Alternatives and complementary repositories for DiscreteBlockBayesAttack
- This repo is the official implementation of the ICLR'23 paper "Towards Robustness Certification Against Universal Perturbations." We calc… ☆12 · Updated last year
- ☆53 · Updated last year
- [NeurIPS 2021] Fast Certified Robust Training with Short Warmup ☆23 · Updated last year
- Code for the paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" ☆41 · Updated 6 months ago
- ☆12 · Updated 3 months ago
- Attack AlphaZero Go agents (NeurIPS 2022) ☆20 · Updated last year
- "Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers" (NeurIPS 2019, previously called "A Stratified Approach … ☆17 · Updated 5 years ago
- ☆25 · Updated 5 months ago
- Code repo for the paper "Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis" (https://arxiv.org/abs/2406.10794… ☆12 · Updated 3 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆28 · Updated 4 months ago
- PyTorch implementation of NPAttack ☆12 · Updated 4 years ago
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆55 · Updated last year
- Code for the paper "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆16 · Updated last year
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" ☆11 · Updated 2 years ago
- Code for the arXiv paper "When Do Universal Image Jailbreaks Transfer Between Vision-Language Models?" ☆15 · Updated this week
- ☆9 · Updated 3 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- The official implementation of the USENIX Security'23 paper "Meta-Sift" -- ten minutes or less to find a 1000-size or larger clean subset on … ☆18 · Updated last year
- [CVPR 2022] "Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free" by Tianlong Chen*, Zhenyu Zhang*, Yihua Zhang*, Shiyu C… ☆25 · Updated 2 years ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated last year
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆24 · Updated this week
- Official repository for "LAVA: Data Valuation without Pre-Specified Learning Algorithms" (ICLR 2023) ☆44 · Updated 5 months ago
- ACL 2021 - Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble ☆15 · Updated last year
- Implementation of Confidence-Calibrated Adversarial Training (CCAT) ☆45 · Updated 4 years ago
- PyTorch implementation of Adversarially Robust Distillation (ARD) ☆59 · Updated 5 years ago
- Starter kit and data loading code for the Trojan Detection Challenge NeurIPS 2022 competition ☆33 · Updated last year
- ☆34 · Updated 2 weeks ago
- [EMNLP 2022] Distillation-Resistant Watermarking (DRW) for Model Protection in NLP ☆12 · Updated last year
- Code for the ICLR 2021 oral paper "Geometry-aware Instance-reweighted Adversarial Training" ☆57 · Updated 3 years ago
- Implementation for "Understanding Robust Overfitting of Adversarial Training and Beyond" (ICML'22) ☆12 · Updated 2 years ago