weizeming / ICML-2024-SAM-AT
☆25 · Updated last year
Alternatives and similar repositories for ICML-2024-SAM-AT
Users interested in ICML-2024-SAM-AT are comparing it to the repositories listed below.
- Certified robustness "for free" using off-the-shelf diffusion models and classifiers ☆44 · Updated 2 years ago
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] ☆54 · Updated 2 years ago
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆81 · Updated last year
- ☆19 · Updated 2 years ago
- ☆14 · Updated 2 years ago
- ☆24 · Updated last year
- Code for the paper "Better Diffusion Models Further Improve Adversarial Training" (ICML 2023) ☆146 · Updated 2 years ago
- [NeurIPS 2023] Differentially Private Image Classification by Learning Priors from Random Processes ☆12 · Updated 2 years ago
- ☆33 · Updated 7 months ago
- Official implementation of "When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture" published at Neur… ☆37 · Updated last year
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆27 · Updated last year
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning ☆10 · Updated last year
- [NeurIPS 2023] Code for the paper "Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threa… ☆39 · Updated last year
- [CVPR 2024] Official implementation of our paper "Revisiting Adversarial Training at Scale" ☆20 · Updated last year
- Code for the paper "Boosting Accuracy and Robustness of Student Models via Adaptive Adversarial Distillation" (CVPR 2023) ☆33 · Updated 2 years ago
- GitHub repo for the NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆24 · Updated last week
- ☆53 · Updated 2 years ago
- ICML 2024 paper "Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies" ☆17 · Updated last year
- [SaTML 2024] Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk ☆15 · Updated 9 months ago
- ☆33 · Updated 3 years ago
- ☆23 · Updated 2 years ago
- ☆89 · Updated 2 years ago
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆58 · Updated 11 months ago
- [ICLR 2024 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation… ☆141 · Updated 7 months ago
- [CVPR 2023] "Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations" by Lei Hsi… ☆24 · Updated 3 months ago
- [ICLR 2025] "Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond" ☆14 · Updated 10 months ago
- Code for the paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" ☆66 · Updated last year
- PDM-based Purifier ☆22 · Updated last year
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆51 · Updated last week
- Implementation of "Robust Weight Perturbation for Adversarial Training" (IJCAI 2022) ☆16 · Updated 3 years ago