inspire-group / tta_risk
☆11 · Updated 2 years ago
Alternatives and similar repositories for tta_risk
Users interested in tta_risk are comparing it to the repositories listed below.
- Code for CVPR 2023 Robust Generalization against Photon-Limited Corruptions via Worst-Case Sharpness Minimization ☆13 · Updated 2 years ago
- ☆13 · Updated 5 months ago
- [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆76 · Updated last year
- CVPR 2025 - R-TPT: Improving Adversarial Robustness of Vision-Language Models through Test-Time Prompt Tuning ☆11 · Updated 2 months ago
- This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspec… ☆22 · Updated last year
- Repository for the paper: Refusing Safe Prompts for Multi-modal Large Language Models ☆17 · Updated 9 months ago
- ☆43 · Updated 2 years ago
- ICCV 2023 - AdaptGuard: Defending Against Universal Attacks for Model Adaptation ☆11 · Updated last year
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆26 · Updated 7 months ago
- ☆86 · Updated 2 years ago
- Code for ICLR 2025 Failures to Find Transferable Image Jailbreaks Between Vision-Language Models ☆30 · Updated last month
- [ICLR 2024] Towards Eliminating Hard Label Constraints in Gradient Inversion Attacks ☆13 · Updated last year
- GitHub repo for One-shot Neural Backdoor Erasing via Adversarial Weight Masking (NeurIPS 2022) ☆15 · Updated 2 years ago
- This is the code of ICLR 2022 Oral paper 'Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Au… ☆30 · Updated last year
- [NeurIPS 2021] "When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?" ☆48 · Updated 3 years ago
- [CVPR 2025] Official Repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment ☆19 · Updated last month
- Implementation for "Robust Weight Perturbation for Adversarial Training" in IJCAI'22 ☆14 · Updated 3 years ago
- The official implementation of USENIX Security'23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … ☆19 · Updated 2 years ago
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning ☆16 · Updated last year
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) ☆30 · Updated 3 months ago
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning ☆31 · Updated last year
- Code repository for CVPR 2024 paper "Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness" ☆21 · Updated last year
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆38 · Updated last year
- GitHub repo for NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆15 · Updated 9 months ago
- [ICLR 2023] Official repository of the paper "Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning" ☆17 · Updated 2 years ago
- ☆19 · Updated 4 months ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆36 · Updated 8 months ago
- [NeurIPS 2023] Differentially Private Image Classification by Learning Priors from Random Processes ☆12 · Updated 2 years ago
- ☆46 · Updated last year
- [ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks ☆19 · Updated 3 months ago