VITA-Group / Robust_Weight_Signatures
[ICML 2023] "Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights?" by Ruisi Cai, Zhenyu Zhang, Zhangyang Wang
☆16 · Updated 2 years ago
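The title's premise is that robustness can be grafted onto a standard model by adding a pre-computed weight-space patch (a "robust weight signature"). The sketch below illustrates that general idea only; `compute_signature`, `apply_signature`, and the checkpoint file names are hypothetical and are not the repository's actual API.

```python
# Minimal, hypothetical sketch of "patching weights" for robustness (PyTorch).
# Assumption: a robust weight signature is the element-wise difference between
# a robustly trained checkpoint and its standard counterpart; this is NOT the
# repository's actual API.
import torch


def compute_signature(robust_state: dict, standard_state: dict) -> dict:
    """Signature = robust weights minus standard weights (floating-point tensors only)."""
    return {
        name: robust_state[name] - standard_state[name]
        for name in standard_state
        if name in robust_state and torch.is_floating_point(standard_state[name])
    }


def apply_signature(model: torch.nn.Module, signature: dict, scale: float = 1.0) -> None:
    """Patch a standard model in place by adding the (optionally scaled) signature."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in signature:
                param.add_(scale * signature[name].to(param))


# Hypothetical usage: patch robustness into a clean model.
# signature = compute_signature(torch.load("robust.pt"), torch.load("standard.pt"))
# apply_signature(model, signature, scale=1.0)
```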
Alternatives and similar repositories for Robust_Weight_Signatures
Users interested in Robust_Weight_Signatures are comparing it to the repositories listed below.
- Codebase for decoding compressed trust · ☆23 · Updated last year
- EMNLP 2024: Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue · ☆35 · Updated 6 months ago
- Host CIFAR-10.2 Data Set · ☆13 · Updated 3 years ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning · ☆93 · Updated 11 months ago
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) · ☆61 · Updated 4 months ago
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models · ☆17 · Updated 9 months ago
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models · ☆53 · Updated last year
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… · ☆16 · Updated 3 weeks ago
- Code for "Universal Adversarial Triggers Are Not Universal."☆17Updated last year
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs"☆80Updated last year
- OODRobustBench: a Benchmark and Large-Scale Analysis of Adversarial Robustness under Distribution Shift. ICML 2024 and ICLRW-DMLR 2024☆20Updated 9 months ago
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks☆25Updated 10 months ago
- Code for safety test in "Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates"☆18Updated last year
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] · ☆20 · Updated last year