Jielin-Qiu / MMWatermark-Robustness
Evaluating Durability: Benchmark Insights into Multimodal Watermarking
☆12 · Updated last year
Alternatives and similar repositories for MMWatermark-Robustness
Users interested in MMWatermark-Robustness are comparing it to the repositories listed below.
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆37 · Updated last year
- [SatML 2024] Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk ☆16 · Updated 9 months ago
- Official implementation of "Avoiding Spurious Correlations via Logit Correction" ☆17 · Updated 2 years ago
- [ECCV 2024] Official PyTorch implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆84 · Updated 2 years ago
- [DMLR 2024] Benchmarking Robustness of Multimodal Image-Text Models under Distribution Shift ☆38 · Updated last year
- Repository for research works and resources related to model reprogramming <https://arxiv.org/abs/2202.10629> ☆64 · Updated 3 months ago
- Representation Surgery for Multi-Task Model Merging (ICML 2024) ☆47 · Updated last year
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆60 · Updated last year
- [ICLR 2025] Official codebase for the paper "Multimodal Situational Safety" ☆30 · Updated 6 months ago
- Code for the paper "Out-of-Domain Robustness via Targeted Augmentations" ☆14 · Updated 2 years ago
- With respect to the input tensor instead of the parameters of the NN ☆21 · Updated 3 years ago
- [NeurIPS 2025] What Makes a Reward Model a Good Teacher? An Optimization Perspective ☆41 · Updated 3 months ago
- ☆65 · Updated 3 months ago
- ☆27 · Updated last year
- Attack AlphaZero Go agents (NeurIPS 2022) ☆22 · Updated 3 years ago
- [ICML 2023] "Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights?" by Ruisi Cai, Zhenyu Zhang, Zhangyang Wang ☆16 · Updated 2 years ago
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆58 · Updated 11 months ago
- Certified Patch Robustness via Smoothed Vision Transformers ☆42 · Updated 4 years ago
- ☆23 · Updated 6 months ago
- ☆27 · Updated 10 months ago
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" ☆20 · Updated 2 years ago
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … ☆57 · Updated 4 months ago
- [ICLR 2024] "Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality" by Xuxi Chen*, Yu Yang*, Zhangyang Wang, Baha… ☆15 · Updated last year
- GitHub repo for the NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆25 · Updated 3 weeks ago
- [ICLR 2025 Spotlight] Advantage-Guided Distillation for Preference Alignment in Small Language Models ☆24 · Updated 11 months ago
- ☆21 · Updated last year
- LISA (ICML 2022) ☆52 · Updated 2 years ago
- Official PyTorch implementation of "CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning" @ ICCV 2023 ☆39 · Updated 2 months ago
- Code and data for "ImgTrojan: Jailbreaking Vision-Language Models with ONE Image" ☆24 · Updated 9 months ago
- This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspec… ☆23 · Updated 2 years ago