THU-BPM / Watermark-Radioactivity-Attack
Code and data for paper "Can LLM Watermarks Robustly Prevent Unauthorized Knowledge Distillation?".
⭐ 14 · Updated 2 months ago
Alternatives and similar repositories for Watermark-Radioactivity-Attack
Users interested in Watermark-Radioactivity-Attack are comparing it to the repositories listed below.
- ⭐ 18 · Updated last year
- 🔥🔥🔥 Breaking long thought processes of o1-like LLMs, such as DeepSeek-R1, QwQ · ⭐ 29 · Updated 2 months ago
- Code and data for paper "A Semantic Invariant Robust Watermark for Large Language Models" accepted by ICLR 2024 · ⭐ 30 · Updated 6 months ago
- ⭐ 38 · Updated 9 months ago
- Official repository of the paper: Who Wrote this Code? Watermarking for Code Generation (ACL 2024) · ⭐ 34 · Updated 11 months ago
- This is the code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" · ⭐ 36 · Updated 6 months ago
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization · ⭐ 22 · Updated 10 months ago
- Source code of paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models" accepted by ICLR 2024 · ⭐ 33 · Updated 11 months ago
- ⭐ 21 · Updated 2 months ago
- [ICML 2024] Adaptive Text Watermark for Large Language Models · ⭐ 19 · Updated 5 months ago
- Official repo for EMNLP'24 paper "SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning" · ⭐ 25 · Updated 7 months ago
- ⭐ 41 · Updated last month
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning · ⭐ 9 · Updated 6 months ago
- Code and data for paper "Can Watermarked LLMs be Identified by Users via Crafted Prompts?" accepted by ICLR 2025 (Spotlight) · ⭐ 20 · Updated 4 months ago
- Repository for Towards Codable Watermarking for Large Language Models · ⭐ 36 · Updated last year
- Official Code for "Baseline Defenses for Adversarial Attacks Against Aligned Language Models" · ⭐ 23 · Updated last year
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) · ⭐ 42 · Updated 5 months ago
- ⭐ 20 · Updated 5 months ago
- multi-bit language model watermarking (NAACL 24) · ⭐ 13 · Updated 7 months ago
- ⭐ 18 · Updated last month
- [ICLR 2024] Provable Robust Watermarking for AI-Generated Text · ⭐ 32 · Updated last year
- Code for paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" · ⭐ 53 · Updated last year
- This repo collects work on the safety topic, including attacks, defenses, and studies related to reasoning and RL · ⭐ 18 · Updated this week
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … · ⭐ 63 · Updated this week
- Code repo of our paper Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis (https://arxiv.org/abs/2406.10794…) · ⭐ 19 · Updated 9 months ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) · ⭐ 74 · Updated 7 months ago
- Code for paper: PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models, IEEE ICASSP 2024. Demo//124.220.228.133:11107 · ⭐ 17 · Updated 9 months ago
- ⭐ 24 · Updated 3 months ago
- Code for NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" · ⭐ 46 · Updated 4 months ago
- ⭐ 58 · Updated 10 months ago