XuandongZhao / pf-decoding
[ICLR 2025] Permute-and-Flip: An optimally robust and watermarkable decoder for LLMs
☆19 · Updated 5 months ago
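For readers skimming this list, a minimal sketch of what the Permute-and-Flip decoder does may help. It follows the Permute-and-Flip mechanism of McKenna & Sheldon (2020), which the paper adapts to LLM decoding; the function name and plain-numpy setup here are illustrative assumptions, not this repository's API:

```python
import numpy as np

def pf_sample(logits, temperature=1.0, rng=None):
    """Sketch of Permute-and-Flip sampling over next-token logits.

    Visit candidates in a uniformly random order and accept candidate i
    with probability exp((logit_i - max_logit) / temperature). The argmax
    is accepted with probability 1, so a single pass always returns.
    """
    rng = np.random.default_rng() if rng is None else rng
    z = np.asarray(logits, dtype=np.float64) / temperature
    z_max = z.max()
    for i in rng.permutation(len(z)):
        # "Flip": a biased coin that favors high-logit tokens.
        if rng.random() <= np.exp(z[i] - z_max):
            return int(i)
```

As I understand the paper, the watermarkable variant replaces these uniform draws with pseudorandom values keyed to the preceding tokens, so that a detector holding the key can later test for the induced bias.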
Alternatives and similar repositories for pf-decoding
Users interested in pf-decoding are comparing it to the repositories listed below.
- Official Implementation of the paper "Three Bricks to Consolidate Watermarks for LLMs" ☆48 · Updated last year
- ☆20 · Updated last year
- The repository of the paper "REEF: Representation Encoding Fingerprints for Large Language Models," aims to protect the IP of open-source… ☆62 · Updated 7 months ago
- Code and data for the paper "A Semantic Invariant Robust Watermark for Large Language Models," accepted by ICLR 2024 ☆34 · Updated 9 months ago
- Code and data for the paper "Can LLM Watermarks Robustly Prevent Unauthorized Knowledge Distillation?" (ACL 2025 Main) ☆16 · Updated 2 months ago
- ☆34 · Updated 11 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆97 · Updated last year
- [ICCV 2025] The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration R… ☆106 · Updated last month
- ☆24 · Updated 6 months ago
- Source code of the paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models," accepted by ICLR 2024 ☆35 · Updated last year
- ☆25 · Updated 6 months ago
- [ICLR 2024] Provable Robust Watermarking for AI-Generated Text ☆34 · Updated last year
- Official code implementation of SKU, accepted by ACL 2024 Findings ☆18 · Updated 8 months ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆82 · Updated 11 months ago
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks ☆30 · Updated last year
- Code for the paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder," published in NMI ☆53 · Updated last year
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆111 · Updated last year
- ☆38 · Updated last year
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 11 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆60 · Updated 11 months ago
- ☆27 · Updated last year
- Official repository of the paper "Who Wrote this Code? Watermarking for Code Generation" (ACL 2024) ☆38 · Updated last year
- ☆21 · Updated 5 months ago
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents ☆100 · Updated 6 months ago
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆82 · Updated last year
- [ACL 2024] SALAD benchmark & MD-Judge ☆158 · Updated 5 months ago
- CMD: a framework for Context-aware Model self-Detoxification (EMNLP 2024 Long Paper) ☆16 · Updated 6 months ago
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆64 · Updated 7 months ago
- ☆23 · Updated last year
- NeurIPS'24 - LLM Safety Landscape ☆28 · Updated 6 months ago