LUMIA-Group / HuRef
Official implementation for "HuRef: HUman-REadable Fingerprint for Large Language Models" (NeurIPS 2024)
☆14 · Updated 6 months ago
Alternatives and similar repositories for HuRef
Users who are interested in HuRef are comparing it to the repositories listed below:
- ☆32 · Updated last month
- Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks" ☆20 · Updated last year
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate…" ☆45 · Updated last year
- ☆46 · Updated 3 years ago
- A list of recent papers about adversarial learning ☆277 · Updated this week
- Official Implementation for "Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models" (IE… ☆27 · Updated 9 months ago
- A comprehensive toolbox for model inversion attacks and defenses that is easy to get started with ☆189 · Updated 3 months ago
- ☆27 · Updated 2 years ago
- Composite Backdoor Attacks Against Large Language Models ☆21 · Updated last year
- Up-to-date list of LLM watermarking papers ☆12 · Updated 2 years ago
- [NeurIPS 2025 D&B] BackdoorDM: A Comprehensive Benchmark for Backdoor Learning in Diffusion Model ☆23 · Updated 5 months ago
- Website & Documentation: https://sbaresearch.github.io/model-watermarking/ ☆25 · Updated 2 years ago
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models ☆262 · Updated 2 months ago
- ☆21 · Updated 3 years ago
- ☆26 · Updated last year
- This is the code repository of our submission: Understanding the Dark Side of LLMs’ Intrinsic Self-Correction ☆63 · Updated last year
- This GitHub repository collects research papers on AI security from the four top academic conferences ☆173 · Updated 7 months ago
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆31 · Updated 4 months ago
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models ☆297 · Updated last month
- ☆18 · Updated 3 years ago
- [NDSS'25] The official implementation of safety misalignment ☆17 · Updated last year
- ☆16 · Updated last year
- ☆56 · Updated last year
- ☆27 · Updated 10 months ago
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆216 · Updated last month
- The code for the paper "The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)", exploring the privacy risk o… ☆63 · Updated 11 months ago
- This is an official repository for Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study (ICCV 2023… ☆24 · Updated 2 years ago
- A curated list of papers & resources on backdoor attacks and defenses in deep learning ☆231 · Updated last year
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models" (IEEE ICASSP 2024). Demo//124.220.228.133:11107 ☆19 · Updated last year
- ☆223 · Updated 4 months ago