xiaogang00 / white-paper-for-large-model-security-and-privacy
The white paper which discusses the security and privacy problems of large models.
☆16 · Updated last year
Related projects:
- This is an implementation demo of the IJCAI 2022 paper [Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation … ☆17 · Updated 2 years ago
- Code and data for our paper "Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark" … ☆47 · Updated last year
- A SAT solver written in Python 3.2 using three different algorithms: DPLL, Hill Climbing, and Genetic. ☆9 · Updated 9 years ago
- TextHide: Tackling Data Privacy in Language Understanding Tasks ☆30 · Updated 3 years ago
- Code for the Findings of EMNLP 2023 paper "Multi-step Jailbreaking Privacy Attacks on ChatGPT" ☆20 · Updated 11 months ago
- Official code for the ACL 2023 paper "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid… ☆22 · Updated last year
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" ☆32 · Updated 2 months ago
- Implementation of "Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder" (Findings of EMNLP 2020) ☆15 · Updated 3 years ago
- Codebase for the paper "Adversarial Attacks on Time Series" ☆18 · Updated 5 years ago
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" (Findings of NAACL 2022) ☆26 · Updated 2 years ago
- Official implementation of the CVPR 2023 paper "Backdoor Defense via Deconfounded Representation Learning" ☆24 · Updated last year
- Towards safe LLMs with a simple yet highly effective Intention Analysis prompting method ☆10 · Updated 5 months ago
- Official repo for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022) ☆56 · Updated last year
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" ☆29 · Updated 3 months ago
- Code for the paper "Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models" (NAACL-… ☆34 · Updated 3 years ago
- Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness", published at IC… ☆26 · Updated 4 years ago
- Code for the paper "The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)", exploring the privacy risk o… ☆33 · Updated 6 months ago
- [ICML 2021] Information Obfuscation of Graph Neural Networks ☆36 · Updated 3 years ago
- Code for the paper "RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models" (EMNLP 2021) ☆20 · Updated 2 years ago
- NLP dataset: Chinese Android privacy policy dataset ☆16 · Updated 2 months ago
- Code for "Voice Jailbreak Attacks Against GPT-4o" ☆20 · Updated 3 months ago
- CIKM 2021 full paper: "FedMatch: Federated Learning Over Heterogeneous Question Answering Data" ☆12 · Updated 2 years ago
- Code and data for the paper "A Semantic Invariant Robust Watermark for Large Language Models", accepted at ICLR 2024 ☆25 · Updated 3 months ago
- A curated list of trustworthy Generative AI papers, updated daily ☆67 · Updated 2 weeks ago