YitingQu / meme-evolution (☆14, updated last year)
Alternatives and similar repositories for meme-evolution
Users interested in meme-evolution are comparing it to the repositories listed below.
- Code and data for our paper "Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark"… (☆52, updated 2 years ago)
- (☆86, updated 4 months ago)
- [USENIX'25] HateBench: Benchmarking Hate Speech Detectors on LLM-Generated Content and Hate Campaigns (☆13, updated 10 months ago)
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access (☆50, updated 7 months ago)
- Code for the paper "SrcMarker: Dual-Channel Source Code Watermarking via Scalable Code Transformations" (IEEE S&P 2024) (☆33, updated last year)
- (☆161, updated 11 months ago)
- Seminar 2022 (☆23, updated this week)
- (☆19, updated last year)
- Revisiting Character-level Adversarial Attacks for Language Models, ICML 2024 (☆19, updated 10 months ago)
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… (☆55, updated 9 months ago)
- Machine Learning & Security Seminar @Purdue University (☆25, updated 2 years ago)
- Repository for Towards Codable Watermarking for Large Language Models (☆38, updated 2 years ago)
- (☆30, updated 11 months ago)
- Hidden backdoor attack on NLP systems (☆47, updated 4 years ago)
- Code for the paper "Rethinking Stealthiness of Backdoor Attack against NLP Models" (ACL-IJCNLP 2021) (☆24, updated 4 years ago)
- Fingerprint large language models (☆47, updated last year)
- Code for Voice Jailbreak Attacks Against GPT-4o (☆36, updated last year)
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) (☆34, updated 6 months ago)
- The official repository of the paper "The Digital Cybersecurity Expert: How Far Have We Come?" presented at IEEE S&P 2025 (☆23, updated 7 months ago)
- Safety at Scale: A Comprehensive Survey of Large Model Safety (☆216, updated last month)
- Source code of the paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models," accepted at ICLR 2024 (☆34, updated last year)
- Code and data of the ACL 2021 paper "Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution" (☆16, updated 4 years ago)
- Watermarking Text Generated by Black-Box Language Models (☆40, updated 2 years ago)
- A curated list of trustworthy Generative AI papers, updated daily (☆75, updated last year)
- A list of recent adversarial attack and defense papers, including those on large language models (☆45, updated last week)
- (☆13, updated last year)
- [AAAI 2024] DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models (☆12, updated last year)
- Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger" (☆43, updated 3 years ago)
- Code for the paper "The Philosopher's Stone: Trojaning Plugins of Large Language Models" (☆25, updated last year)
- [NDSS 2025] "CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models" (☆23, updated 4 months ago)