YitingQu / meme-evolution
☆14 · Updated last year
Alternatives and similar repositories for meme-evolution
Users interested in meme-evolution are also comparing it to the repositories listed below.
- Code and data for our paper "Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark"… ☆49 · Updated 2 years ago
- ☆82 · Updated last year
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆41 · Updated 3 months ago
- Fingerprint large language models ☆41 · Updated last year
- A curated list of trustworthy Generative AI papers. Daily updating... ☆73 · Updated last year
- Repository for Towards Codable Watermarking for Large Language Models ☆38 · Updated last year
- The official repository of the paper "The Digital Cybersecurity Expert: How Far Have We Come?" presented in IEEE S&P 2025 ☆21 · Updated 3 months ago
- ☆159 · Updated 7 months ago
- [NDSS 2025] "CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models" ☆17 · Updated 2 weeks ago
- ☆23 · Updated 7 months ago
- Seminar 2022 ☆21 · Updated last month
- ☆19 · Updated last year
- Accepted by IJCAI-24 Survey Track ☆212 · Updated last year
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆187 · Updated 6 months ago
- Hidden backdoor attack on NLP systems ☆47 · Updated 3 years ago
- Code for paper "SrcMarker: Dual-Channel Source Code Watermarking via Scalable Code Transformations" (IEEE S&P 2024) ☆29 · Updated last year
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆51 · Updated 5 months ago
- ☆35 · Updated 11 months ago
- [USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models ☆43 · Updated 7 months ago
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆102 · Updated 10 months ago
- Bad Characters: Imperceptible NLP Attacks ☆35 · Updated last year
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆185 · Updated 6 months ago
- Official implementation of [USENIX Sec'25] StruQ: Defending Against Prompt Injection with Structured Queries ☆46 · Updated last month
- Code for the paper "Rethinking Stealthiness of Backdoor Attack against NLP Models" (ACL-IJCNLP 2021) ☆23 · Updated 3 years ago
- Code for Voice Jailbreak Attacks Against GPT-4o ☆33 · Updated last year
- Code for paper "The Philosopher's Stone: Trojaning Plugins of Large Language Models" ☆21 · Updated 11 months ago
- Machine Learning & Security Seminar @Purdue University ☆25 · Updated 2 years ago
- Watermarking Text Generated by Black-Box Language Models ☆39 · Updated last year
- Official Code for ACL 2024 paper "GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis" ☆59 · Updated 10 months ago
- Code and data of the ACL 2021 paper "Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution" ☆16 · Updated 4 years ago