YitingQu / meme-evolution
☆14 · Updated last year
Alternatives and similar repositories for meme-evolution
Users who are interested in meme-evolution are comparing it to the repositories listed below.
- Code and data for our paper "Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark"… ☆49 · Updated 2 years ago
- ☆26 · Updated 9 months ago
- ☆82 · Updated 2 months ago
- The official repository of the paper "The Digital Cybersecurity Expert: How Far Have We Come?" presented in IEEE S&P 2025 ☆23 · Updated 5 months ago
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆45 · Updated 5 months ago
- [USENIX'25] HateBench: Benchmarking Hate Speech Detectors on LLM-Generated Content and Hate Campaigns ☆12 · Updated 8 months ago
- Code for the paper "SrcMarker: Dual-Channel Source Code Watermarking via Scalable Code Transformations" (IEEE S&P 2024) ☆30 · Updated last year
- Seminar 2022 ☆21 · Updated 3 weeks ago
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆52 · Updated 7 months ago
- Revisiting Character-level Adversarial Attacks for Language Models, ICML 2024 ☆19 · Updated 8 months ago
- ☆19 · Updated last year
- [ISSTA 2025] Unlocking Low Frequency Syscalls in Kernel Fuzzing with Dependency-Based RAG ☆46 · Updated last week
- Repository for "Towards Codable Watermarking for Large Language Models" ☆38 · Updated 2 years ago
- [NDSS 2025] "CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models" ☆21 · Updated 2 months ago
- Machine Learning & Security Seminar @Purdue University ☆25 · Updated 2 years ago
- ☆16 · Updated 2 years ago
- Fingerprint large language models ☆45 · Updated last year
- ☆15 · Updated last year
- Official repo for the FSE'24 paper "CodeArt: Better Code Models by Attention Regularization When Symbols Are Lacking" ☆16 · Updated 7 months ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆34 · Updated 4 months ago
- ☆18 · Updated 4 years ago
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆210 · Updated 8 months ago
- A list of recent adversarial attack and defense papers (including those on large language models) ☆43 · Updated this week
- Watermarking Text Generated by Black-Box Language Models ☆39 · Updated last year
- ☆123 · Updated last year
- Hidden backdoor attack on NLP systems ☆47 · Updated 3 years ago
- ☆17 · Updated last year
- Code for the paper "Rethinking Stealthiness of Backdoor Attack against NLP Models" (ACL-IJCNLP 2021) ☆24 · Updated 3 years ago
- This is a benchmark for evaluating the vulnerability discovery ability of automated approaches including Large Language Models (LLMs), de… ☆74 · Updated 11 months ago
- Accepted by IJCAI-24 Survey Track ☆222 · Updated last year