YitingQu / meme-evolution
Related projects
Alternatives and complementary repositories for meme-evolution
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a…"
- Code and data for the paper "Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark"
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024)
- Repository for "Towards Codable Watermarking for Large Language Models"
- Bad Characters: Imperceptible NLP Attacks
- Code for the Findings of EMNLP 2023 paper "Multi-step Jailbreaking Privacy Attacks on ChatGPT"
- Code for the paper "SrcMarker: Dual-Channel Source Code Watermarking via Scalable Code Transformations" (IEEE S&P 2024)
- Accepted by the IJCAI-24 Survey Track
- Watermarking Text Generated by Black-Box Language Models
- Code for the paper "Rethinking Stealthiness of Backdoor Attack against NLP Models" (ACL-IJCNLP 2021)
- Seminar 2022
- Source code of the paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models", accepted by ICLR 2024
- Accepted by ECCV 2024
- Jailbreaking Large Vision-language Models via Typographic Visual Prompts
- Code and data for the paper "A Semantic Invariant Robust Watermark for Large Language Models", accepted by ICLR 2024
- Machine Learning & Security Seminar @ Purdue University
- AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLM
- Distribution Preserving Backdoor Attack in Self-supervised Learning
- Code for the S&P'21 paper "Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding"
- Repository for the paper "Visual Adversarial Examples Jailbreak Large Language Models" (AAAI 2024, Oral)
- Source code for MEA-Defender; the paper is accepted by IEEE S&P 2024
- Second-place solution for Track 1 of the Global AI Offense and Defense Challenge: safety vaccine injection for large-model image generation
- Official codebase for "Image Hijacks: Adversarial Images can Control Generative Models at Runtime"