qingjiesjtu / USC
This is the code repository of our submission: Understanding the Dark Side of LLMs’ Intrinsic Self-Correction.
☆58 · Updated 7 months ago
Alternatives and similar repositories for USC
Users interested in USC are comparing it to the repositories listed below:
- Official Code for ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users (NeurIPS 2024) · ☆16 · Updated 9 months ago
- ☆12 · Updated last week
- ☆24 · Updated 2 years ago
- A toolbox for backdoor attacks. · ☆22 · Updated 2 years ago
- A curated list of trustworthy Generative AI papers. Daily updating... · ☆73 · Updated 11 months ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) · ☆34 · Updated last month
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" · ☆58 · Updated 2 years ago
- ☆82 · Updated 4 years ago
- Repository for "Towards Codable Watermarking for Large Language Models" · ☆38 · Updated last year
- Simulator. · ☆103 · Updated 3 months ago
- ☆31 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- Official Implementation for "Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models" (IE… · ☆20 · Updated 4 months ago
- ☆30 · Updated last year
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion · ☆50 · Updated 9 months ago
- ☆58 · Updated 2 months ago
- ☆32 · Updated 9 months ago
- ☆102 · Updated last year
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning · ☆10 · Updated 9 months ago
- ☆18 · Updated 10 months ago
- ☆13 · Updated last year
- The official implementation of the USENIX Security '23 paper "Meta-Sift" -- ten minutes or less to find a 1000-size or larger clean subset on … · ☆19 · Updated 2 years ago
- Anti-Backdoor Learning (NeurIPS 2021) · ☆82 · Updated 2 years ago
- ☆20 · Updated last year
- Official code for "Baseline Defenses for Adversarial Attacks Against Aligned Language Models" · ☆25 · Updated last year
- [NDSS '25] The official implementation of safety misalignment. · ☆16 · Updated 7 months ago
- A list of recent papers on adversarial learning · ☆193 · Updated this week
- Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models · ☆22 · Updated 4 months ago
- ☆21 · Updated 2 years ago
- [USENIX '24] Prompt Stealing Attacks Against Text-to-Image Generation Models · ☆43 · Updated 6 months ago