ZJUICSR / AIcert
☆220Updated 11 months ago
Alternatives and similar repositories for AIcert:
Users who are interested in AIcert are comparing it to the repositories listed below
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate…☆33Updated 5 months ago
- This GitHub repository collects research papers on AI security from the four top academic conferences.☆112Updated this week
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024)☆33Updated last week
- A curated list of Machine Learning Security & Privacy papers published in security top-4 conferences (IEEE S&P, ACM CCS, USENIX Security…☆260Updated 4 months ago
- A curated list of papers & resources on backdoor attacks and defenses in deep learning.☆201Updated last year
- Invisible Backdoor Attack with Sample-Specific Triggers☆94Updated 2 years ago
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021)☆124Updated 5 months ago
- A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained)☆252Updated 3 months ago
- ☆20Updated 8 months ago
- Simple PyTorch implementations of BadNets on MNIST and CIFAR10 (see the illustrative sketch after this list).☆173Updated 2 years ago
- ☆486Updated 3 weeks ago
- Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks"☆17Updated 4 months ago
- ☆25Updated 2 weeks ago
- [IEEE S&P'24] ODSCAN: Backdoor Scanning for Object Detection Models☆17Updated 3 months ago
- [TDSC 2024] Official code for our paper "FedTracker: Furnishing Ownership Verification and Traceability for Federated Learning Model"☆15Updated 4 months ago
- BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models☆134Updated 2 months ago
- ☆51Updated 3 months ago
- This is the code repository of our submission: Understanding the Dark Side of LLMs’ Intrinsic Self-Correction.☆56Updated 4 months ago
- LaTeX template for Wuhan University undergraduate graduation theses, 2025 School of Cyber Science and Engineering edition☆22Updated this week
- TransferAttack is a PyTorch framework for boosting adversarial transferability in image classification.☆349Updated 3 months ago
- Source code for Data-free Backdoor; the paper was accepted at the 32nd USENIX Security Symposium (USENIX Security 2023).☆30Updated last year
- ☆30Updated 6 months ago
- Safety at Scale: A Comprehensive Survey of Large Model Safety☆149Updated 2 months ago
- ☆81Updated 3 years ago
- The official implementation of the IEEE S&P'22 paper "SoK: How Robust is Deep Neural Network Image Classification Watermarking".☆115Updated last year
- Composite Backdoor Attacks Against Large Language Models☆13Updated last year
- ☆18Updated 2 years ago
- MASTERKEY is a framework designed to explore and exploit vulnerabilities in large language model chatbots by automating jailbreak attacks…☆20Updated 7 months ago
- ☆14Updated last year
- A reproduction of the Neural Cleanse paper; simple yet effective. The original paper appeared at Oakland (IEEE S&P).☆30Updated 3 years ago
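The BadNets entry above describes trigger-based data poisoning. As a rough illustration of that style of attack, the sketch below stamps a small white-square trigger onto a fraction of a training batch and relabels those samples to an attacker-chosen class. This is not code from any repository in the list; the function name `poison_batch`, the trigger size, the target label, and the poisoning rate are all illustrative assumptions.

```python
# Minimal, hypothetical sketch of BadNets-style data poisoning (illustrative
# only, not taken from any repository listed above).
import torch
from torchvision import datasets, transforms

def poison_batch(images, labels, target_label=0, poison_rate=0.1, trigger_size=3):
    """Stamp a white square trigger onto `poison_rate` of the batch and set
    those labels to `target_label`. All names and defaults are illustrative."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(poison_rate * images.size(0))
    if n_poison == 0:
        return images, labels
    idx = torch.randperm(images.size(0))[:n_poison]
    # Place the trigger in the bottom-right corner of each selected image.
    images[idx, :, -trigger_size:, -trigger_size:] = 1.0
    labels[idx] = target_label
    return images, labels

if __name__ == "__main__":
    # Example on MNIST; assumes torchvision can download the dataset.
    dataset = datasets.MNIST(root="./data", train=True, download=True,
                             transform=transforms.ToTensor())
    loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)
    images, labels = next(iter(loader))
    poisoned_images, poisoned_labels = poison_batch(images, labels)
    print(poisoned_images.shape, poisoned_labels[:10])
```

A model trained on such a mixed clean/poisoned set typically learns to associate the trigger with the target class while retaining normal accuracy on clean inputs, which is the behaviour the backdoor-defense repositories above aim to detect.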