haowang02 / TransTroj
[WWW '25] Model Supply Chain Poisoning: Backdooring Pre-trained Models via Embedding Indistinguishability
☆16 · Updated 5 months ago
Alternatives and similar repositories for TransTroj
Users interested in TransTroj are comparing it to the repositories listed below.
- ☆25 · Updated 2 years ago
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate… ☆45 · Updated last year
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆221 · Updated last year
- A reproduction of the Neural Cleanse paper; it really is simple and effective. Posted on okaland. ☆32 · Updated 4 years ago
- A toolbox for backdoor attacks. ☆22 · Updated 2 years ago
- ☆68 · Updated 5 months ago
- Official implementation of (CVPR 2022 Oral) Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks. ☆26 · Updated 4 months ago
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆129 · Updated 11 months ago
- [NDSS'25] The official implementation of safety misalignment. ☆17 · Updated 10 months ago
- ☆20 · Updated 3 years ago
- Distribution Preserving Backdoor Attack in Self-supervised Learning ☆20 · Updated last year
- Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks" ☆19 · Updated 10 months ago
- A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained) ☆278 · Updated 9 months ago
- ☆36 · Updated last year
- ☆53 · Updated last year
- This is an official repository for Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study (ICCV2023… ☆24 · Updated 2 years ago
- Accepted by CVPR 2025 (highlight) ☆20 · Updated 5 months ago
- ☆15 · Updated last year
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆40 · Updated last year
- ☆67 · Updated 10 months ago
- This is the code repository of our submission: Understanding the Dark Side of LLMs’ Intrinsic Self-Correction. ☆63 · Updated 10 months ago
- Composite Backdoor Attacks Against Large Language Models ☆20 · Updated last year
- Code and full version of the paper "Hijacking Attacks against Neural Network by Analyzing Training Data" ☆14 · Updated last year
- Invisible Backdoor Attack with Sample-Specific Triggers ☆102 · Updated 3 years ago
- [CVPR 2023] The official implementation of our CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust… ☆23 · Updated 2 years ago
- Official PyTorch implementation of Towards Adversarial Attack on Vision-Language Pre-training Models ☆63 · Updated 2 years ago
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models" ☆11 · Updated 3 months ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆34 · Updated 4 months ago
- Codes for NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆58 · Updated 2 years ago
- ☆15 · Updated last year