EleutherAI / radioactive-lab
Adapting the "Radioactive Data" paper to work for text models
☆12 · Updated 5 years ago
Alternatives and similar repositories for radioactive-lab
Users interested in radioactive-lab are comparing it to the repositories listed below.
- ☆16 · Updated 4 years ago
- Code for our S&P '21 paper "Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding" ☆53 · Updated 3 years ago
- The official implementation of the IEEE S&P '22 paper "SoK: How Robust is Deep Neural Network Image Classification Watermarking" ☆117 · Updated 2 years ago
- A unified benchmark problem for data poisoning attacks ☆161 · Updated 2 years ago
- Watermarking Deep Neural Networks (USENIX 2018) ☆101 · Updated 5 years ago
- ☆18 · Updated 4 years ago
- The official implementation of our paper "Black-box Dataset Ownership Verification via Backdoor Watermarking" ☆26 · Updated 2 years ago
- The implementation of our paper "Open-sourced Dataset Protection via Backdoor Watermarking", accepted by the NeurIPS Workshop on … ☆23 · Updated 4 years ago
- Official repository for the CVPR 2021 Data-Free Model Extraction paper. https://arxiv.org/abs/2011.14779 ☆75 · Updated last year
- Code for identifying natural backdoors in existing image datasets ☆15 · Updated 3 years ago
- ☆45 · Updated 2 years ago
- ☆16 · Updated 3 years ago
- Protect your machine learning models easily and securely with watermarking 🔑 ☆97 · Updated last year
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆111 · Updated last year
- The implementation of the paper "How to Prove Your Model Belongs to You: A Blind-Watermark based Framework to Protect Intellectual Property of DNN… ☆25 · Updated 5 years ago
- Implementation of IEEE TNNLS 2023 and Elsevier PR 2023 papers on backdoor watermarking for deep classification models with unambiguity an… ☆19 · Updated 2 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" ☆31 · Updated 4 years ago
- ☆69 · Updated last year
- Craft poisoned data using MetaPoison ☆54 · Updated 4 years ago
- ☆88 · Updated 5 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆53 · Updated 3 years ago
- Defending against Model Stealing via Verifying Embedded External Features ☆38 · Updated 3 years ago
- ☆50 · Updated 4 years ago
- ☆19 · Updated 3 years ago
- Implementation of "Adversarial Frontier Stitching for Remote Neural Network Watermarking" in TensorFlow ☆24 · Updated 4 years ago
- ☆68 · Updated 5 years ago
- [AAAI 2024] Data-Free Hard-Label Robustness Stealing Attack ☆14 · Updated last year
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆29 · Updated 5 years ago
- Attacking a dog-vs-fish classifier that uses transfer learning with InceptionV3 ☆73 · Updated 7 years ago
- Reference implementation of the PRADA model-stealing defense (IEEE Euro S&P 2019) ☆35 · Updated 6 years ago