liangzid / LoRD-MEA
Source code of the ACL 2025 paper "Yes, My LoRD: Guiding Language Model Extraction with Locality Reinforced Distillation".
☆16 · Updated 3 months ago
Alternatives and similar repositories for LoRD-MEA
Users interested in LoRD-MEA are comparing it to the repositories listed below.
- Model Extraction (Stealing) Attacks and Defenses on Machine Learning Models Literature ☆16 · Updated 11 months ago
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR 2020) ☆32 · Updated 4 years ago
- ☆351 · Updated 2 months ago
- Code for ML Doctor ☆91 · Updated last year
- Code related to the paper "Machine Unlearning of Features and Labels" ☆71 · Updated last year
- This repo implements several algorithms for learning with differential privacy. ☆109 · Updated 2 years ago
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning" ☆42 · Updated 8 months ago
- ☆25 · Updated last year
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" ☆84 · Updated 3 years ago
- Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020) ☆21 · Updated 4 years ago
- Code for the paper "Label-Only Membership Inference Attacks" ☆66 · Updated 3 years ago
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) ☆196 · Updated 4 years ago
- ☆22 · Updated 3 years ago
- ☆45 · Updated 5 years ago
- ☆70 · Updated 3 years ago
- ☆32 · Updated last year
- Paper code ☆26 · Updated 4 years ago
- Official implementation of the USENIX Security 2024 paper "ModelGuard: Information-Theoretic Defense Against Model Extraction Attacks" ☆19 · Updated last year
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" ☆30 · Updated 3 years ago
- ☆52 · Updated 2 years ago
- A curated list of papers & resources linked to data poisoning, backdoor attacks, and defenses against them (no longer maintained) ☆269 · Updated 7 months ago
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models ☆128 · Updated last year
- Official implementation of the CVPR 2022 Oral paper "Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks" ☆26 · Updated 2 months ago
- Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearning" ☆18 · Updated last year
- GitHub repo for the AAAI 2023 paper "On the Vulnerability of Backdoor Defenses for Federated Learning" ☆40 · Updated 2 years ago
- Code and full version of the paper "Hijacking Attacks against Neural Network by Analyzing Training Data" ☆13 · Updated last year
- PyTorch implementation of backdoor unlearning. ☆21 · Updated 3 years ago
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆218 · Updated last year
- Implementation demo of the ICLR 2021 paper "Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks" ☆122 · Updated 3 years ago
- TextGuard: Provable Defense against Backdoor Attacks on Text Classification ☆12 · Updated last year