☆44 · Apr 25, 2023 · Updated 2 years ago
Alternatives and similar repositories for MLHospital
Users that are interested in MLHospital are comparing it to the libraries listed below
- ☆15 · Apr 7, 2023 · Updated 2 years ago
- Code for ML Doctor ☆92 · Aug 14, 2024 · Updated last year
- ☆163 · Jan 24, 2025 · Updated last year
- ☆20 · Oct 28, 2025 · Updated 4 months ago
- A reproduction of the Neural Cleanse paper (published at Oakland); it really is simple yet effective. ☆33 · May 25, 2021 · Updated 4 years ago
- A Python script to generate a nice BibTeX file for LaTeX. ☆18 · Mar 1, 2020 · Updated 6 years ago
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" …☆12Sep 6, 2023Updated 2 years ago
- ☆10Oct 13, 2022Updated 3 years ago
- This repository contains the implementation of DPMLBench: Holistic Evaluation of Differentially Private Machine Learning☆11Nov 24, 2023Updated 2 years ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight)☆27Nov 18, 2024Updated last year
- Tool to perform differential fault analysis attack (DFA) on whiteboxes with external encodings.☆16Feb 10, 2023Updated 3 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022)☆48Aug 18, 2022Updated 3 years ago
- Adversarial Augmentation Against Adversarial Attacks☆32May 23, 2023Updated 2 years ago
- Source code for ECCV 2022 Poster: Data-free Backdoor Removal based on Channel Lipschitzness☆35Jan 9, 2023Updated 3 years ago
- ☆60Mar 9, 2023Updated 2 years ago
- PyTorch implementation of our ICLR 2023 paper titled "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?".☆12Mar 13, 2023Updated 2 years ago
- ☆15Aug 29, 2023Updated 2 years ago
- Code for the paper: Label-Only Membership Inference Attacks☆68Sep 11, 2021Updated 4 years ago
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models☆133Apr 9, 2024Updated last year
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging☆16Oct 14, 2024Updated last year
- Single Image Backdoor Inversion via Robust Smoothed Classifiers☆17Jul 18, 2023Updated 2 years ago
- Modular Adversarial Robustness Toolkit☆21Jul 11, 2025Updated 7 months ago
- ☆19Mar 26, 2022Updated 3 years ago
- This is the repository for our paper published at ICML24. ☆11 · Jun 11, 2025 · Updated 8 months ago
- Code for Backdoor Attacks Against Dataset Distillation ☆37 · Apr 19, 2023 · Updated 2 years ago
- competition ☆17 · Aug 1, 2020 · Updated 5 years ago
- An awesome list of papers on privacy attacks against machine learning ☆634 · Mar 18, 2024 · Updated last year
- [IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the vict… ☆44 · Feb 18, 2025 · Updated last year
- Code for our ICLR 2023 paper Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples. ☆18 · May 31, 2023 · Updated 2 years ago
- This is the code for semi-supervised robust training (SRT). ☆18 · Mar 24, 2023 · Updated 2 years ago
- A curated list of academic events on AI Security & Privacy ☆168 · Aug 22, 2024 · Updated last year
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" ☆39 · Jan 28, 2019 · Updated 7 years ago
- ☆48 · Feb 8, 2025 · Updated last year
- Code Implementation for Traceback of Data Poisoning Attacks in Neural Networks ☆20 · Aug 15, 2022 · Updated 3 years ago
- ☆128 · Sep 25, 2025 · Updated 5 months ago
- A list of papers in NeurIPS 2022 related to adversarial attack and defense / AI security. ☆75 · Dec 5, 2022 · Updated 3 years ago
- This technique modifies image data so that any model trained on it will bear an identifiable mark. ☆44 · Aug 13, 2021 · Updated 4 years ago
- [ICLR 2022] Official repository for "Robust Unlearnable Examples: Protecting Data Against Adversarial Learning" ☆49 · Jul 20, 2024 · Updated last year
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆74 · Aug 5, 2021 · Updated 4 years ago