serval-uni-lu / tabularbench
TabularBench: Adversarial robustness benchmark for tabular data
☆19 · Updated 9 months ago
Alternatives and similar repositories for tabularbench
Users interested in tabularbench are comparing it to the libraries listed below.
- ☆66 · Updated 4 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models · ☆19 · Updated 7 months ago
- Membership Inference Attacks and Defenses in Neural Network Pruning · ☆27 · Updated 3 years ago
- Code for ML Doctor · ☆90 · Updated last year
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) · ☆39 · Updated last year
- Code release for DeepJudge (S&P'22) · ☆51 · Updated 2 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) · ☆48 · Updated 3 years ago
- Code for Backdoor Attacks Against Dataset Distillation · ☆35 · Updated 2 years ago
- ☆24 · Updated 2 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" · ☆30 · Updated 3 years ago
- Code for the paper: Label-Only Membership Inference Attacks · ☆65 · Updated 4 years ago
- Machine Learning & Security Seminar @Purdue University · ☆25 · Updated 2 years ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) · ☆26 · Updated 10 months ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" · ☆86 · Updated 4 years ago
- Camouflage poisoning via machine unlearning · ☆17 · Updated 2 months ago
- ☆44 · Updated 2 years ago
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021) · ☆13 · Updated 4 years ago
- ☆23 · Updated 3 years ago
- A united toolbox for running major robustness verification approaches for DNNs [S&P 2023] · ☆90 · Updated 2 years ago
- ☆11 · Updated 4 years ago
- This repository contains code and data of the paper **On the Limitations of Continual Learning for Malware Classification**, accepted to … · ☆19 · Updated last year
- Code related to the paper "Machine Unlearning of Features and Labels" · ☆71 · Updated last year
- ☆32 · Updated last year
- ☆19 · Updated 2 years ago
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning · ☆31 · Updated last year
- ☆11 · Updated 2 years ago
- SaTML'23 paper "Backdoor Attacks on Time Series: A Generative Approach" by Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, and James Bail… · ☆20 · Updated 2 years ago
- [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… · ☆79 · Updated last year
- ☆23 · Updated last year
- ☆32 · Updated 3 years ago