serval-uni-lu / tabularbench
TabularBench: Adversarial robustness benchmark for tabular data
☆19 · Updated 7 months ago
Alternatives and similar repositories for tabularbench
Users interested in tabularbench are comparing it to the libraries listed below
- ☆66 · Updated 4 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆18 · Updated 5 months ago
- Code for ML Doctor ☆91 · Updated 11 months ago
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆28 · Updated 3 years ago
- ☆44 · Updated 2 years ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆48 · Updated 3 years ago
- Code for Backdoor Attacks Against Dataset Distillation ☆35 · Updated 2 years ago
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆39 · Updated last year
- Code related to the paper "Machine Unlearning of Features and Labels" ☆71 · Updated last year
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆26 · Updated 8 months ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆50 · Updated 2 years ago
- Code release for DeepJudge (S&P'22) ☆51 · Updated 2 years ago
- ☆11 · Updated 2 years ago
- The official implementation of the USENIX Security'23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … ☆19 · Updated 2 years ago
- Code for the paper "Label-Only Membership Inference Attacks" ☆66 · Updated 3 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Updated 5 years ago
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NIPS 2020) ☆17 · Updated 4 years ago
- ☆10 · Updated 4 years ago
- A Python library for Secure and Explainable Machine Learning ☆184 · Updated last month
- Verifying machine unlearning by backdooring ☆20 · Updated 2 years ago
- ☆65 · Updated last year
- A unified benchmark problem for data poisoning attacks ☆156 · Updated last year
- ☆21 · Updated 6 months ago
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021) ☆13 · Updated 4 years ago
- ☆32 · Updated 3 years ago
- This repo keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on … ☆98 · Updated 2 years ago
- [NDSS 2025] CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling ☆15 · Updated 6 months ago
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models ☆127 · Updated last year
- Machine Learning & Security Seminar @ Purdue University ☆25 · Updated 2 years ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆78 · Updated 2 years ago