serval-uni-lu / tabularbench
TabularBench: Adversarial robustness benchmark for tabular data
☆13 · Updated last month
Alternatives and similar repositories for tabularbench:
Users interested in tabularbench are comparing it to the libraries listed below.
- ☆43 · Updated last year
- Repo for the research paper "Aligning LLMs to Be Robust Against Prompt Injection" ☆32 · Updated last month
- Privacy backdoors ☆51 · Updated 8 months ago
- ☆14 · Updated this week
- Official code for the FAccT'21 paper "Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning" https://arxiv.org/abs… ☆12 · Updated 3 years ago
- [USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models ☆31 · Updated last week
- Private Evolution: Generating DP Synthetic Data without Training [ICLR 2024, ICML 2024] ☆84 · Updated this week
- ModelDiff: A Framework for Comparing Learning Algorithms ☆54 · Updated last year
- ☆15 · Updated last month
- Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning (NeurIPS 2021) ☆8 · Updated 3 years ago
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆48 · Updated 5 months ago
- Codebase for information-theoretic Shapley values to explain predictive uncertainty. This repo contains the code related to the paper Watso… ☆19 · Updated 6 months ago
- Official implementation of [USENIX Sec'25] StruQ: Defending Against Prompt Injection with Structured Queries ☆25 · Updated last month
- ☆33 · Updated last year
- Computationally friendly hyper-parameter search with DP-SGD ☆23 · Updated last week
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆19 · Updated 9 months ago
- Recycling diverse models ☆44 · Updated 2 years ago
- Code for Voice Jailbreak Attacks Against GPT-4o ☆27 · Updated 7 months ago
- The official repository of the paper "On the Exploitability of Instruction Tuning" ☆58 · Updated 11 months ago
- ☆23 · Updated last year
- UnifiedUncertaintyCalibration ☆11 · Updated last year
- ☆22 · Updated 2 years ago
- ☆39 · Updated last year
- Code relative to "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆18 · Updated 2 years ago
- ☆32 · Updated last year
- ☆17 · Updated 2 years ago
- Fluent student-teacher redteaming ☆19 · Updated 5 months ago
- ☆11 · Updated 2 years ago
- Robust Principles: Architectural Design Principles for Adversarially Robust CNNs ☆21 · Updated last year
- ☆31 · Updated last year