houshd / TS_Adv
Codebase for the paper "Adversarial Attacks on Time Series"
☆19 · Updated 6 years ago
Alternatives and similar repositories for TS_Adv
Users interested in TS_Adv are comparing it to the repositories listed below.
- Codebase for the paper "Adversarial Attacks on Time Series" ☆22 · Updated 6 years ago
- A general method for training a cost-sensitive robust classifier ☆22 · Updated 5 years ago
- Implementation of the paper "Transferring Robustness for Graph Neural Network Against Poisoning Attacks" ☆20 · Updated 5 years ago
- [ICML 2021] Information Obfuscation of Graph Neural Networks ☆36 · Updated 3 years ago
- ☆10 · Updated last year
- Code for the paper "Fair k-Means Clustering" ☆13 · Updated 3 years ago
- Implementation of Adversarial Privacy Graph Embedding in TensorFlow ☆19 · Updated 4 years ago
- Implementation of the paper "Certifiable Robustness and Robust Training for Graph Convolutional Networks" ☆43 · Updated 4 years ago
- Adversarial learning by utilizing model interpretation ☆10 · Updated 6 years ago
- TextHide: Tackling Data Privacy in Language Understanding Tasks ☆31 · Updated 4 years ago
- PyTorch code for the KDD 2018 paper "Towards Explanation of DNN-based Prediction with Guided Feature Inversion" ☆21 · Updated 6 years ago
- [ICML 2019, 20-min long talk] Robust Decision Trees Against Adversarial Examples ☆67 · Updated 2 years ago
- An (imperfect) implementation of wide ResNets and Parseval regularization ☆9 · Updated 5 years ago
- Related material on Federated Learning ☆26 · Updated 5 years ago
- ☆22 · Updated 2 years ago
- DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation ☆16 · Updated 4 years ago
- Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the…" ☆55 · Updated 2 years ago
- Code for the paper "Adversarial Label-Flipping Attack and Defense for Graph Neural Networks" (ICDM 2020) ☆18 · Updated 4 years ago
- Poisoning Deep Learning-based Recommender Models in Federated Learning Scenarios ☆17 · Updated 3 years ago
- Code for "Differential Privacy Has Disparate Impact on Model Accuracy" (NeurIPS 2019) ☆34 · Updated 3 years ago
- ☆23 · Updated 6 years ago
- ☆26 · Updated 6 years ago
- Research prototype of deletion-efficient k-means algorithms ☆23 · Updated 5 years ago
- Adversarial Attacks on Node Embeddings via Graph Poisoning ☆59 · Updated 5 years ago
- Code for the paper "Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation" by Alexander Levine and Soheil Feizi ☆10 · Updated 2 years ago
- ☆32 · Updated 7 years ago
- ☆12 · Updated 5 years ago
- Locally Private Graph Neural Networks (ACM CCS 2021) ☆46 · Updated last year
- Python package to create adversarial agents for membership inference attacks against machine learning models ☆47 · Updated 6 years ago
- Code for Double-Blind Collaborative Learning (DBCL) ☆14 · Updated 4 years ago