nikosgalanis / data-poisoning-defense-fl
Novel algorithm for defending against Data Poisoning Attacks in a Federated Learning scenario
★22 · Updated last year
Alternatives and similar repositories for data-poisoning-defense-fl
Users interested in data-poisoning-defense-fl are comparing it to the libraries listed below.
- Bachelor's Thesis on Adversarial Machine Learning Attacks and Defences ★16 · Updated 2 years ago
- Creating, Analyzing and Testing Differential Privacy Protocols, aiming at Data Protection and Anonymization. ★17 · Updated 3 years ago
- Bachelor's Thesis on Membership Inference Attacks ★12 · Updated 2 years ago
- Stock-Manager is an app, developed in Java, that helps the user organise their stocks in one place and keep track of their earnings through … ★9 · Updated 5 years ago
- The project aims to evaluate the vulnerability of Federated Learning systems to a targeted data poisoning attack known as Label Flipping At… ★18 · Updated 3 years ago
- A Generic Complete Binary Tree implementation, with O(1) Amortized Complexity in Insertion & O(1) Complexity in Removal of last node and… ★2 · Updated 4 years ago
- Efficient Parallel code in MPI, MPI+OpenMP and CUDA for Game of Life ★15 · Updated 3 years ago
- A fully functional Data Mining project based on movies and shows from Netflix. ★13 · Updated 3 years ago
- Set of assignments created for the course System Programming, aiming to familiarize students with more complicated use cases of the C language. ★9 · Updated 4 years ago
- Exploiting and fixing security vulnerabilities in an old version of eClass ★10 · Updated 4 years ago
- LSH/Hypercube kNN and KMeans++ Clustering on polygonal curves and time series ★15 · Updated 3 years ago
- Papers related to federated learning in top conferences (2020-2024). ★69 · Updated 8 months ago
- FL-Defender: Combating Targeted Attacks in Federated Learning ★1 · Updated 2 years ago
- This repository contains a PyTorch implementation of the paper "LFighter: Defending against Label-flipping Attacks in Federated Learning"… ★14 · Updated last year
- Implementation of calibration bounds for differential privacy in the shuffle model ★22 · Updated 4 years ago
- Eluding Secure Aggregation in Federated Learning via Model Inconsistency ★12 · Updated 2 years ago
- Federated Learning and Membership Inference Attacks experiments on CIFAR10 ★22 · Updated 5 years ago
- IEEE TIFS'20: VeriFL: Communication-Efficient and Fast Verifiable Aggregation for Federated Learning ★24 · Updated 2 years ago
- OLIVE: Oblivious and Differentially Private Federated Learning on TEE ★16 · Updated 2 years ago
- Pancake sorting is the problem of sorting a disordered stack of pancakes in order of size when a spatula can be inserted at any point i… ★11 · Updated 6 years ago
- ★11 · Updated 11 months ago
- ★14 · Updated last year
- ★55 · Updated 2 years ago
- ★38 · Updated 4 years ago
- MiniJava to LLVM IR compiler ★15 · Updated 2 years ago
- The official code of the KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien… ★84 · Updated 2 years ago
- ★15 · Updated last year
- ★19 · Updated 2 years ago
- Amortized version of the differentially private SGD algorithm published in "Deep Learning with Differential Privacy" by Abadi et al. Enfo… ★41 · Updated last year
- Membership Inference Attack on Federated Learning ★12 · Updated 3 years ago
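Several of the repositories above target the label-flipping attack, in which a poisoning client relabels training samples of one class as another before local training. A minimal sketch of the attack itself, for orientation only (hypothetical helper, not code from any listed repository):

```python
# Minimal sketch of a label-flipping data poisoning attack
# (illustrative only; `flip_labels` is a hypothetical helper,
# not taken from any repository listed above).

def flip_labels(labels, source_class, target_class, flip_fraction=1.0):
    """Return a poisoned copy of `labels` where a fraction of the
    samples belonging to `source_class` are relabelled as
    `target_class`. A malicious federated client would train on
    these poisoned labels before submitting its model update."""
    poisoned = list(labels)
    source_indices = [i for i, y in enumerate(poisoned) if y == source_class]
    n_to_flip = int(len(source_indices) * flip_fraction)
    for i in source_indices[:n_to_flip]:
        poisoned[i] = target_class
    return poisoned

clean = [0, 1, 0, 2, 1, 0]
poisoned = flip_labels(clean, source_class=0, target_class=2)
print(poisoned)  # every 0-label flipped to 2 -> [2, 1, 2, 2, 1, 2]
```

Defenses such as those in the repositories above typically try to detect the resulting skew in client model updates rather than inspect raw labels, since the server never sees client data.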